More on Desktop Embedding via VNC

I started regretting uploading my “Embedding 2D Desktops into VR” video, and the post describing it, pretty much right after I did it, because there was such an obvious thing to do, and I didn’t think of it.

Figure 1: Screenshot from the video, showing VR ProtoShop running simultaneously in a 3D environment created by an Oculus Rift and a Razer Hydra, and in a 2D environment using mouse and keyboard, the latter brought into the 3D environment via the VNC remote desktop protocol.

In the old video, I pointed out that any desktop application can be run through the VNC client. Clearly, then, the right thing to do would have been to run the same 3D application that was already running in the Rift in desktop mode as well, and bring that desktop session back into the 3D environment through VNC, for that extra Inception feel. So to correct that mistake, here is take two:

I’m using VR ProtoShop as the example application here because it never gets much of the spotlight, and it’s another great example of the kind of interactive manipulation that is possible with 6-DOF input devices like the Razer Hydra (my go-to application for that, of course, is the Nanotech Construction Kit).

In the “main environment,” I’m using two 6-DOF controllers to move the protein model as a whole, and to select and drag individual components (alpha helices, beta strands, or amorphous coil regions). At the same time, through VNC, I’m showing Vrui’s desktop user interface, which uses a mouse with a fairly standard virtual trackball to move the protein model as a whole, and 2D interactions on a 3D “drag box” to move protein parts.

Unlike using 6-DOF devices in the 3D world, dragging the drag box with a mouse is rather tedious. One can move the box in a plane by picking any face of the box, rotate it around one of the three main axes by picking any of the twelve edges, or rotate it freely around the box’s pivot by picking one of the corners (although the virtual trackball supporting that last mode is somewhat busted, so it ends up never being used).
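For readers who have never implemented one: below is a rough sketch of how a standard virtual trackball of the kind used in Vrui’s desktop mode maps a 2D mouse drag to a 3D rotation. This is the generic textbook technique (project both drag endpoints onto a sphere blended into a hyperbolic sheet, then rotate one projected point onto the other); it is not VR ProtoShop’s or Vrui’s actual code, and all names in it are illustrative.

    // Minimal virtual-trackball sketch (illustrative only, not VR ProtoShop's
    // or Vrui's actual implementation). Maps a 2D mouse drag to a 3D rotation
    // by projecting window coordinates onto a unit sphere.

    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;

    // Project a normalized window coordinate (x, y in [-1, 1]) onto the
    // trackball surface: a unit sphere blended into a hyperbolic sheet near
    // the edges, so drags outside the sphere still behave sensibly.
    Vec3 projectToSphere(double x, double y)
    {
        double d2 = x * x + y * y;
        double z = d2 <= 0.5 ? std::sqrt(1.0 - d2)   // on the sphere
                             : 0.5 / std::sqrt(d2);  // on the hyperbola
        return {x, y, z};
    }

    // Compute the rotation (axis and angle) that carries the drag start point
    // to the drag end point; the caller accumulates it into the model's
    // current orientation.
    void trackballRotation(double x0, double y0, double x1, double y1,
                           Vec3& axis, double& angle)
    {
        Vec3 p0 = projectToSphere(x0, y0);
        Vec3 p1 = projectToSphere(x1, y1);

        // Rotation axis is the cross product of the two projected points.
        axis = {p0[1] * p1[2] - p0[2] * p1[1],
                p0[2] * p1[0] - p0[0] * p1[2],
                p0[0] * p1[1] - p0[1] * p1[0]};

        // Rotation angle from the normalized dot product, clamped for safety.
        double dot = p0[0] * p1[0] + p0[1] * p1[1] + p0[2] * p1[2];
        double l0 = std::sqrt(p0[0] * p0[0] + p0[1] * p0[1] + p0[2] * p0[2]);
        double l1 = std::sqrt(p1[0] * p1[0] + p1[1] * p1[1] + p1[2] * p1[2]);
        double c = dot / (l0 * l1);
        if (c > 1.0) c = 1.0;
        if (c < -1.0) c = -1.0;
        angle = std::acos(c);
    }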

But still, the 2D user interface works. Well enough, in fact, that researchers from Lawrence Berkeley National Laboratory and UC Berkeley used it to create hundreds (thousands?) of candidate protein structures for subsequent automatic optimization on parallel supercomputers for protein structure prediction competitions starting with CASP5 (2002).

And here’s the kicker: there is absolutely zero code in VR ProtoShop that depends on input devices. There are no code paths “if you have a mouse, do this… if you have a Hydra, do this… etc.” All of that stuff is handled completely transparently, at the toolkit level. That’s the beauty of developing on Vrui.
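To make that claim a bit more concrete, here is a hedged sketch of the kind of interface that makes it possible. The class names (Transform, DraggingEvent, DraggingToolAdapter, ProteinDragger) are hypothetical stand-ins, not Vrui’s real API; the point is only that the application receives a finished 6-DOF transformation from the toolkit and never asks which device produced it.

    // Hedged sketch of the toolkit-level separation described above. The
    // class names are hypothetical stand-ins, not Vrui's real API; the idea
    // is that the application sees a finished 6-DOF transformation and never
    // asks what kind of device produced it.

    #include <functional>

    // A rigid-body transformation (position + orientation), however the
    // toolkit chose to assemble it: directly from a 6-DOF tracker, or
    // synthesized from mouse + virtual trackball + drag-box interactions.
    struct Transform
    {
        double translation[3];
        double rotation[4]; // quaternion
    };

    struct DraggingEvent
    {
        Transform current;   // current dragger pose in model space
        Transform increment; // change since the previous event
    };

    // The only contract between toolkit and application: three callbacks.
    class DraggingToolAdapter
    {
    public:
        std::function<void(const DraggingEvent&)> dragStart;
        std::function<void(const DraggingEvent&)> drag;
        std::function<void(const DraggingEvent&)> dragEnd;
    };

    // Application side: VR ProtoShop-style logic would only apply the
    // incoming transformation to the picked structure element. No branch
    // anywhere asks "is this a mouse or a Hydra?".
    class ProteinDragger
    {
    public:
        explicit ProteinDragger(DraggingToolAdapter& tool)
        {
            tool.dragStart = [this](const DraggingEvent& e) { pick(e.current); };
            tool.drag      = [this](const DraggingEvent& e) { move(e.increment); };
            tool.dragEnd   = [](const DraggingEvent&) { /* commit the edit */ };
        }

    private:
        void pick(const Transform&) { /* find helix/strand/coil under dragger */ }
        void move(const Transform&) { /* apply incremental transform to it */ }
    };

On the desktop, the toolkit synthesizes that transformation from mouse motion, the virtual trackball, and the drag box; in VR it comes straight from the Hydra’s tracker. The application-side code is identical either way.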

4 thoughts on “More on Desktop Embedding via VNC”

    • I don’t know FoldIt very well, so this is conjecture. They tried to make the 2D interface as slick as possible, and that makes me think that interface aspects probably permeate through all levels of the software. That would make it hard to provide a proper 3D embedding. For example, in VR ProtoShop the step from desktop to VR automatically enables 6-DOF interaction, because VR ProtoShop leaves interaction to Vrui. But if FoldIt does interactions itself (as I assume), then a VR embedding would have to use 6-DOF devices to simulate a 2D mouse to feed into FoldIt, which then internally translates back from 2D to 3D. That approach usually works very poorly.

      A proper port would have to disentangle 3D interaction code from basic program logic and simulation, and that might be arbitrarily hard.

    • I need to write a larger article about this, but one interesting thing we’ve found is that it’s a lot easier to port VR software to the desktop than desktop software to VR, and the results work much better. VR software on the desktop looks and feels almost exactly like native desktop software, but the other way around it always feels like pounding a square peg into a round hole.

