I’ve recently received an Oculus Rift Development Kit Mk. II, and since I’m on Linux, there is no official SDK for me and I’m pretty much out there on my own. But that’s OK; it’s given me a chance to experiment with the DK2 as a black box, investigate some ways in which I could support it in my VR toolkit under Linux, and improve Vrui’s user experience while I’m at it. I also managed to score a genuine Oculus VR Latency Tester, and ran a set of experiments with interesting results. If you just want to see those results, skip to the end.
The Woes of Windows
If you’ve been paying attention to the Oculus subreddit since the first DK2s were delivered to developers/enthusiasts, there is a consensus that the user experience of the DK2 and the SDK that drives it could be somewhat improved. Granted, it’s a developer’s kit and not a consumer product, but even developers seem to be spending more time getting the DK2 to run smoothly, or run at all, than actually developing for it (or at least that’s the impression I get from the communal bellyaching).
I have talked many times about the importance of eye tracking for head-mounted displays, but so far, eye tracking has been limited to the very high end of the HMD spectrum. Not anymore. SensoMotoric Instruments, a company with around 20 years of experience in vision-based eye tracking hardware and software, unveiled a prototype integrating the camera-based eye tracker from their existing eye tracking glasses with an off-the-shelf Oculus Rift DK1 HMD (see Figure 1). Fortunately for me, SMI were showing their eye-tracked Rift at the 2014 Augmented World Expo, and offered to bring it up to my lab to let me have a look at it.
Figure 1: SMI’s after-market modified Oculus Rift with one 3D eye tracking camera per eye. The current tracking cameras need square cut-outs at the bottom edge of each lens to provide an unobstructed view of the user’s eyes; future versions will not require such extensive modifications.
I just found an article about my 3D Video Capture with Three Kinects video on Discovery News (which is great!), but then I found Figure 1 in the “Related Gallery.” Oh, and they also had a link to another article titled “Virtual Reality Sex Game Set To Stimulate” right in the middle of my article, but you learn to take that kind of thing in stride.
Figure 1: Image in the “related gallery” on Discovery News. Original caption: “Apple has filed a patent for a holographic phone, a concept that sounds absolutely cool. We can’t wait. But what would it look like? A video created by animator Mike Ko, who has made animations for Google, Nike, Toyota, and NASCAR, gives us an idea. Check it out here”
Nope. Nope nope nope no. Where do I start? No, Apple has not filed a patent for a holographic phone. And even if Apple had, this is not what it would look like. I don’t want to rag on Mike Ko, the animator who created the concept video (watch it here, it’s beautiful). It’s just that this is not how holograms work. See Figure 2 for a very crude Photoshop (well, Gimp) job on what this would look like if such holographic screens really existed, and Figure 4 for an even cruder job of what the thing Apple actually patented would look like, if they were audacious enough to put it into an iPhone.
I just moved all my Kinects back to my lab after my foray into experimental mixed-reality theater a week ago, and just rebuilt my 3D video capture space / tele-presence site consisting of an Oculus Rift head-mounted display and three Kinects. Now that I have a new extrinsic calibration procedure to align multiple Kinects to each other (more on that soon), and managed to finally get a really nice alignment, I figured it was time to record a short video showing what multi-camera 3D video looks like using current-generation technology (no, I don’t have any Kinects Mark II yet). See Figure 1 for a still from the video, and the whole thing after the jump.
Figure 1: A still frame from the video, showing the user’s real-time “holographic” avatar from the outside, providing a literal kind of out-of-body experience to the user.
Using Vrui, implementing this was a piece of cake. Instead of modifying the existing Quikwrite tool, I created a new transformation tool that converts a two-axis analog joystick, e.g., a thumbstick on a game controller, into a virtual 6-DOF input device moving inside a flat square. Then, when binding the unmodified Quikwrite tool to that virtual input device, exactly what you would expect happens: the directions of the thumbstick translate 1:1 to the character selection regions of the Quikwrite square. I’m expecting that this new transformation tool will come in handy for other applications in the future, so that’s another benefit.
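To make the mapping concrete, here is a minimal sketch of the core idea, not Vrui’s actual transformation tool API: the two joystick axes, each in [-1, 1], are clamped and scaled to the half-extent of the square, and the resulting offset is applied to a fixed pose at the square’s center. All names and types below are made up for illustration.

// A minimal sketch (not Vrui's actual API): map two analog joystick axes,
// each in [-1, 1], to the pose of a virtual 6-DOF device that can only move
// inside a flat square of half-extent halfSize around squareCenter.
#include <algorithm>

struct Vector3 {
    double x, y, z;
};

// Only the translational part matters here; the virtual device's orientation
// stays locked to the square's plane, so it is omitted for brevity.
struct Pose {
    Vector3 position;
};

Pose mapJoystickToSquare(double axisX, double axisY, const Pose& squareCenter, double halfSize) {
    // Clamp the raw axis values to the valid range:
    axisX = std::min(std::max(axisX, -1.0), 1.0);
    axisY = std::min(std::max(axisY, -1.0), 1.0);

    // Offset the virtual device from the square's center, within the square's plane:
    Pose result = squareCenter;
    result.position.x += axisX * halfSize;
    result.position.y += axisY * halfSize;
    return result;
}

The point of funneling the result through a virtual device, rather than feeding the thumbstick to Quikwrite directly, is that the unmodified Quikwrite tool never needs to know anything about thumbsticks; it just sees a 6-DOF device moving inside a square.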
Text entry in virtual environments is one of those old problems that never seem to get solved. The core issue, of course, is that users in VR either don’t have keyboards (because they are in a CAVE, say), or can’t effectively use the keyboard they do have (because they are wearing an HMD that obstructs their vision). To the latter point: I consider myself a decent touch typist (my main keyboard doesn’t even have key labels), but the moment I put on an HMD, that goes out the window. There’s an interesting research question right there — do typists need to see their keyboards in their peripheral vision to use them, even when they never look at them directly? — but that’s a topic for another post.
Until speech recognition becomes powerful and reliable enough to use as an exclusive text entry method (and even then, imagining having to dictate “for(int i=0;i<numEntries&&entries[i].key!=searchKey;++i)” already gives me a headache), and until brain/computer interfaces are developed and we can plug our computers directly into our heads, we’re stuck with other approaches.
Unsurprisingly, the go-to method for developers who don’t want to write a research paper on text entry, but just need text entry in their VR applications right now, and don’t have good middleware to back them up, is a virtual 3D QWERTY keyboard controlled by a 2D or 3D input device (see Figure 1). It’s familiar, straightforward to implement, and it can even be used to enter text.
Figure 1: Guilty as charged — a virtual keyboard in the Vrui toolkit, implemented as a GLMotif pop-up window with rows and columns of buttons.
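This is not the GLMotif code behind Figure 1, but a rough, self-contained sketch of what such a virtual keyboard boils down to: laying out key quads in staggered rows, and appending the picked key’s character to the entered text when the input device selects one. All names and parameters here are hypothetical.

// Illustrative sketch only; not GLMotif. Lays out QWERTY key labels in
// staggered rows of square keys and appends the picked key's character.
#include <string>
#include <vector>

struct Key {
    char label;   // Character produced by this key
    float x, y;   // Lower-left corner of the key's quad in window coordinates
    float size;   // Edge length of the (square) key
};

std::vector<Key> buildQwertyLayout(float keySize, float gap) {
    const char* rows[3] = { "QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM" };
    std::vector<Key> keys;
    for (int row = 0; row < 3; ++row) {
        float y = -float(row) * (keySize + gap);
        float x = float(row) * keySize * 0.5f;  // Stagger rows like a physical keyboard
        for (const char* c = rows[row]; *c != '\0'; ++c, x += keySize + gap)
            keys.push_back(Key{ *c, x, y, keySize });
    }
    return keys;
}

// Called when the 2D or 3D input device's pointer selects position (px, py):
void handleKeyPick(const std::vector<Key>& keys, float px, float py, std::string& text) {
    for (const Key& k : keys)
        if (px >= k.x && px <= k.x + k.size && py >= k.y && py <= k.y + k.size) {
            text.push_back(k.label);
            break;
        }
}

Whether the buttons live in a 2D pop-up window or float as quads in 3D space, the logic is the same; that familiarity is exactly why this remains the default approach, warts and all.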
But that aside, what’s in the patent? The main figure in the application (see Figure 1) should already clue you in, if you read my pair of posts about the thankfully failed Holovision Kickstarter project. It’s a volumetric display of some unspecified sort (maybe a non-linear crystal? Or, if that fails, a rotating 2D display? Or “other 3D display technology?” Sure, why be specific! It’s only a patent! I suggest adding “holomatter” or “mass effect field” to the list, just to be sure.), placed inside a double parabolic mirror to create a real image of the volumetric display floating in air above the display assembly. Or, in other words, Project Vermeer. Now, I’m not a patent lawyer, but how Apple continues to file patents on the patently trivial (rounded corners, anyone?), or on the exact same thing that was shown by Microsoft in 2011, about a year before Apple’s patent was filed, is beyond me.
Figure 1: Main image from Apple’s patent application, showing the unspecified 3D image source (24) located inside the double-parabolic mirror, and the real 3D image of same (32) floating above the mirror. There is also some unspecified optical sensor (16) that may or may not let the user interact with the real 3D image in some unspecified way.