A HoloArticle

Here is an update on my quest to stay on top of all things “holo:” HoloLamp and RealView “Live Holography.” While the two have really nothing to do with each other, both claim the “holo” label with varying degrees of legitimacy, and happened to pop up recently.

HoloLamp

At its core, HoloLamp is a projection mapping system somewhat similar to the AR Sandbox, i.e., a combination of a set of cameras scanning a projection surface and a viewer’s face, and a projector drawing a perspective-correct image, from the viewer’s point of view, onto said projection surface. The point of HoloLamp is to project images of virtual 3D objects onto arbitrary surfaces, to achieve effects like the Millennium Falcon’s holographic chess board in Star Wars: A New Hope. Let’s see how it works, and how it falls short of this goal.

Creating convincing virtual three-dimensional objects via projection is a core technology of virtual reality, specifically the technology that is driving CAVEs and other screen-based VR displays. To create this illusion, a display system needs to know two things: the exact position of the projection surface in 3D space, and the position of the viewer’s eyes in the same 3D space. Together, these two provide just the information needed to set up the correct perspective projection. In CAVEs et al., the position of the screen(s) is fixed and precisely measured during installation, and the viewer’s eye positions are provided via real-time head tracking.
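
This off-axis projection setup can be written down concisely. Below is a minimal numpy sketch following Kooima’s well-known “generalized perspective projection” formulation; the function names are mine, and a real system would add stereo (one matrix per eye) and feed in live head-tracking data:

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """Standard OpenGL-style asymmetric viewing frustum."""
    return np.array([
        [2*n/(r-l), 0.0,       (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b), (t+b)/(t-b),  0.0],
        [0.0,       0.0,      -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,      -1.0,          0.0]])

def screen_projection(pa, pb, pc, pe, near, far):
    """Perspective projection for a fixed screen rectangle, given three of
    its corners (pa = lower left, pb = lower right, pc = upper left) and
    the viewer's eye position pe, all in the same 3D space."""
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
    va, vb, vc = pa - pe, pb - pe, pc - pe            # eye-to-corner vectors
    d = -np.dot(va, vn)                               # eye-screen distance
    l = np.dot(vr, va) * near / d                     # frustum extents,
    r = np.dot(vr, vb) * near / d                     # measured at the
    b = np.dot(vu, va) * near / d                     # near plane
    t = np.dot(vu, vc) * near / d
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = vr, vu, vn         # rotate into screen basis
    T = np.eye(4)
    T[:3, 3] = -pe                                    # move the eye to the origin
    return frustum(l, r, b, t, near, far) @ M @ T
```

Recomputing this matrix every frame from the tracked eye position is all it takes to keep virtual objects glued to their correct places in space; by construction, the screen rectangle itself always maps to the full viewport, no matter where the eye is.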

As one goal of HoloLamp is portability, it cannot rely on pre-installation and manual calibration. Instead, HoloLamp scans and creates a 3D model of the projection surface when turned on (or asked to do so, I guess). It does this by projecting a sequence of patterns, and observing the perspective distortion of those patterns with a camera looking in the projection direction. This is a solid and well-known technology called structured-light 3D scanning, and it can be seen in action at the beginning of this HoloLamp video clip. To extract eye positions, HoloLamp uses an additional set of cameras looking upwards to identify and track the viewer’s face, probably using off-the-shelf face detection algorithms such as the Viola-Jones detector. Based on that, the software can project 3D objects using one or more projection matrices, depending on whether the projection surface is planar or not. The result looks very convincing when shot through a regular video camera:
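
The structured-light idea itself is easy to illustrate with a toy example: project a sequence of binary Gray-code stripe patterns, and each camera pixel can recover which projector column it is looking at by stacking up the bits it observed over the sequence. Gray codes are a common pattern choice for such scanners, though not necessarily what HoloLamp projects; the sketch below skips the camera entirely and just encodes and decodes the column indices:

```python
import numpy as np

def graycode_patterns(width, n_bits):
    """One binary stripe image per bit: pattern b tells whether each
    projector column has bit b set in its Gray code."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                 # binary -> Gray code
    return [(gray >> b) & 1 for b in range(n_bits)]

def decode_graycode(captured):
    """Recover each pixel's projector column from the observed bits."""
    gray = np.zeros_like(captured[0])
    for b, img in enumerate(captured):        # reassemble the Gray code
        gray |= img << b
    binary = gray.copy()                      # Gray code -> binary
    mask = binary >> 1
    while mask.any():
        binary ^= mask
        mask >>= 1
    return binary
```

In a real scanner, the decoded projector column at each camera pixel, combined with the calibrated camera-projector geometry, yields a depth value by triangulation.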

Continue reading

Lighthouse tracking examined

To my surprise and delight, I recently found out that Valve has been releasing Linux versions of most of their SteamVR/OpenVR run-time/SDK for a while now (OpenVR just hit version 1.0.0, go get it while it’s fresh). This is great news: it will allow me to port Vrui and all Vrui applications to the Vive headset and its tracked controllers in one fell swoop.

But before diving into developing a Lighthouse tracking driver plug-in for Vrui’s input device abstraction layer, I decided to cobble together a small testing utility to get a feel for OpenVR’s internal driver interface, and for the Lighthouse tracking system’s overall tracking quality.

Figure 1: The Lighthouse 6-DOF tracking system, disassembled (source).

Continue reading

Oculus Rift DK2’s tracking update rate

I’ve been involved in some arguments about the inner workings of the Oculus Rift’s and HTC/Valve Vive’s tracking systems recently, and while I don’t want to get into any of that right now, I just did a little experiment.

The tracking update rate of the Oculus Rift DK2, meaning the rate at which Oculus’ tracking driver sends different position/orientation estimates to VR applications, is 1000 Hz. However, the updates do not arrive evenly spaced 1 ms apart: the driver updates the position/orientation, and then updates it again almost immediately afterwards, 500 times per second. In other words, the updates arrive in pairs, with roughly 2 ms between pairs.

This is not surprising at all, given my earlier observation that the DK2 samples its internal IMU at a rate of 1000 Hz, and sends data packets containing 2 IMU samples each to the host at a rate of 500 Hz. The tracking driver is then kind enough to process these samples individually, and pass updated tracking data to applications after it’s done processing each one. That second part is maybe a bit superfluous, but I’ll take it.

Here is a (very short excerpt of a) dump from the test application I wrote:

0.00199484: -0.0697729, -0.109664, -0.458555
6.645e-06 : -0.0698003, -0.110708, -0.458532
0.00199313: -0.069828 , -0.111758, -0.45851
6.012e-06 : -0.0698561, -0.112813, -0.458488
0.00200075: -0.0698847, -0.113875, -0.458466
6.649e-06 : -0.0699138, -0.114943, -0.458445
0.0019885 : -0.0699434, -0.116022, -0.458427
5.915e-06 : -0.0699734, -0.117106, -0.45841
0.0020142 : -0.070004 , -0.118196, -0.458393
5.791e-06 : -0.0700351, -0.119291, -0.458377
0.00199589: -0.0700668, -0.120392, -0.458361
6.719e-06 : -0.070099 , -0.121499, -0.458345
0.00197487: -0.0701317, -0.12261 , -0.45833
6.13e-06  : -0.0701651, -0.123727, -0.458314
0.00301248: -0.0701991, -0.124849, -0.458299
5.956e-06 : -0.0702338, -0.125975, -0.458284
0.00099399: -0.0702693, -0.127107, -0.458269
5.971e-06 : -0.0703054, -0.128243, -0.458253
0.0019938 : -0.0703423, -0.129384, -0.458238
5.938e-06 : -0.0703799, -0.130529, -0.458223
0.00200243: -0.0704184, -0.131679, -0.458207
7.434e-06 : -0.0704576, -0.132833, -0.458191
0.0019831 : -0.0704966, -0.133994, -0.458179
5.957e-06 : -0.0705364, -0.135159, -0.458166
0.00199577: -0.0705771, -0.136328, -0.458154
5.974e-06 : -0.0706185, -0.137501, -0.458141

The first column is the time interval between each row and the previous row, in seconds. The second to fourth columns are the reported (x, y, z) position of the headset.
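
As a quick sanity check, the alternating interval pattern can be verified with a few lines of Python; the snippet below hard-codes the first four rows of the dump above:

```python
# First four rows of the dump above: "interval: x, y, z" per line.
dump = """\
0.00199484: -0.0697729, -0.109664, -0.458555
6.645e-06 : -0.0698003, -0.110708, -0.458532
0.00199313: -0.069828 , -0.111758, -0.45851
6.012e-06 : -0.0698561, -0.112813, -0.458488
"""
intervals = [float(line.split(':')[0]) for line in dump.splitlines()]

# Each ~2 ms + ~6 us pair corresponds to one 500 Hz USB packet
# carrying two IMU samples, i.e., two tracking updates.
pair_period = intervals[0] + intervals[1]
print(1.0 / pair_period)   # packet rate, ~500 Hz
print(2.0 / pair_period)   # effective update rate, ~1000 Hz
```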

I hope this puts to rest the myth that the DK2 only updates its tracking data when it receives a new frame from the tracking camera, i.e., 60 times per second, and confirms that the DK2’s tracking is based on dead reckoning with drift correction. Now, while it is possible that the commercial version of the Rift does things differently, I don’t see a reason why it should.

PS: If you look closely, you’ll notice an outlier in rows 15 and 17: the first interval is 3ms, and the second interval is only 1ms. One sample missed the 1000 Hz sample clock, and was delivered on the next update.
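
For readers wondering what “dead reckoning with drift correction” looks like in practice, here is a toy 1-D sketch: integrate a deliberately biased accelerometer at 1000 Hz, and whenever a simulated 60 Hz camera fix arrives, pull the estimate back toward it. This only illustrates the principle; Oculus’ actual sensor-fusion filter is more sophisticated, and all the numbers below are made up:

```python
import numpy as np

dt, steps = 0.001, 2000                  # 1000 Hz IMU clock, 2 s of motion
bias = 0.02                              # accelerometer bias (m/s^2)
kp, kv = 0.2, 2.0                        # correction gains per camera frame

def true_pos(t): return 0.1 * np.sin(t)  # ground-truth motion (m)
def true_acc(t): return -0.1 * np.sin(t) # its second derivative

pos = vel = 0.0
for i in range(steps):
    t = i * dt
    vel += (true_acc(t) + bias) * dt     # dead reckoning: integrate the
    pos += vel * dt                      # IMU twice, at 1000 Hz
    if i % 16 == 0:                      # a camera fix arrives at ~60 Hz
        err = true_pos(t) - pos
        pos += kp * err                  # pull the estimate toward the camera
        vel += kv * err                  # and bleed the error into velocity
```

Without the camera correction, the accelerometer bias would make the position estimate drift away quadratically; with it, the error stays small and bounded while the application still enjoys 1000 Hz update granularity.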

On the road for VR: Silicon Valley Virtual Reality Conference & Expo

Yesterday, I attended the second annual Silicon Valley Virtual Reality Conference & Expo in San Jose’s convention center. This year’s event was more than three times bigger than last year’s, with around 1,400 attendees and a large number of exhibitors.

Unfortunately, I did not have as much time as I would have liked to visit and try all the exhibits. There was a printing problem at the registration desk in the morning, and as a result the keynote and first panel were pushed back by 45 minutes, overlapping the expo time; additionally, I had to spend some time preparing for and participating in my own panel on “VR Input” from 3pm-4pm.

The panel was great: we had Richard Marks from Sony (Playstation Move, Project Morpheus), Danny Woodall from Sixense (STEM), Yasser Malaika from Valve (HTC Vive, Lighthouse), Tristan Dai from Noitom (Perception Neuron), and Jason Jerald as moderator. There was lively discussion of questions posed by Jason and the audience. Here’s a recording of the entire panel:

One correction: when I said I had been following Tactical Haptics’ progress for 2.5 years, I meant to say 1.5 years, since the first SVVR meet-up I attended. Brainfart. Continue reading

On the road for VR: Microsoft HoloLens at Build 2015, San Francisco

I have briefly mentioned HoloLens, Microsoft’s upcoming see-through Augmented Reality headset, in a previous post, but today I got the chance to try it for myself at Microsoft’s “Build 2015” developers’ conference. Before we get into the nitty-gritty, a disclosure: Microsoft invited me to attend Build 2015, meaning they waived my registration fee, and they gave me, like all other attendees, a free HP Spectre x360 notebook (on which I’m typing right now because my vintage 2008 MacBook Pro finally kicked the bucket). On the downside, I had to take Amtrak and BART to downtown San Francisco twice, because I wasn’t able to get a one-on-one demo slot on the first day, and got today’s 10am slot after some finagling and calling in of favors. I guess that makes us even. 😛

So, on to the big question: is HoloLens real? Given Microsoft’s track record with product announcements (see 2009’s Project Natal trailer and especially the infamous Milo “demo”), there was some well-deserved skepticism regarding the HoloLens teaser released in January, and even the on-stage demo that was part of the Build 2015 keynote:

The short answer is: yes, it’s real, but… Continue reading

What is holographic, and what isn’t?

Microsoft just announced HoloLens, which “brings high-definition holograms to life in your world.” A little while ago, Google invested heavily in Magic Leap, who, in their own words, “bring magic back into the world.” A bit longer ago, CastAR promised “a magical experience of a 3D, holographic world.” Earlier than that, zSpace started selling displays they used to call “virtual holographic 3D.” Then there is the current trailblazer in mainstream virtual reality, the Oculus Rift, and other, older, VR systems such as CAVEs.

Figure 1: A real person next to two “holograms,” in a CAVE holographic display.

While these things are quite different from a technical point of view, from a user’s point of view, they have a large number of things in common. Wouldn’t it be nice to have a short, handy term that covers them all, has a well-matching connotation in the minds of the “person on the street,” and distinguishes these things from other things that might be similar technically, but have a very different user experience?

How about the term “holographic?” Continue reading

On the road for VR: Oculus Connect, Hollywood

After some initial uncertainty, and accidentally raising a stink on reddit, I did manage to attend Oculus Connect last weekend after all. I guess this is what a birthday bash looks like when the feted is backed by Facebook and gets to invite 1200 of his closest friends… and yours truly! It was nice to run into old acquaintances, meet new VR geeks, and it is still an extremely weird feeling to be approached by people who introduce themselves as “fans.” There were talks and panels, but I skipped most of those to take in demos and mingle instead; after all, I can watch a talk on YouTube from home just fine. Oh, and there was also new mobile VR hardware to check out, and a big surprise. Let’s talk VR hardware. Continue reading

An Eye-tracked Oculus Rift

I have talked many times about the importance of eye tracking for head-mounted displays, but so far, eye tracking has been limited to the very high end of the HMD spectrum. Not anymore. SensoMotoric Instruments, a company with around 20 years of experience in vision-based eye tracking hardware and software, unveiled a prototype integrating the camera-based eye tracker from their existing eye tracking glasses with an off-the-shelf Oculus Rift DK1 HMD (see Figure 1). Fortunately for me, SMI were showing their eye-tracked Rift at the 2014 Augmented World Expo, and offered to bring it up to my lab to let me have a look at it.

Figure 1: SMI’s after-market modified Oculus Rift with one 3D eye tracking camera per eye. The current tracking cameras need square cut-outs at the bottom edge of each lens to provide an unobstructed view of the user’s eyes; future versions will not require such extensive modifications.

Continue reading

On the road for VR: Silicon Valley Virtual Reality Conference & Expo

I just got back from the Silicon Valley Virtual Reality Conference & Expo in the awesome Computer History Museum in Mountain View, just across the street from Google HQ. There were talks, there were round tables, there were panels (I was on a panel on non-game applications enabled by consumer VR, livestream archive here), but most importantly, there was an expo for consumer VR hardware and software. Without further ado, here are my early reports on what I saw and/or tried.

Figure 1: Main auditorium during the “60 second” lightning pitches.

Continue reading

3D Video Capture with Three Kinects

I just moved all my Kinects back to my lab after my foray into experimental mixed-reality theater a week ago, and just rebuilt my 3D video capture space / tele-presence site consisting of an Oculus Rift head-mounted display and three Kinects. Now that I have a new extrinsic calibration procedure to align multiple Kinects to each other (more on that soon), and managed to finally get a really nice alignment, I figured it was time to record a short video showing what multi-camera 3D video looks like using current-generation technology (no, I don’t have any Kinects Mark II yet). See Figure 1 for a still from the video, and the whole thing after the jump.
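
More on the calibration procedure itself in that upcoming post, but at its core, any multi-camera extrinsic alignment has to solve the same subproblem: given matched 3D points seen by two cameras, find the rigid transformation between them. A standard closed-form solution is the Kabsch/Procrustes method, sketched below as a generic building block (not necessarily my exact procedure):

```python
import numpy as np

def rigid_align(src, dst):
    """Find rotation R and translation t minimizing ||R @ p + t - q|| over
    corresponding 3D point pairs (p, q) (Kabsch / orthogonal Procrustes).
    src, dst: (N, 3) arrays of matched points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    H = (src - cs).T @ (dst - cd)                 # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With pairwise transformations like this in hand, all cameras can be expressed in one shared coordinate frame, which is what makes merging the three Kinects’ facets into a single 3D video stream possible.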

Figure 1: A still frame from the video, showing the user’s real-time “holographic” avatar from the outside, providing a literal kind of out-of-body experience to the user.

Continue reading