Remote Collaborative Immersive Visualization

I spent the last couple of days at the first annual meeting of “The Higher Education Campus Alliance for Advanced Visualization” (THE CAAV), where folks managing or affiliated with advanced visualization centers such as KeckCAVES came together to share their experiences. During the talks, I saw slides showing Vrui’s Collaboration Infrastructure pop up here and there, and remote collaboration in general was a big topic of discussion. During breaks, I showed several people the following video on my smartphone (yes, I finally joined the 21st century), and afterwards realized that I had never written a post about this work, as most of it predates this blog. So here we go.

Continue reading

Optical Properties of Current VR HMDs

With the first commercial version of the Oculus Rift (Rift CV1) now trickling out of warehouses, and the Rift DK2, HTC Vive DK1, and Vive Pre already in developers’ hands, it’s time for a more detailed comparison of these head-mounted displays (HMDs). In this article, I will look at these HMDs’ lenses and optics in the most objective way I can, using a calibrated fisheye camera (see Figures 1, 2, and 3).

Figure 1: Picture from a fisheye camera, showing a checkerboard calibration target displayed on a 30″ LCD monitor.

Figure 2: Same picture as Figure 1, after rectification. The purple lines were drawn into the picture by hand to show the picture’s linearity after rectification.
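For readers who want to experiment with this kind of rectification themselves, here is a minimal sketch using OpenCV’s fisheye module. The intrinsic matrix K and distortion coefficients D are placeholder values standing in for a real calibration (for example, one obtained from cv2.fisheye.calibrate on checkerboard pictures), and this is not necessarily the camera model or toolchain behind the figures above.

```python
# Minimal sketch: rectify a fisheye photograph with OpenCV's fisheye model.
# K and D below are made-up placeholder values; a real calibration would
# produce them, e.g. via cv2.fisheye.calibrate on checkerboard images.
import cv2
import numpy as np

K = np.array([[560.0,   0.0, 960.0],
              [  0.0, 560.0, 540.0],
              [  0.0,   0.0,   1.0]])       # placeholder intrinsics (pixels)
D = np.array([-0.02, 0.005, -0.001, 0.0])   # placeholder distortion coefficients

distorted = cv2.imread("checkerboard_fisheye.jpg")   # hypothetical input image
rectified = cv2.fisheye.undistortImage(distorted, K, D, Knew=K)
cv2.imwrite("checkerboard_rectified.jpg", rectified)
```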

Figure 3: Rectified picture from Figure 2, re-projected into stereographic projection to simplify measuring angles. Concentric purple circles indicate 5-degree increments away from the projection center point.
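The convenient property of the stereographic projection is that a viewing angle θ away from the projection center maps to an image radius of r = 2 f tan(θ/2), so angles can be read off directly from measured pixel radii. The small sketch below spells that out; the focal length f (in pixels) is an assumed value, not one measured from the camera used here.

```python
# Conversions between viewing angle and image radius under a stereographic
# projection, r = 2*f*tan(theta/2).  The focal length in the example is an
# assumed value, not the calibrated one behind the figures above.
import math

def angle_to_radius(theta_deg, f_pixels):
    """Image radius (pixels) of a point theta_deg away from the projection center."""
    return 2.0 * f_pixels * math.tan(math.radians(theta_deg) / 2.0)

def radius_to_angle(r_pixels, f_pixels):
    """Viewing angle (degrees) corresponding to a measured image radius."""
    return math.degrees(2.0 * math.atan(r_pixels / (2.0 * f_pixels)))

# Radii of the 5-degree reference circles for an assumed f of 600 pixels:
for k in range(1, 10):
    print(k * 5, "deg ->", round(angle_to_radius(k * 5, 600.0), 1), "px")
```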

Continue reading

Lasers Are Not Magic

“Can I make a full-field-of-view AR or VR display by directly shining lasers into my eyes?”

No.

Well, technically, you can, but not in the way you probably imagine if you asked that question. What you can’t do is mount some tiny laser emitter somewhere out of view, have it shine a laser directly into your pupil, and expect to get a virtual image covering your entire field of view (see Figure 1). Light, and your eyes, don’t work that way.

Figure 1: A magical retinal display using a tiny laser emitter somewhere off to the side of each eye. This doesn’t work in reality. If a single beam of light entering the eye could be split up to illuminate large parts of the retina, real-world vision would not work.
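A rough back-of-envelope estimate (with illustrative numbers of my own, not taken from the article) shows why: rays from a single point emitter that actually make it through the pupil span only a narrow wedge of directions, roughly 2·atan(p/d) for a pupil of radius p and an emitter at distance d, and for an eye focused into the distance that wedge bounds how much of the visual field the emitter can cover.

```python
# Back-of-envelope estimate with illustrative numbers (not from the article):
# rays from a point emitter at distance d that pass through a pupil of radius p
# span an angular wedge of roughly 2*atan(p/d).  For an eye focused into the
# distance, incoming ray direction maps to retinal position, so that wedge
# bounds the part of the visual field a single fixed emitter can illuminate.
import math

def covered_field_deg(pupil_radius_mm, emitter_distance_mm):
    return math.degrees(2.0 * math.atan(pupil_radius_mm / emitter_distance_mm))

# A 4 mm pupil and an emitter 30 mm off to the side cover only about 7.6 degrees:
print(covered_field_deg(2.0, 30.0))
```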

Continue reading

Head-mounted Displays and Lenses

“It can’t be comfortable or healthy to stare at a screen a few inches in front of your eyes.”

The popularity of Google Cardboard, and the upcoming commercial releases of the Oculus Rift, HTC Vive, and other modern head-mounted displays (HMDs), have raised interest in virtual reality and VR devices in parts of the population who have never been exposed to, or had reason to care about, VR before. Together with the fact that VR, as a medium, is fundamentally different from other media it often gets lumped in with, such as 3D cinema or 3D TV, this leads to a number of common misunderstandings and frequently asked questions. Therefore, I am planning to write a series of articles addressing these questions one at a time.

First up: How is it possible to see anything on a screen that is only a few inches in front of one’s face?

Short answer: In HMDs, there are lenses between the screens (or screen halves) and the viewer’s eyes to solve exactly this problem. These lenses project the screens out to a distance where they can be viewed comfortably (for example, in the Oculus Rift CV1, the screens are rumored to be projected to a distance of two meters). This also means that, if you need glasses or contact lenses to clearly see objects several meters away, you will need to wear your glasses or lenses in VR.
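The underlying optics is the thin-lens equation, 1/f = 1/d_o + 1/d_i: place the screen just inside the lens’s focal length and you get a magnified virtual image far behind it. Here is a small worked sketch; the 40 mm focal length and the screen distance are illustrative assumptions, not actual specifications of any of the HMDs mentioned above.

```python
# Thin-lens sketch with illustrative numbers (not actual HMD specifications):
# 1/f = 1/d_o + 1/d_i.  With the screen just inside the focal length, the
# image distance comes out negative, i.e. a virtual image behind the screen.

def virtual_image_distance_mm(focal_length_mm, screen_distance_mm):
    """Distance of the virtual image from the lens; the screen must sit inside f."""
    return (screen_distance_mm * focal_length_mm) / (focal_length_mm - screen_distance_mm)

# A screen 39.2 mm behind a 40 mm lens appears roughly two meters away:
print(virtual_image_distance_mm(40.0, 39.2) / 1000.0)   # ~1.96 m
```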

Now for the long answer. Continue reading

On the road for VR: Redwood City, California

Last Friday I made a trek down to the San Francisco peninsula to visit and chat with a few other VR folks: Cyberith, SVVR, and AltspaceVR. In the process, I also had the chance to try a couple of VR devices I hadn’t seen before.

Cyberith Virtualizer

Virtual locomotion, and its nasty side effect, simulator sickness, are a persistent problem and a timely topic, with the arrival of consumer VR just around the corner. Many enthusiasts want to use VR to explore large virtual worlds, as in taking a stroll through the frozen tundra of Skyrim or the irradiated wasteland of Fallout, but as it turns out, that’s one of the hardest things to do right in VR.

Figure 1: Cyberith Virtualizer, driven by an experienced user (Tuncay Cakmak). Yes, you can jump and run, with some practice.

Continue reading

On the road for VR: Silicon Valley Virtual Reality Conference & Expo

Yesterday, I attended the second annual Silicon Valley Virtual Reality Conference & Expo in San Jose’s convention center. This year’s event was more than three times the size of last year’s, with around 1,400 attendees and a large number of exhibitors.

Unfortunately, I did not have as much time as I would have liked to visit and try all the exhibits. There was a printing problem at the registration desk in the morning, and as a result the keynote and first panel were pushed back by 45 minutes, overlapping the expo time; additionally, I had to spend some time preparing for and participating in my own panel on “VR Input” from 3pm-4pm.

The panel was great: we had Richard Marks from Sony (Playstation Move, Project Morpheus), Danny Woodall from Sixense (STEM), Yasser Malaika from Valve (HTC Vive, Lighthouse), Tristan Dai from Noitom (Perception Neuron), and Jason Jerald as moderator. There was lively discussion of questions posed by Jason and the audience. Here’s a recording of the entire panel:

One correction: when I said I had been following Tactical Haptics’ progress for 2.5 years, I meant to say 1.5 years, since the first SVVR meet-up I attended. Brainfart. Continue reading

Messing around with 3D video

We had a couple of visitors from Intel this morning who wanted to see how we use the CAVE to visualize and analyze Big Data™. But I also wanted to show them some aspects of our 3D video / remote collaboration / tele-presence work, and since I had just recently implemented a new multi-camera calibration procedure for depth cameras (more on that in a future post), and the alignment between the three Kinects in the IDAV VR lab’s capture space is now better than it has ever been (including my previous 3D Video Capture With Three Kinects video), I figured I’d try something I hadn’t done before, namely remotely interacting with myself (see Figure 1).

Figure 1: How to properly pat yourself on the back using time-delayed 3D video.
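Conceptually, the self-interaction trick is just a fixed playback delay on the captured 3D video stream: the system shows you your own capture from a few seconds ago, so your present self can reach out and touch your past self. The sketch below only illustrates the idea; it is not the actual Vrui implementation, and “frame” stands in for whatever a merged multi-Kinect capture produces.

```python
# Conceptual sketch (not the actual Vrui implementation): replay a captured
# 3D video stream with a fixed delay so the user can interact with their own
# past self.  A "frame" stands in for whatever the multi-Kinect capture
# produces per time step (e.g. a textured point cloud or mesh).
from collections import deque

class DelayedPlayback:
    def __init__(self, delay_seconds):
        self.delay = delay_seconds
        self.buffer = deque()               # (timestamp, frame) pairs, oldest first

    def push(self, timestamp, frame):
        self.buffer.append((timestamp, frame))

    def pull(self, now):
        """Return the newest frame that is at least `delay` seconds old, if any."""
        frame = None
        while self.buffer and self.buffer[0][0] <= now - self.delay:
            _, frame = self.buffer.popleft()
        return frame
```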

Continue reading

Fighting black smear

Now that I’ve gotten my Oculus Rift DK2 (mostly) working with Vrui under Linux, I’ve encountered the dreaded artifact often referred to as “black smear.” While pixels on OLED screens have very fast switching times — orders of magnitude faster than LCD pixels — they still can’t switch from on to off and back instantaneously. This leads to a problem that’s hardly visible when viewing a normal screen, but very visible in a head-mounted display due to a phenomenon called “vestibulo-ocular reflex.”

Basically, our eyes have built-in image stabilizers: if we move our head, this motion is detected by the vestibular apparatus in the inner ear (our “sense of equilibrium”), and our eyes automatically move the opposite way to keep our gaze fixed on a fixed point in space (interestingly, this even happens with the eyes closed, or in total darkness).

Figure 1: Black smear. It’s kinda like that.

Continue reading

3D Video Capture with Three Kinects

I just moved all my Kinects back to my lab after my foray into experimental mixed-reality theater a week ago, and rebuilt my 3D video capture space / tele-presence site consisting of an Oculus Rift head-mounted display and three Kinects. Now that I have a new extrinsic calibration procedure to align multiple Kinects to each other (more on that soon), and managed to finally get a really nice alignment, I figured it was time to record a short video showing what multi-camera 3D video looks like using current-generation technology (no, I don’t have any Kinects Mark II yet). See Figure 1 for a still from the video, and the whole thing after the jump.

Figure 1: A still frame from the video, showing the user’s real-time “holographic” avatar from the outside, providing a literal kind of out-of-body experience to the user.
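For a rough idea of what “aligning” multiple Kinects means in practice, here is a minimal sketch of the merging step, assuming a 4×4 camera-to-world transform has already been estimated for each camera; the calibration procedure that produces those transforms is the subject of a later post, and this is not the actual capture code.

```python
# Minimal sketch of merging per-camera point clouds into one world frame,
# given already-estimated extrinsics.  Not the actual capture pipeline.
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """clouds: list of (N_i, 3) point arrays, one per camera, in camera coordinates;
       extrinsics: list of 4x4 camera-to-world matrices from extrinsic calibration."""
    merged = []
    for points, cam_to_world in zip(clouds, extrinsics):
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
        merged.append((homogeneous @ cam_to_world.T)[:, :3])
    return np.vstack(merged)
```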

Continue reading

Quikwriting with a Thumbstick

In my previous post about gaze-directed Quikwriting I mentioned that the method should be well-suited to be mapped to a thumbstick. And indeed it is:

Using Vrui, implementing this was a piece of cake. Instead of modifying the existing Quikwrite tool, I created a new transformation tool that converts a two-axis analog joystick, e.g., a thumbstick on a game controller, into a virtual 6-DOF input device moving inside a flat square. Then, when the unmodified Quikwrite tool is bound to that virtual input device, exactly what you would expect happens: the directions of the thumbstick translate 1:1 to the character selection regions of the Quikwrite square. I’m expecting that this new transformation tool will come in handy for other applications in the future, so that’s another benefit.
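To make the mapping concrete, here is a small sketch of the idea, not the actual Vrui transformation tool: the two thumbstick axes are treated as a point in a flat square, and that point is classified as either the central rest zone or one of the eight outer selection regions that Quikwriting uses.

```python
# Conceptual sketch (not the actual Vrui transformation tool): treat the two
# thumbstick axes as a point in a flat square and classify which Quikwriting
# region it falls in, i.e. the central rest zone or one of eight outer zones.
import math

def quikwrite_region(x, y, rest_radius=0.35):
    """x, y are thumbstick axes in [-1, 1]; returns 'rest' or a zone index 0-7."""
    if math.hypot(x, y) < rest_radius:
        return "rest"
    angle = math.atan2(y, x)                         # -pi .. pi
    return int(round(angle / (math.pi / 4.0))) % 8   # eight 45-degree sectors

print(quikwrite_region(0.0, 0.0))   # 'rest'
print(quikwrite_region(1.0, 0.0))   # 0 (right)
print(quikwrite_region(0.0, 1.0))   # 2 (up)
```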

Continue reading