Mixed-reality recording, i.e., capturing a user inside of and interacting with a virtual 3D environment by embedding their real body into that virtual environment, has finally become the accepted method of demonstrating virtual reality applications through standard 2D video footage (see Figure 1 for a mixed-reality recording made in VR’s stone age). The fundamental method behind this recording technique is to create a virtual camera whose intrinsic parameters (focal length, lens distortion, …) and extrinsic parameters (position and orientation in space) exactly match those of the real camera used to film the user; to capture a virtual video stream from that virtual camera; and then to composite the virtual and real streams into a final video.
Figure 1: Ancient mixed-reality recording from inside a CAVE, captured directly on a standard video camera without any post-processing.
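To make the virtual-camera half of this concrete, here is a minimal sketch of how a real camera’s intrinsic parameters — focal lengths and principal point, measured in pixels — can be turned into a matching off-axis OpenGL projection matrix. This is illustrative only (the function name and parameter conventions are mine, not Vrui’s), and lens distortion is ignored:

```cpp
#include <array>

// Build a column-major OpenGL frustum matrix from pinhole-camera intrinsics.
// fx, fy: focal lengths in pixels; cx, cy: principal point in pixels,
// measured from the top-left image corner; w, h: image size in pixels;
// n, f: near/far clipping plane distances. Lens distortion is ignored.
std::array<double, 16> projectionFromIntrinsics(double fx, double fy,
                                                double cx, double cy,
                                                double w, double h,
                                                double n, double f)
{
    // Map the image rectangle onto the near plane, in eye coordinates:
    double left = -cx * n / fx;
    double right = (w - cx) * n / fx;
    double top = cy * n / fy;           // cy pixels above the optical axis
    double bottom = -(h - cy) * n / fy; // the rest of the image below it

    // Standard glFrustum-style matrix, column-major as OpenGL expects:
    std::array<double, 16> m = {};
    m[0] = 2.0 * n / (right - left);
    m[5] = 2.0 * n / (top - bottom);
    m[8] = (right + left) / (right - left);
    m[9] = (top + bottom) / (top - bottom);
    m[10] = -(f + n) / (f - n);
    m[11] = -1.0;
    m[14] = -2.0 * f * n / (f - n);
    return m;
}
```

The extrinsic half is then supplied by the tracking system: a rigid-body tracker attached to the physical camera yields the position and orientation that define the virtual camera’s view transformation.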
I know, the Oculus Rift DK2 is obsolete equipment, but nonetheless — there are a lot of them still out there, it’s still a decent VR headset for seated applications, I guess they’re getting cheaper on eBay now, and I put in all the work back then to support it in Vrui, so I might as well describe how to use it. If nothing else, the DK2 is a good way to watch DVD movies, or panoramic mono- or stereoscopic videos, in VR.
Figure 1: Using an Oculus Rift DK2 headset with a pair of Vive controllers — because why not?
Now that I’ve gotten my Oculus Rift DK2 (mostly) working with Vrui under Linux, I’ve encountered the dreaded artifact often referred to as “black smear.” While pixels on OLED screens have very fast switching times — orders of magnitude faster than LCD pixels — they still can’t switch from on to off and back instantaneously. This leads to a problem that’s hardly visible when viewing a normal screen, but very visible in a head-mounted display, due to a phenomenon called the “vestibulo-ocular reflex.”
Basically, our eyes have built-in image stabilizers: if we move our head, the motion is detected by the vestibular apparatus in the inner ear (our “sense of equilibrium”), and our eyes automatically move the opposite way to keep our gaze locked on a fixed point in space (interestingly, this even happens with the eyes closed, or in total darkness).
Update: There have been complaints that the post below is an overly complicated and confusing explanation of the IPD measurement process. Maybe that’s so. Therefore, here’s the TL;DR version of how the process works. If you want to know why it works, read on below.
1. Stand in front of a mirror and hold a ruler up to your nose, such that the measuring edge runs directly underneath both your pupils.
2. Close your right eye and look directly at your left eye. Move the ruler such that the “0” mark appears directly underneath the center of your left pupil. Try to keep the ruler still for the next step.
3. Close your left eye and look directly at your right eye. The mark directly underneath the center of your right pupil indicates your inter-pupillary distance.
Here follows the long version:
I’ve recently talked about the importance of calibrating 3D displays, especially head-mounted displays, which have very tight tolerances. An important part of calibration is entering each user’s personal inter-pupillary distance. Even when using the eyeball center as projection focus point (as I describe in the second post linked above), the distance between the eyeballs’ centers is the same as the (infinity-converged) inter-pupillary distance.
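As a hedged illustration of how that number is then used (this is not Vrui’s actual API, just the underlying arithmetic), a stereo renderer places the two projection centers symmetrically around a tracked mid-eye point, half the IPD to each side:

```cpp
struct Vec3 { double x, y, z; };

// Illustrative only: compute the two eyeball-center positions from a
// tracked mid-eye point, a unit vector pointing from the left eye to
// the right eye, and the user's inter-pupillary distance (all in the
// same units, e.g. centimeters).
void eyePositions(const Vec3& midEye, const Vec3& rightDir, double ipd,
                  Vec3& leftEye, Vec3& rightEye)
{
    double h = ipd * 0.5; // each eye sits half the IPD from the midpoint
    leftEye  = { midEye.x - h * rightDir.x,
                 midEye.y - h * rightDir.y,
                 midEye.z - h * rightDir.z };
    rightEye = { midEye.x + h * rightDir.x,
                 midEye.y + h * rightDir.y,
                 midEye.z + h * rightDir.z };
}
```

An IPD that is off by even a few millimeters shifts both projection centers, which distorts perceived scale and depth — hence the emphasis on measuring it accurately.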
So how do you actually go about determining your IPD? You could go to an optometrist, of course, but it turns out it’s very easy to do it accurately at home. As it so happened, I did go to an optometrist recently (for my annual check-up), and I asked him to measure my IPD as well while he was at it. I was expecting him to pull out some high-end gizmo, but instead he picked up a ruler. So that got me thinking.
Figure 1: How to precisely measure infinity-converged inter-pupillary distance using only a mirror and a ruler. Focus on the left eye in step one and mark point A; focus on the right eye in step two and mark point B; the distance between points A and B is precisely the infinity-converged inter-pupillary distance (and also the eyeball center distance).
In my detailed how-to guide on installing and configuring Vrui for Oculus Rift and Razer Hydra, I did not talk about installing any actual applications (because I hadn’t released Vrui-3.0-compatible packages yet). Those are out now, so here we go.
If you happen to own a Kinect for Xbox (Kinect for Windows won’t work), you might want to install the Kinect 3D Video package early on. It can capture 3D (holographic, not stereoscopic) video from one or more Kinects, and either play it back as freely-manipulable virtual holograms or, after calibration, produce in-system overlays of the real world (or both at once). If you already have Vrui up and running, installation is trivial.
If you are already running Linux, good for you. Skip the next paragraph.
If you don’t have Linux yet, go and grab it. I personally prefer Fedora, but it’s generally agreed that Ubuntu is the easiest to install for new Linux users, so let’s go with that. The Ubuntu installer makes it quite easy to install alongside an existing Windows OS on your system. Don’t bother installing Linux inside a virtual machine, though: that way Vrui won’t get access to your high-powered graphics cards, and performance will be abysmal. It won’t be able to talk to your Rift, either.
One of the first things to do after a fresh Linux install is to install the vendor-supplied drivers for your graphics card (if you don’t have a discrete Nvidia or ATI/AMD graphics card, go buy a GeForce!). Installing binary drivers is much easier these days. Here are instructions for Nvidia and ATI/AMD cards. If you happen to be on Fedora, enable the rpmfusion repositories and get the appropriate driver packages from there.
Intrinsic camera calibration, as I explained in a previous post, calculates the projection parameters of a single Kinect camera. This is sufficient to reconstruct color-mapped 3D geometry in a precise physical coordinate system from a single Kinect device. Specifically, after intrinsic calibration, the Kinect reconstructs geometry in camera-fixed Cartesian space. This means that, looking along the Kinect’s viewing direction, the X axis points to the right, the Y axis points up, and the negative Z axis points along the viewing direction (see Figure 1). The measurement unit for this coordinate system is centimeters.
Figure 1: Kinect’s camera-relative coordinate system after intrinsic calibration. Looking along the viewing direction, the X axis points to the right, the Y axis points up, and the Z axis points against the viewing direction. The unit of measurement is centimeters.
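To illustrate what that coordinate system means in practice, here is a minimal sketch of back-projecting a single depth pixel into camera-fixed space. It assumes plain pinhole intrinsics (fx, fy, cx, cy) and a depth value already converted to distance in centimeters; the Kinect package’s real projection model, including depth-value unprojection and distortion correction, is more involved:

```cpp
struct Point3 { double x, y, z; }; // camera-fixed Cartesian, in centimeters

// Back-project one depth pixel into the coordinate system of Figure 1:
// X to the right, Y up, and the viewing direction along -Z.
// (u, v): pixel position, with v counted upwards from the bottom image row;
// dist: distance along the viewing direction, in centimeters;
// fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
Point3 unproject(double u, double v, double dist,
                 double fx, double fy, double cx, double cy)
{
    Point3 p;
    p.x = (u - cx) * dist / fx; // pixels right of the principal point -> +X
    p.y = (v - cy) * dist / fy; // pixels above the principal point -> +Y
    p.z = -dist;                // the camera looks along -Z
    return p;
}
```

Extrinsic calibration, the subject of the next step, is then “only” a matter of finding the rigid transformation from this camera-fixed space into a shared world coordinate system.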
ShowEarthModel is one of the example programs shipped with the Vrui VR development toolkit. It draws a simple texture-mapped virtual globe, and can be used to visualize global geophysical data sets — specifically those containing subsurface data, as the globe can be drawn transparently. However, ShowEarthModel is not packaged with any data sets, primarily to keep the download size small, but also for licensing reasons. Out of the box, it only contains a fairly low-resolution color-mapped Earth topography texture (which can be changed, but that’s a topic for another post).
Since it’s one of the most common requests, here are the steps to download up-to-date earthquake data from the ANSS online catalog: