So I decided a few days ago to dust off an old toy application (I showed it last in my 2007 Wiimote hacking video), a volumetric virtual “clay” modeler with real-time isosurface extraction for visualization, and run it with a Razer Hydra controller, whose bi-manual 6-DOF interaction makes it a pretty ideal input device for this sort of thing:
Figure 1: Dawn Sumner, member of the NASA Curiosity Mars rover mission’s science team, interacting with a life-size 3D model of the rover in the UC Davis KeckCAVES holographic display environment. Still image taken from “The surface of Mars.”
The original stereo projection system, driven by a 2006 Mac Pro, was getting long in the tooth, and in the process of upgrading to higher-resolution and brighter projectors, we finally convinced the powers-that-be to get a top-of-the-line Linux PC instead of yet another Mac (for significant savings, one might add). While the Ubuntu OS and Vrui application set had already been pre-installed by KeckCAVES staff in the home office, I still had to go up to the lake to configure the operating system and Vrui to render to the new projectors, update all Vrui software, align the projectors, and train the local docents in using Linux and the new Vrui application versions.
Now here’s some good news: I mentioned recently that reports of VR’s death are greatly exaggerated, and I am happy to announce that researchers at the Institute of Cybernetics at Tallinn University of Technology have constructed the country’s first immersive display system, and I’m proud to say it’s powered by the Vrui toolkit. The three-screen, back-projected display was entirely designed and built in-house. Its main designers, PhD student Emiliano Pastorelli and his advisor Heiko Herrmann, kindly sent several diagrams and pictures; see Figures 1, 2, 3, and 4.
Figure 1: Engineering diagram of Tallinn University of Technology’s new VR display, provided by Emiliano Pastorelli.
I just returned from the 2013 International LiDAR Mapping Forum (ILMF ’13), where I gave a talk about LiDAR Viewer (which I haven’t previously written about here, but I really should). ILMF is primarily an event for industry exhibitors and LiDAR users from government agencies or private companies to meet. I only saw one other person from the academic LiDAR community there, and my talk stuck out like a sore thumb, too (see Figure 1).
Figure 1: Snapshot from towards the end of my talk at ILMF ’13, kindly provided by Marshall Millett. My talk was a bit off-topic for the rest of the conference, and it was scheduled for 8:30 in the morning, which hopefully explains the sparse audience.
Figure 1: Augmented reality sandbox constructed by “Code Red,” Ithaca High School’s FIRST Robotics Team 639, and shown here at the school’s open house on 02/02/2013.
Counting Bold Park Community School’s, this is the second unveiled AR sandbox that I’m aware of. That doesn’t sound like much, but the software hasn’t been out for that long, and there are a few others that I know are currently in the works. And who knows how many are being built or already completed that I’m totally unaware of; after all, this is free software. Team 639’s achievement, for one, came completely out of the blue.
I’ve talked about “holographic displays” a lot, most recently in my analysis of the upcoming zSpace display. What I haven’t talked about is how exactly such holographic displays work, what makes them “holographic” as opposed to just stereoscopic, and why that is a big deal.
Teaser: A user interacting with a virtual object inside a holographic display.
Boy, is my face red. I just uploaded two videos about intrinsic Kinect calibration to YouTube, and wrote two blog posts about intrinsic and extrinsic calibration, respectively, and now I find out that the factory calibration data I’ve always suspected was stored in the Kinect’s non-volatile RAM has actually been reverse-engineered. With the official Microsoft SDK out, that should definitely not have been a surprise. Oh well, my excuse is that I’ve been focusing on other things lately.
So, how good is it? It’s a bit too early to tell, because some bits and pieces are still not understood, but here’s what I know already. As I mentioned in the post on intrinsic calibration, there are several required pieces of calibration data (see the sketch after the following list):
2D lens distortion correction for the color camera.
2D lens distortion correction for the virtual depth camera.
Non-linear depth correction (caused by IR camera lens distortion) for the virtual depth camera.
Conversion formula from (depth-corrected) raw disparity values (what’s in the Kinect’s depth frames) to camera-space Z values.
Unprojection matrix for the virtual depth camera, to map depth pixels out into camera-aligned 3D space.
Projection matrix to map lens-corrected color pixels onto the unprojected depth image.
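To make these pieces a bit more concrete, here is a minimal sketch of how they might be bundled per device. All names, the distortion parameterization, and the exact layout are illustrative assumptions, not the Kinect package’s actual data structures:

```cpp
#include <vector>

// Hypothetical lens distortion model; the real correction may use a
// different parameterization.
struct LensDistortion {
    double center[2]; // Distortion center in pixel coordinates
    double kappa[3];  // Radial distortion coefficients
};

// Hypothetical per-device bundle of the calibration items listed above:
struct KinectCalibration {
    LensDistortion colorDistortion;     // 2D lens distortion correction for the color camera
    LensDistortion depthDistortion;     // 2D lens distortion correction for the virtual depth camera
    std::vector<float> depthCorrection; // Per-pixel non-linear depth correction for the depth camera
    double dispA, dispB;                // Coefficients of the raw disparity-to-Z conversion formula
    double depthUnproject[4][4];        // Unprojection matrix for the virtual depth camera
    double colorProject[4][4];          // Projection matrix mapping unprojected 3D points to color pixels
};
```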
Intrinsic camera calibration, as I explained in a previous post, calculates the projection parameters of a single Kinect camera. This is sufficient to reconstruct color-mapped 3D geometry in a precise physical coordinate system from a single Kinect device. Specifically, after intrinsic calibration, the Kinect reconstructs geometry in camera-fixed Cartesian space. This means that, looking along the Kinect’s viewing direction, the X axis points to the right, the Y axis points up, and the negative Z axis points along the viewing direction (see Figure 1). The measurement unit for this coordinate system is centimeters.
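As a concrete illustration of that camera-fixed space, here is a hedged sketch, using a plain pinhole model and ignoring lens distortion and per-pixel depth correction, of how one could map a depth pixel into the coordinate system just described (X right, Y up, negative Z along the viewing direction, centimeters). The parameter names and the exact form of the disparity-to-depth formula are assumptions for illustration, not the Kinect package’s actual code:

```cpp
// Hypothetical intrinsic parameters of the virtual depth camera:
struct DepthIntrinsics {
    double fx, fy;       // Focal lengths in pixels
    double cx, cy;       // Principal point in pixels
    double dispA, dispB; // Disparity-to-depth coefficients, assuming z = 1/(dispA*d + dispB)
};

// Convert a (depth-corrected) raw disparity value to viewing distance,
// assumed here to come out in centimeters to match the coordinate system above:
double disparityToDepth(const DepthIntrinsics& di, double disparity) {
    return 1.0 / (di.dispA * disparity + di.dispB);
}

// Unproject depth pixel (x, y) with the given raw disparity into the
// camera-fixed space: X right, Y up, negative Z along the viewing direction.
// Assumes pixel y increases upward; flip (y - cy) if it increases downward.
void unprojectDepthPixel(const DepthIntrinsics& di, double x, double y,
                         double disparity, double p[3]) {
    double depth = disparityToDepth(di, disparity); // distance along the view direction, in cm
    p[0] = (x - di.cx) * depth / di.fx; // X points to the right
    p[1] = (y - di.cy) * depth / di.fy; // Y points up
    p[2] = -depth;                      // negative Z points along the viewing direction
}
```

A caller would fill DepthIntrinsics from the device’s calibration data and run this for every valid pixel of a depth frame to get a 3D point cloud in centimeters.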
Figure 1: Kinect’s camera-relative coordinate system after intrinsic calibration. Looking along the viewing direction, the X axis points to the right, the Y axis points up, and the Z axis points against the viewing direction. The unit of measurement is centimeters.