Virtual clay modeling with 3D input devices

Funny how the idea of virtual sculpting, or virtual clay modeling, using 3D input devices is suddenly popping up everywhere. The developers behind the Leap Motion cited it as their inspiration for developing the device in the first place, and I recently saw a demo video; Sony has recently been showing it off as a demo for the upcoming PlayStation 4; and I’ve just returned from an event at the Sacramento Hacker Lab, where Intel was trying to get developers excited about their version of the Kinect, or what they call “perceptual computing.” One of the demos they showed was, you guessed it, virtual sculpting (another demo was 3D video of a person embedded into a virtual office; now where have I seen that before?).

So I decided a few days ago to dust off an old toy application (I showed it last in my 2007 Wiimote hacking video), a volumetric virtual “clay” modeler with real-time isosurface extraction for visualization, and run it with a Razer Hydra controller, which supports bi-manual 6-DOF interaction, a pretty ideal setup for this sort of thing:
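The gist of how a volumetric clay modeler of this kind works: the clay is a 3D grid of density values, each brush stroke adds or subtracts density inside a sphere around the tracked tool, and the visible surface is re-extracted from the grid (e.g., via marching cubes) every frame. The sketch below is my own illustration of that idea, not the application’s actual code; all names are made up.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// A minimal volumetric "clay" grid: density values on a regular lattice.
// The clay's surface is the level set density == 0.5, which a real
// application would re-extract each frame (e.g. with marching cubes).
class ClayGrid
{
public:
    explicit ClayGrid(int size): size(size), density(std::size_t(size)*size*size, 0.0f) {}

    // Apply a spherical brush centered at grid coordinates (cx, cy, cz):
    // positive strength deposits clay, negative strength carves it away.
    void applyBrush(float cx, float cy, float cz, float radius, float strength)
    {
        int x0 = std::max(0, int(cx-radius)), x1 = std::min(size-1, int(cx+radius)+1);
        int y0 = std::max(0, int(cy-radius)), y1 = std::min(size-1, int(cy+radius)+1);
        int z0 = std::max(0, int(cz-radius)), z1 = std::min(size-1, int(cz+radius)+1);
        for(int z = z0; z <= z1; ++z)
            for(int y = y0; y <= y1; ++y)
                for(int x = x0; x <= x1; ++x)
                {
                    float dx = x-cx, dy = y-cy, dz = z-cz;
                    float d = std::sqrt(dx*dx + dy*dy + dz*dz);
                    if(d >= radius) continue;
                    float w = 1.0f - d/radius; // smooth falloff to the brush edge
                    float& v = density[(std::size_t(z)*size + y)*size + x];
                    v = std::min(1.0f, std::max(0.0f, v + strength*w));
                }
    }

private:
    int size;                   // grid resolution per axis
    std::vector<float> density; // 0 = empty space, 1 = solid clay
};
```

With a 6-DOF device like the Hydra, the brush center simply follows the tracked position of each hand’s controller, which is what makes bi-manual 6-DOF input such a good fit for this kind of application.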

Continue reading

KeckCAVES on Mars, pt. 4

I mentioned before that we had a professional film crew in the CAVE a while back, to produce promotional video for the University of California’s “Onward California” PR program. Finally, the finished videos have been posted on the Office of the President’s official YouTube channel. Unlike my own recent CAVE videos, these have excellent audio.

Figure 1: Dawn Sumner, member of the NASA Curiosity Mars rover mission’s science team, interacting with a life-size 3D model of the rover in the UC Davis KeckCAVES holographic display environment. Still image taken from “The surface of Mars.”

These short videos focus on Dawn Sumner, a professor in the UC Davis Department of Geology, and a KeckCAVES core member. This time, Dawn is wearing her hat as a planetary explorer and talking about NASA’s Curiosity Mars rover mission, and her role in it.

Continue reading

On the road for VR part II: Tahoe Environmental Research Center, Incline Village, Lake Tahoe

We have been collaborating with the UC Davis Tahoe Environmental Research Center (TERC) for a long time. Back in — I think — 2006, we helped them purchase a large-screen stereoscopic projection system for the Otellini 3-D Visualization Theater and installed a set of Vrui-based KeckCAVES visualization applications for guided virtual tours of Lake Tahoe and the entire Earth. We have since worked on joint projects, primarily related to informal science education. Currently, TERC is one of the collaborators in the 3D lake science informal science education grant that spawned the Augmented Reality Sandbox.

The original stereo projection system, driven by a 2006 Mac Pro, was getting long in the tooth, and in the process of upgrading to higher-resolution and brighter projectors, we finally convinced the powers-that-be to get a top-of-the-line Linux PC instead of yet another Mac (for significant savings, one might add). While the Ubuntu OS and Vrui application set had already been pre-installed by KeckCAVES staff in the home office, I still had to go up to the lake to configure the operating system and Vrui to render to the new projectors, update all Vrui software, align the projectors, and train the local docents in using Linux and the new Vrui application versions.

Continue reading

First VR environment in Estonia powered by Vrui

Now here’s some good news: I mentioned recently that reports of VR’s death are greatly exaggerated, and now I am happy to announce that researchers with the Institute of Cybernetics at Tallinn University of Technology have constructed the country’s first immersive display system, and I’m proud to say it’s powered by the Vrui toolkit. The three-screen, back-projected display was entirely designed and built in-house. Its main designers, PhD student Emiliano Pastorelli and his advisor Heiko Herrmann, kindly sent several diagrams and pictures; see Figures 1, 2, 3, and 4.

Figure 1: Engineering diagram of Tallinn University of Technology’s new VR display, provided by Emiliano Pastorelli.

Continue reading

On the road for VR (sort of…): ILMF ’13, Denver, CO

I just returned from the 2013 International LiDAR Mapping Forum (ILMF ’13), where I gave a talk about LiDAR Viewer (which I haven’t previously written about here, but I really should). ILMF is primarily an event for industry exhibitors and LiDAR users from government agencies or private companies to meet. I only saw one other person from the academic LiDAR community there, and my talk stuck out like a sore thumb, too (see Figure 1).

Figure 1: Snapshot from towards the end of my talk at ILMF ’13, kindly provided by Marshall Millett. My talk was a bit off-topic for the rest of the conference, and it took place at 8:30 in the morning, which hopefully explains the sparse audience.

Continue reading

… and they did!

build their own augmented reality sandboxes, that is.

We still haven’t installed the three follow-up AR sandboxes at the participating institutions of our informal science education NSF project (Tahoe Environmental Research Center, Lawrence Hall of Science, and ECHO Lake Aquarium and Science Center), but others have picked up the slack and gone ahead and built their own, based on our software and designs.

Figure 1: Augmented reality sandbox constructed by “Code Red,” Ithaca High School’s FIRST Robotics Team 639, and shown here at the school’s open house on 02/02/2013.

The newest addition to my External Installations page is “Code Red,” Ithaca High School’s FIRST Robotics Team 639, who just unveiled theirs at their school’s open house (see Figure 1), and were kind enough to send a note and some pictures, with many more “behind the scenes” pictures on their sandbox project page. There’s an article in the local newspaper with more information as well.

Together with Bold Park Community School’s, this is the second unveiled AR sandbox that I’m aware of. That doesn’t sound like much, but the software hasn’t been out for that long, and there are a few others that I know are currently in the works. And who knows how many are being built or are already completed that I’m totally unaware of; after all, this is free software. Team 639’s achievement, for one, came completely out of the blue.

Update: And I missed this Czech project (no, not that other Czech project that gave us the idea in the first place!). They built several versions of the sandbox and showed them off at hacker meets. And they say they’re currently trying to port the software to lower-power computers. Good on them!

Update 2: One more I missed, this time done by/for the Undergraduate Library at the University of Illinois, Urbana-Champaign. I don’t have any more information, but here is the YouTube video.

I should point out that these last two were news to me; I only found out about them after googling for “AR sandbox.”

So please, if you did build one and don’t mind, send me a note. 🙂 There’s a ready-made box awaiting your input right there ↓↓↓↓

How head tracking makes holographic displays

I’ve talked about “holographic displays” a lot, most recently in my analysis of the upcoming zSpace display. What I haven’t talked about is how exactly such holographic displays work, what makes them “holographic” as opposed to just stereoscopic, and why that is a big deal.

Teaser: A user interacting with a virtual object inside a holographic display.
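
In a nutshell, before the cut: a stereoscopic display becomes head-tracked, and thereby “holographic” in the sense I use the word, when the perspective projection for each eye is recomputed every frame from the tracked eye position relative to the fixed physical screen. A minimal sketch of that computation, following Kooima’s generalized perspective projection; the vector type and function names here are mine, not any particular toolkit’s:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b)
{ return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static Vec3 normalize(Vec3 v)
{ double l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Off-axis frustum bounds at the near plane, for a screen given by three
// of its corners and the tracked eye position. pa/pb/pc: lower-left,
// lower-right, upper-left screen corners; pe: eye position; n: near plane.
void screenFrustum(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 pe, double n,
                   double& l, double& r, double& b, double& t)
{
    Vec3 vr = normalize(sub(pb, pa));   // screen right axis
    Vec3 vu = normalize(sub(pc, pa));   // screen up axis
    Vec3 vn = normalize(cross(vr, vu)); // screen normal, towards the viewer

    Vec3 va = sub(pa, pe), vb = sub(pb, pe), vc = sub(pc, pe);
    double d = -dot(va, vn); // eye-to-screen distance

    l = dot(vr, va)*n/d;
    r = dot(vr, vb)*n/d;
    b = dot(vu, va)*n/d;
    t = dot(vu, vc)*n/d;
    // Feed l, r, b, t, n into a glFrustum-style projection; because pe
    // moves with the viewer's head, this is recomputed every frame.
}
```

Because the frustum is anchored to the fixed physical screen rather than to the viewer, virtual objects stay put in physical space as the viewer moves around them, and that is what makes the display more than merely stereoscopic.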

Continue reading

Kinect factory calibration

Boy, is my face red. I just uploaded two videos about intrinsic Kinect calibration to YouTube, and wrote two blog posts about intrinsic and extrinsic calibration, respectively, and now I find out that the factory calibration data I’ve always suspected was stored in the Kinect’s non-volatile RAM has actually been reverse-engineered. With the official Microsoft SDK out, that should definitely not have been a surprise. Oh well, my excuse is that I’ve been focusing on other things lately.

So, how good is it? A bit too early to tell, because some bits and pieces are still not understood, but here’s what I know already. As I mentioned in the post on intrinsic calibration, there are several required pieces of calibration data:

  1. 2D lens distortion correction for the color camera.
  2. 2D lens distortion correction for the virtual depth camera.
  3. Non-linear depth correction (caused by IR camera lens distortion) for the virtual depth camera.
  4. Conversion formula from (depth-corrected) raw disparity values (what’s in the Kinect’s depth frames) to camera-space Z values (see the sketch after this list).
  5. Unprojection matrix for the virtual depth camera, to map depth pixels out into camera-aligned 3D space.
  6. Projection matrix to map lens-corrected color pixels onto the unprojected depth image.
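
To make items 4 and 5 concrete, here is a sketch using the disparity-to-depth approximation and depth camera intrinsics that the OpenKinect community reverse-engineered early on; these constants are community estimates, stand-ins for the per-device factory values this post is about:

```cpp
#include <cstdint>

struct Point3 { float x, y, z; };

// Item 4: approximate conversion from an 11-bit raw disparity value to
// metric depth, using constants estimated by the OpenKinect community:
float rawDisparityToZ(std::uint16_t rawDisparity)
{
    return 1.0f/(float(rawDisparity)*-0.0030711016f + 3.3309495161f); // meters
}

// Item 5: unproject a depth pixel (u, v) into camera-aligned 3D space via
// the standard pinhole model; fx, fy, cx, cy are community estimates of
// the virtual depth camera's intrinsics, not calibrated values:
Point3 unprojectDepthPixel(int u, int v, std::uint16_t rawDisparity)
{
    const float fx = 594.21f, fy = 591.04f; // focal lengths in pixels
    const float cx = 339.5f, cy = 242.7f;   // principal point
    float z = rawDisparityToZ(rawDisparity);
    return Point3{(float(u)-cx)*z/fx, (float(v)-cy)*z/fy, z};
}
```

In a full pipeline, the lens distortion and depth corrections (items 1–3) would be applied before this conversion, and the factory calibration data should replace all of the numbers above.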

Continue reading

Multi-Kinect camera calibration

Intrinsic camera calibration, as I explained in a previous post, calculates the projection parameters of a single Kinect camera. This is sufficient to reconstruct color-mapped 3D geometry in a precise physical coordinate system from a single Kinect device. Specifically, after intrinsic calibration, the Kinect reconstructs geometry in camera-fixed Cartesian space. This means that, looking along the Kinect’s viewing direction, the X axis points to the right, the Y axis points up, and the negative Z axis points along the viewing direction (see Figure 1). The measurement unit for this coordinate system is centimeters.
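
Extrinsic (multi-camera) calibration then determines, for each Kinect, a single rigid-body transform that carries its camera-fixed space into a shared physical space, so geometry from several devices can be merged. A minimal sketch of applying such a transform; the types and names are my own, not the Kinect package’s:

```cpp
struct Point3 { double x, y, z; };

// Rigid-body transform (rotation matrix + translation) taking points from
// one Kinect's camera-fixed space into a shared world space; the values
// would come from extrinsic calibration:
struct RigidTransform
{
    double r[3][3]; // rotation
    double t[3];    // translation, in the same unit as the points (cm)

    Point3 apply(const Point3& p) const
    {
        return Point3{
            r[0][0]*p.x + r[0][1]*p.y + r[0][2]*p.z + t[0],
            r[1][0]*p.x + r[1][1]*p.y + r[1][2]*p.z + t[1],
            r[2][0]*p.x + r[2][1]*p.y + r[2][2]*p.z + t[2]};
    }
};
```

Once every camera’s points live in the same space, combining the 3D geometry captured by multiple Kinects reduces to transforming each device’s points by its own calibrated transform and taking the union.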

Figure 1: Kinect’s camera-relative coordinate system after intrinsic calibration. Looking along the viewing direction, the X axis points to the right, the Y axis points up, and the Z axis points against the viewing direction. The unit of measurement is centimeters.

Continue reading

Kinect camera calibration

I finally managed to upload a pair of tutorial videos showing how to use the new grid-based intrinsic calibration procedure for the Kinect camera. The procedure made it into the Kinect package at least 1.5 years ago, but somehow I never found the time to explain it properly. Oh well. Here are the videos: Intrinsic Kinect Camera Calibration with Semi-transparent Grid and Intrinsic Kinect Camera Calibration Check.
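
For readers who want the general idea without watching the videos: intrinsic calibration fits a camera model (focal lengths, principal point, lens distortion) to observations of a grid of known geometry. The sketch below does this generically with OpenCV’s calibrateCamera; it is an analogue of the idea, not the Kinect package’s actual grid procedure, and the grid dimensions are made up:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Fit a pinhole camera model to detected grid corners; one vector of
// image points per captured view of the grid (e.g. collected with
// cv::findChessboardCorners). Grid size and spacing are assumptions.
cv::Mat calibrateFromGridViews(
    const std::vector<std::vector<cv::Point2f>>& imagePoints,
    cv::Size gridSize, float spacingCm, cv::Size imageSize)
{
    // The grid's known planar geometry, replicated once per view:
    std::vector<cv::Point3f> grid;
    for(int y = 0; y < gridSize.height; ++y)
        for(int x = 0; x < gridSize.width; ++x)
            grid.push_back(cv::Point3f(x*spacingCm, y*spacingCm, 0.0f));
    std::vector<std::vector<cv::Point3f>> objectPoints(imagePoints.size(), grid);

    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);
    return cameraMatrix; // 3x3 matrix holding fx, fy, cx, cy
}
```

The semi-transparent grid in the videos serves the same role as the chessboard here: it provides tie points of known geometry, with the twist that it is visible to both the depth and color cameras at once.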

Figure 1: The calibration target used for intrinsic camera calibration, as seen by the Kinect’s depth (left) and color cameras (right).

Continue reading