About okreylos

I am a research computer scientist at the University of California, Davis. My research areas are scientific visualization, particularly in immersive ("virtual reality") environments, human/computer interaction in immersive environments, and 3D computer graphics. My primary work is software development, from architecture and design through implementation and coding. I am the primary software developer for the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). Some of my released packages are Vrui (a VR development toolkit), CollaborationInfrastructure (a tele-collaboration plug-in for Vrui), Kinect (a driver suite to capture 3D video from Microsoft Kinect cameras), LiDAR Viewer (a visualization package for very large 3D point clouds), 3D Visualizer (a system for interactive visual analysis of 3D volumetric data), Nanotech Construction Kit (an interactive molecular design program), and SARndbox (an augmented reality sandbox).

I also dabble in VR hardware, in the sense that I take existing custom or commodity hardware components (3D TVs, head-mounted displays, projectors, tracking systems, Wiimotes, Kinect cameras, ...) and build fully integrated immersive environments out of them. This includes a fair share of driver development to support hardware that either doesn't have drivers, or whose vendor-supplied drivers are not up to par.

Kinect factory calibration

Boy, is my face red. I just uploaded two videos about intrinsic Kinect calibration to YouTube, and wrote two blog posts about intrinsic and extrinsic calibration, respectively, and now I find out that the factory calibration data I’ve always suspected was stored in the Kinect’s non-volatile RAM has actually been reverse-engineered. With the official Microsoft SDK out, that should definitely not have been a surprise. Oh well; my excuse is that I’ve been focusing on other things lately.

So, how good is it? A bit too early to tell, because some bits and pieces are still not understood, but here’s what I know already. As I mentioned in the post on intrinsic calibration, there are several required pieces of calibration data:

  1. 2D lens distortion correction for the color camera.
  2. 2D lens distortion correction for the virtual depth camera.
  3. Non-linear depth correction (caused by IR camera lens distortion) for the virtual depth camera.
  4. Conversion formula from (depth-corrected) raw disparity values (what’s in the Kinect’s depth frames) to camera-space Z values.
  5. Unprojection matrix for the virtual depth camera, to map depth pixels out into camera-aligned 3D space.
  6. Projection matrix to map lens-corrected color pixels onto the unprojected depth image.
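Item 4 in the list above, the disparity-to-depth conversion, can be sketched as follows. This is a minimal sketch, assuming the linear-in-inverse-depth model commonly reported by the OpenKinect community; the two coefficients are illustrative values circulated for that model, not the factory calibration data discussed in this post, and real values would come from per-device calibration.

```python
def disparity_to_z(raw_disparity):
    """Convert a raw Kinect disparity value to camera-space Z in meters.

    Assumes depth is linear in inverse Z:  1/Z = A * disparity + B.
    A and B are illustrative community-reported values, not factory data.
    """
    A = -0.0030711016   # slope, in 1/m per disparity unit (illustrative)
    B = 3.3309495161    # offset, in 1/m (illustrative)
    inv_z = raw_disparity * A + B
    if inv_z <= 0.0:
        return None     # disparity out of the model's valid range
    return 1.0 / inv_z
```

With these sample coefficients, a raw disparity of 0 maps to roughly 0.3 m, and Z grows non-linearly as the disparity value increases, which is why the per-pixel depth correction in item 3 matters.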

Continue reading

GPU performance: Nvidia Quadro vs Nvidia GeForce

One of the mysteries of the modern age is the existence of two distinct lines of graphics cards from the two big manufacturers, Nvidia and ATI/AMD. There are gamer-level cards, and professional-level cards. What are their differences? Obviously, gamer-level cards are cheap, because the companies face stiff competition from each other, and want to sell as many of them as possible to make a profit. So, why are professional-level cards so much more expensive? For comparison, an “entry-level” $700 Quadro 4000 is significantly slower than a $530 high-end GeForce GTX 680, at least according to my measurements using several Vrui applications, and the closest performance equivalent to a GeForce GTX 680 I could find was a Quadro 6000, for a whopping $3660. Granted, the Quadro 6000 has 6GB of video RAM to the GeForce’s 2GB, but that doesn’t explain the difference.

Continue reading

Multi-Kinect camera calibration

Intrinsic camera calibration, as I explained in a previous post, calculates the projection parameters of a single Kinect camera. This is sufficient to reconstruct color-mapped 3D geometry in a precise physical coordinate system from a single Kinect device. Specifically, after intrinsic calibration, the Kinect reconstructs geometry in camera-fixed Cartesian space. This means that, looking along the Kinect’s viewing direction, the X axis points to the right, the Y axis points up, and the negative Z axis points along the viewing direction (see Figure 1). The measurement unit for this coordinate system is centimeters.

Figure 1: Kinect’s camera-relative coordinate system after intrinsic calibration. Looking along the viewing direction, the X axis points to the right, the Y axis points up, and the Z axis points against the viewing direction. The unit of measurement is centimeters.
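The camera-fixed coordinate system described above can be made concrete with a standard pinhole unprojection, i.e., the operation performed by the unprojection matrix mentioned in the calibration post. This is a sketch only: the focal lengths and principal point below are hypothetical stand-ins, and real values come from intrinsic calibration.

```python
def unproject(px, py, z, fx=585.0, fy=585.0, cx=320.0, cy=240.0):
    """Map a depth pixel (px, py) at camera-space depth z (in cm) to a
    3D point in the Kinect's camera-fixed coordinate system:
    X to the right, Y up, viewing direction along -Z.

    fx, fy, cx, cy are hypothetical pinhole intrinsics in pixels.
    """
    x = (px - cx) / fx * z
    y = (cy - py) / fy * z   # flipped: image rows grow downward, Y points up
    return (x, y, -z)        # Z axis points against the viewing direction
```

For example, the pixel at the principal point unprojects straight down the viewing axis: `unproject(320, 240, 100)` yields a point 100 cm in front of the camera, at `(0.0, 0.0, -100)` in this convention.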

Continue reading

Kinect camera calibration

I finally managed to upload a pair of tutorial videos showing how to use the new grid-based intrinsic calibration procedure for the Kinect camera. The procedure made it into the Kinect package at least 1.5 years ago, but somehow I never found the time to explain it properly. Oh well. Here are the videos: Intrinsic Kinect Camera Calibration with Semi-transparent Grid and Intrinsic Kinect Camera Calibration Check.

Figure 1: The calibration target used for intrinsic camera calibration, as seen by the Kinect’s depth (left) and color cameras (right).

Continue reading

Slow approval of / replies to comments

Aside

I’d like to apologize in advance to everyone who posts a comment here. While I very much appreciate them, and will approve them and reply to them as quickly as I can, my blog has recently been discovered by link farmers, and I am getting a HUGE amount of comment spam. So please be patient as I’m trying to remedy the situation. Thanks!

Low-cost 3D displays using Razer Hydra devices

I’ve previously written about our low-cost VR environments based on 3D TVs and optical tracking. While “low-cost” compared to something like a CAVE, they are still not exactly cheap (around $7000 all told), and not exactly easy to install.

What I haven’t mentioned before is that we have an even lower-cost, and, more importantly, easier to install, alternative using just a 3D TV and a Razer Hydra gaming input device. These environments are not holographic because they don’t have head tracking, but they are still very usable for a large variety of 3D applications. We have several of these systems in production use, and demonstrated them to the public twice, in our booth at the 2011 and 2012 AGU fall meetings. What we found there is that the environments are very easy to use; random visitors walking into our booth and picking up the controllers were able to control fairly complex software in a matter of minutes.

A user controlling a low-cost 3D display (running the Nanotech Construction Kit) with a Razer Hydra 6-DOF tracked input device.

Continue reading

KeckCAVES on Mars, pt. 3

Yesterday, Wednesday, 01/09/2013, Michael Meyer, the lead scientist on NASA’s Mars exploration mission, which includes the ongoing Curiosity rover mission, visited UC Davis as a guest of Dawn Sumner’s, the KeckCAVES scientist working on that same mission. Dr. Meyer held a seminar in the Geology department, and also gave an interview to one of our local newspapers, the Sacramento Bee.

As part of this visit, Dawn showed him the CAVE, and the Mars-related visualization work we have been doing, including Crusta Mars and our preliminary work with a highly detailed 3D model of the Curiosity rover.

I’m still on vacation, so I missed the visit. Bummer. 🙁

Downloading earthquake datasets for ShowEarthModel

ShowEarthModel is one of the example programs shipped with the Vrui VR development toolkit. It draws a simple texture-mapped virtual globe, and can be used to visualize global geophysical data sets — specifically those containing subsurface data, as the globe can be drawn transparently. However, ShowEarthModel is not packaged with any data sets, primarily to keep the download size small, but also for licensing reasons. Out of the box, it only contains a fairly low-resolution color-mapped Earth topography texture (which can be changed, but that’s a topic for another post).

Since it’s one of the most common requests, here are the steps to download up-to-date earthquake data from the ANSS online catalog:

  1. Continue reading

Seeing “The Hobbit” in 3D

I’m on vacation in Mexico right now, and yesterday evening my brother-in-law took my wife and me to see “The Hobbit,” in 3D, in quite the fancy movie theater, with reclining seats and footrests and to-the-seat service and such.

I don’t want to talk about the movie per se, short of mentioning that I liked it, a lot, but about the 3D. Or the “stereo,” I should say, as I mentioned previously. My overall impression was that it was done very well. Obviously, the movie was shot in stereo (otherwise I’d have refused to see it that way), and obviously a lot of planning went into that aspect of it. There was also no apparent eye fatigue, or any other typical side effect of bad stereo; considering how damn long the movie was, and that I was consciously looking for conversion problems or artifacts, that means someone was doing something right. As a technical note to cinemas: there was a dirty spot on the screen, a bit off to the side (it looked as if someone had thrown a soda at the screen a while ago), which either degraded the screen polarization or was otherwise slightly visible in the image, and was a bit distracting. So, keep your stereo screens immaculately clean! Another slightly annoying thing was the subtitles (the entire movie was shown in English with Spanish subtitles, plus the added subtitles when characters spoke Elvish or the Dark Tongue): even though I didn’t read them, I still automatically looked at them whenever they popped up, and that was distracting because they stuck out from the screen quite a bit.

Continue reading

Visualizing the Sutter’s Mill meteorite

If you live in California, you probably recall the minivan-sized meteoroid that went kablooey over Northern California on April 22, 2012. In the months following the event, many meteorite pieces were collected and analyzed using a variety of physical and chemical means. Prof. Qing-zhu Yin of the UC Davis Department of Geology has been involved in the meteorite hunt from the start, and analyzed many pieces in his lab. He also collaborated with the UC Davis Center for Molecular and Genomic Imaging, where meteorite fragments were scanned using high-resolution X-ray computed tomography (CT) scanners, and the UC Davis McClellan Nuclear Research Center, where fragments were scanned using neutron beam CT scanners.

Qing-zhu, and a small army of other researchers, just published a Science paper about their work on the meteorite. Qing-zhu then asked me to create a few short movies showing 3D visualizations of several of those scans, from both flavors of CT, to go along with the release of the Science paper on 12/20/2012. I used our 3D Visualizer software, which is originally aimed at immersive environments such as CAVEs, but also works well on desktop workstations, to load the 3D data sets and visualize them using direct volume rendering.

Continue reading