Immersive visualization for archaeological site preservation

Have I mentioned lately that VR is not dead yet, and instead thinks it’ll go for a walk? Here’s more proof. One of KeckCAVES’ external users, Marshall Millett, an archaeologist and GIS expert, is using high-resolution 3D scanning, based on LiDAR and white-light scanners, to capture and digitally preserve cultural heritage sites, such as the Maidu Indian Museum’s historic site and trail (near Roseville, CA).

Figure 1: Danny Rey, Tribal Historic Preservation Officer, and Marcos Guerrero, Cultural Resources Manager, representatives of the United Auburn Indian Community, viewing a high-resolution 3D scan of the Maidu Historic Trail and Site in the KeckCAVES immersive visualization facility. In the background: Joe Dumit of UC Davis’ Science and Technology Studies, and myself. Photo provided by Marshall Millett.

Marshall has been using KeckCAVES software, particularly LiDAR Viewer (about which I should really write a post), and also the KeckCAVES facility itself and related technology, to visualize his high-resolution 3D models at 1:1 scale and to experience them in ways that are not normally possible (most of these sites are fragile and/or sacred, and not open to the public). Part of this work was a series of visits by community representatives to the KeckCAVES facility to view their digitally reconstructed historic site (see Figure 1).

Marshall presented a poster about his work at last year’s 3D Digital Documentation Summit, held July 10-12, 2012 at the Presidio in San Francisco, CA, and was just interviewed for a podcast by the National Center for Preservation Technology and Training (where, as of 02/21/2013, KeckCAVES features prominently on the front page).

First VR environment in Estonia powered by Vrui

Now here’s some good news: I recently mentioned that reports of VR’s death are greatly exaggerated, and I am happy to announce that researchers at the Institute of Cybernetics at Tallinn University of Technology have constructed the country’s first immersive display system, and I’m proud to say it’s powered by the Vrui toolkit. The three-screen, back-projected display was entirely designed and built in-house. Its main designers, PhD student Emiliano Pastorelli and his advisor Heiko Herrmann, kindly sent several diagrams and pictures; see Figures 1, 2, 3, and 4.

Figure 1: Engineering diagram of Tallinn University of Technology’s new VR display, provided by Emiliano Pastorelli.


AR Sandbox news

The first “professionally built” AR sandbox, whose physical setup was designed and built by the fine folks from the San Francisco Exploratorium, arrived at its new home at ECHO Lake Aquarium and Science Center.

Figure 1: Picture of ECHO Lake Aquarium and Science Center’s Augmented Reality Sandbox during installation on the exhibit floor. Note the portrait orientation of the sand table with respect to the back panel, the projector tilt to make up for it, and the high placement of the Kinect camera (visible at the very top of the picture). Photo provided by Travis Cook, ECHO.


On the road for VR (sort of…): ILMF ’13, Denver, CO

I just returned from the 2013 International LiDAR Mapping Forum (ILMF ’13), where I gave a talk about LiDAR Viewer (which I haven’t previously written about here, but I really should). ILMF is primarily an event for industry exhibitors and LiDAR users from government agencies or private companies to meet. I only saw one other person from the academic LiDAR community there, and my talk stuck out like a sore thumb, too (see Figure 1).

Figure 1: Snapshot from towards the end of my talk at ILMF ’13, kindly provided by Marshall Millett. My talk was a bit off-topic for the rest of the conference and scheduled for 8:30 in the morning, which hopefully explains the sparse audience.


Is VR dead?

No, and it doesn’t even smell funny.

But let’s back up a bit. When it comes to VR, there are three prevalent opinions:

  1. It’s a dead technology. It had its day in the early nineties, and there hasn’t been anything new since. After all, the CAVE was invented in ’91 and is basically still the same, and head-mounted displays have been around even longer.
  2. It hasn’t been born yet. But maybe if we wait 10 more years, and there are some significant breakthroughs in display and computer technology, it might become interesting or feasible.
  3. It’s fringe technology. Some weirdos keep picking at it, but it hasn’t ever led to anything interesting or useful, and never will.


KeckCAVES on Mars, pt. 3

Yesterday, Wednesday, 01/09/2013, Michael Meyer, the lead scientist of NASA’s Mars Exploration Program, which includes the ongoing Curiosity rover mission, visited UC Davis as a guest of Dawn Sumner, the KeckCAVES scientist working on that same mission. Dr. Meyer held a seminar in the Geology department, and also gave an interview to one of our local newspapers, the Sacramento Bee.

As part of this visit, Dawn showed him the CAVE, and the Mars-related visualization work we have been doing, including Crusta Mars and our preliminary work with a highly detailed 3D model of the Curiosity rover.

I’m still on vacation, so I missed the visit. Bummer. 🙁

Visualizing the Sutter’s Mill meteorite

If you live in California, you probably recall the minivan-sized meteoroid that went kablooey over Northern California on April 22, 2012. In the months following the event, many meteorite pieces were collected and analyzed using a variety of physical and chemical means. Prof. Qing-zhu Yin of the UC Davis Department of Geology has been involved in the meteorite hunt from the start, and analyzed many pieces in his lab. He also collaborated with the UC Davis Center for Molecular and Genomic Imaging, where meteorite fragments were scanned using high-resolution X-ray computed tomography (CT) scanners, and the UC Davis McClellan Nuclear Research Center, where fragments were scanned using neutron beam CT scanners.

Qing-zhu, and a small army of other researchers, just published a Science paper about their work on the meteorite. Qing-zhu then asked me to create a few short movies showing 3D visualizations of several of those scans, from both flavors of CT, to accompany the release of the Science paper on 12/20/2012. I used our 3D Visualizer software, which is originally aimed at immersive environments such as CAVEs but works well on desktop workstations too, to load the 3D data sets and visualize them using direct volume rendering.
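For readers who haven’t encountered direct volume rendering: instead of extracting surfaces from the CT data, a viewing ray is cast through the volume for each pixel, and color and opacity contributions are accumulated along it. Below is a minimal CPU sketch of that front-to-back compositing loop, with a made-up synthetic volume and transfer function; it is only an illustration of the technique, not how 3D Visualizer is actually implemented.

```cpp
// Minimal front-to-back ray marching through a scalar volume.
// Illustrative only: synthetic volume and transfer function,
// orthographic rays along +z, no interpolation or shading.
#include <cstdio>
#include <cmath>
#include <vector>
#include <algorithm>

const int N = 64; // volume resolution (N x N x N)

// Synthetic "CT" volume: density falls off from the center outwards.
float sample(int x, int y, int z)
{
    float dx = x - N/2, dy = y - N/2, dz = z - N/2;
    float r = std::sqrt(dx*dx + dy*dy + dz*dz);
    return std::max(0.0f, 1.0f - r/(N/2)); // 1 at center, 0 at boundary
}

// Transfer function: map scalar value to gray level and opacity.
void transfer(float v, float& gray, float& alpha)
{
    gray = v;          // brighter = denser
    alpha = 0.05f * v; // mostly transparent, denser = more opaque
}

int main()
{
    std::vector<float> image(N*N);
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
        {
            float color = 0.0f, trans = 1.0f; // accumulated color, remaining transparency
            for (int z = 0; z < N && trans > 0.01f; ++z) // front-to-back compositing
            {
                float gray, alpha;
                transfer(sample(x, y, z), gray, alpha);
                color += trans * alpha * gray;
                trans *= (1.0f - alpha);
            }
            image[y*N + x] = color;
        }

    // Write the result as a PGM image for quick inspection.
    std::printf("P2\n%d %d\n255\n", N, N);
    for (int i = 0; i < N*N; ++i)
        std::printf("%d\n", int(std::min(1.0f, image[i]) * 255.0f));
    return 0;
}
```

The real thing does the same accumulation per screen pixel along perspective rays, with trilinear interpolation and a user-editable transfer function, but the compositing idea is exactly this.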


Immersive visualization of past ocean flow patterns

We are currently involved in an NSF-funded project to study the changes in global ocean flow patterns in response to past climate change, specifically the difference in flow patterns between the last glacial maximum (otherwise known as the “Ice Age”, ~25000 years ago) and the Holocene (otherwise known as “today”).

In layman’s terms, the basic idea is to use differences in chemical composition, particularly in the abundances of the carbon isotope 13C and the oxygen isotope 18O, between benthic core samples collected from the ocean floor all around the world to establish correlations between sampling sites, and from those correlations to derive a global flow model that best explains them. (By the way, 13C is not the carbon isotope used in radiocarbon dating; that honor goes to 14C.)
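To make the first half of that idea concrete, here is a toy sketch that computes a Pearson correlation between the (entirely made-up) isotope records of two hypothetical sampling sites. This only illustrates what “establishing correlations between sites” means in principle; the project’s actual flow reconstruction is Jake Gebbie’s inverse method, which is far more involved.

```cpp
// Toy example: Pearson correlation between the (made-up) d13C records
// of two hypothetical benthic core sites. Illustration only; the real
// flow reconstruction uses a much more sophisticated inverse model.
#include <cstdio>
#include <cmath>
#include <vector>

double pearson(const std::vector<double>& a, const std::vector<double>& b)
{
    size_t n = a.size();
    double ma = 0.0, mb = 0.0;
    for (size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double cov = 0.0, va = 0.0, vb = 0.0;
    for (size_t i = 0; i < n; ++i)
    {
        cov += (a[i] - ma) * (b[i] - mb);
        va += (a[i] - ma) * (a[i] - ma);
        vb += (b[i] - mb) * (b[i] - mb);
    }
    return cov / std::sqrt(va * vb);
}

int main()
{
    // Hypothetical d13C values (per mil) at matching depths/ages:
    std::vector<double> siteA = { 0.8, 0.7, 0.5, 0.3, 0.2, 0.4 };
    std::vector<double> siteB = { 0.9, 0.8, 0.6, 0.4, 0.3, 0.4 };
    std::printf("correlation between sites: %.3f\n", pearson(siteA, siteB));
    return 0;
}
```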

This is a multi-institution collaborative project. The core sample isotope ratios are collected and collated by Lorraine Lisiecki and her graduate students at UC Santa Barbara, and the mathematical method to reconstruct flow patterns based on those samples is developed by Jake Gebbie at Woods Hole Oceanographic Institution. Howard Spero at UC Davis is the overall principal investigator of the project, and UC Davis’ contribution is visualization and analysis software, building on the strengths of the KeckCAVES project. I’ve posted previously about our efforts to construct low-cost immersive display systems at our collaborators’ sites so that they can use the visualization software developed by us in its native habitat, and also collaborate with us and each other remotely in real-time using Vrui’s collaboration infrastructure.

So here is the first major piece of visualization software developed specifically for this project. It was developed by Rolf Westerteiger, a visiting PhD student from Germany, based on the Vrui VR toolkit. Here is Rolf himself, using his application in the CAVE:

PhD student Rolf Westerteiger using his immersive visualization application in the KeckCAVES CAVE.

This application reads a database of core sample compositions created by Lorraine Lisiecki, and a reconstructed 3D flow field created by Jake Gebbie, and puts both into a global three-dimensional context. The software shows a block model of the Earth’s global ocean floor (at the same resolution as the 3D flow field, and vertically exaggerated by a significant factor), and allows a user to interactively query and explore the 3D flow.

The primary flow visualization method is line integral convolution (LIC), which creates dense and intuitive visualizations of complex flows. As LIC works best when applied to 2D surfaces instead of 3D volumes, Rolf’s application is based on a set of interactively controllable surfaces (one sphere of constant depth, two cones of constant latitude, two semicircles of constant longitude) which slice through the implicitly-defined 3D LIC volume. To indicate flow direction, the LIC texture is animated by cycling through a phase offset, and color-coded by either flow velocity or water temperature.
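For readers who haven’t seen LIC before: each output pixel is obtained by averaging a random noise texture along the streamline passing through that pixel, which smears the noise along the flow and makes the flow structure visible. Here is a minimal 2D CPU sketch under simplifying assumptions (a synthetic circular flow field, fixed-step Euler streamline tracing, no animation); Rolf’s actual implementation works on curved slicing surfaces in 3D and runs on the GPU, as described below.

```cpp
// Minimal 2D line integral convolution (LIC) over a synthetic flow field.
// Each output pixel averages white noise along the local streamline,
// traced forward and backward with fixed-step Euler integration.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>
#include <algorithm>

const int W = 256, H = 256; // image size
const int L = 20;           // streamline half-length in steps

// Synthetic flow field: counterclockwise rotation around the image center.
void flow(float x, float y, float& vx, float& vy)
{
    vx = -(y - H/2);
    vy =  (x - W/2);
    float len = std::sqrt(vx*vx + vy*vy) + 1e-6f;
    vx /= len; vy /= len; // unit-length direction
}

int main()
{
    // White noise input texture.
    std::vector<float> noise(W*H);
    for (int i = 0; i < W*H; ++i)
        noise[i] = float(std::rand()) / RAND_MAX;

    std::vector<float> lic(W*H, 0.0f);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
        {
            float sum = 0.0f; int count = 0;
            for (int dir = -1; dir <= 1; dir += 2)  // backward and forward
            {
                float px = x + 0.5f, py = y + 0.5f;
                for (int s = 0; s < L; ++s)         // Euler streamline steps
                {
                    int ix = int(px), iy = int(py);
                    if (ix < 0 || ix >= W || iy < 0 || iy >= H) break;
                    sum += noise[iy*W + ix]; ++count;
                    float vx, vy;
                    flow(px, py, vx, vy);
                    px += dir * vx; py += dir * vy;
                }
            }
            lic[y*W + x] = sum / std::max(count, 1);
        }

    // Write the result as a PGM image.
    std::printf("P2\n%d %d\n255\n", W, H);
    for (int i = 0; i < W*H; ++i)
        std::printf("%d\n", int(lic[i] * 255.0f));
    return 0;
}
```

Animating the convolution kernel’s phase, as the application does to show flow direction, amounts to weighting the samples along the streamline with a periodic function and cycling its offset over time.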

The special thing about this LIC visualization is that the LIC textures are not pre-computed, but generated in real time using the GPU and a set of GLSL shaders. This allows for even more interactive exploration than shown in this first result; a user could specify arbitrary slicing surfaces using tracked 3D input devices, and see the LIC pattern displayed on those surfaces immediately. From our experience with the 3D Visualizer software, which is based on very similar principles, we believe that this will lead to a very powerful exploratory tool.

A secondary flow visualization method uses tracer particles, which can be injected into the global ocean at arbitrary positions using a tracked 3D input device, and which leave behind a trail of their past positions. Together, these two methods provide rich insight into the structure of the reconstructed flows, and especially into their evolution over geologic time.
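The particle tracing itself is conceptually simple: advect each particle through the velocity field with a fixed-step integrator and remember a bounded trail of past positions. Here is a minimal sketch of that idea with a synthetic 2D flow field and arbitrary step size and trail length; it is an illustration, not the application’s actual code.

```cpp
// Minimal particle tracer: advect particles through a synthetic 2D flow
// field with explicit Euler steps, keeping a trail of past positions.
#include <cstdio>
#include <vector>
#include <deque>

struct Vec2 { float x, y; };

// Synthetic velocity field: a simple rotating flow.
Vec2 velocity(const Vec2& p)
{
    return Vec2{ -p.y, p.x };
}

struct Particle
{
    Vec2 pos;
    std::deque<Vec2> trail; // past positions, newest at the back
};

int main()
{
    const float dt = 0.01f;      // integration time step (arbitrary)
    const size_t maxTrail = 100; // trail length (arbitrary)

    // Inject a few particles at arbitrary positions:
    std::vector<Particle> particles = {
        { {1.0f, 0.0f}, {} }, { {0.5f, 0.5f}, {} }, { {0.0f, 2.0f}, {} }
    };

    for (int step = 0; step < 1000; ++step)
        for (Particle& p : particles)
        {
            p.trail.push_back(p.pos);
            if (p.trail.size() > maxTrail)
                p.trail.pop_front();     // forget the oldest position
            Vec2 v = velocity(p.pos);
            p.pos.x += v.x * dt;         // explicit Euler step
            p.pos.y += v.y * dt;
        }

    for (const Particle& p : particles)
        std::printf("particle at (%.3f, %.3f), trail length %zu\n",
                    p.pos.x, p.pos.y, p.trail.size());
    return 0;
}
```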

A third visualization method is used to put the raw data that were used to create the flow models into context. A set of labels, one for each core sample in the database and each showing the relative abundances of the important isotopes, is mapped onto the virtual globe at the proper sample positions, so that the flow reconstruction can be visually checked against the raw data.

Unfortunately, Rolf had to return to Germany before we were able to film a video showing off all features of his visualization application, so I had to make a video with myself standing in for him:

The next development steps are to replace the ocean floor block model read from the flow file with a high-resolution bathymetry model (see below), and to integrate the visualization application with Vrui’s remote collaboration infrastructure such that it can be used by all collaborators for virtual joint data exploration sessions.

Global high-resolution bathymetry model at 75x vertical exaggeration. View is centered on Northern Atlantic.

KeckCAVES on Mars, pt. 2

I’ve already mentioned KeckCAVES‘ involvement in NASA‘s newest Mars mission, the Mars Science Laboratory, in a previous post, but now I have an update. Dawn Sumner, UC Davis‘ member of the Curiosity science team, was interviewed last week for “Onward California,” which I guess is some new system-wide outreach and public relations effort to get the public’s mind off last fall’s “unpleasantries.” Just kidding, UC, you know I love you.

Anyway… Dawn decided that the best way to talk about her work on Mars would be to do the interview in the CAVE, showing how our software, particularly Crusta Mars, was used during the planning stages of the mission, specifically landing site selection. I then suggested that it would be really nice to do part of the interview about the rover itself, using a life-size and high-resolution 3D model of the rover. So Dawn went to her contacts at the Jet Propulsion Laboratory, and managed to get us a very detailed 3D model, made of several million polygons and high-resolution textures, to load into the CAVE.

What someone posing with a life-size 3D model of the Mars Curiosity rover might look like.

As it so happens, I have a 3D mesh viewer that was able to load and render the model (which came in Alias|Wavefront OBJ format), albeit with some missing features, specifically specular highlights and bump mapping. The renderer is fast enough to draw the full, undecimated mesh at a frame rate sufficient for immersive display, around 30 frames per second.
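As an aside, Wavefront OBJ is a simple plain-text format: “v” lines carry vertex positions, “f” lines carry 1-based vertex indices per face, and separate statements reference normals, texture coordinates, and materials. A minimal reader for just the vertex and face subset might look like the sketch below; this is an illustration, not my actual mesh viewer, and it skips the material handling where specularity and bump maps would come in.

```cpp
// Minimal Wavefront OBJ reader: extracts vertex positions ('v' lines) and
// polygon vertex indices ('f' lines); ignores normals, texture
// coordinates, and material definitions.
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vertex { float x, y, z; };

int main(int argc, char* argv[])
{
    if (argc < 2) { std::fprintf(stderr, "usage: %s <mesh.obj>\n", argv[0]); return 1; }

    std::ifstream in(argv[1]);
    std::vector<Vertex> vertices;
    std::vector<std::vector<int>> faces;

    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v")                 // vertex position
        {
            Vertex v;
            ls >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        }
        else if (tag == "f")            // face: "f 1 2 3" or "f 1/2/3 4/5/6 ..."
        {
            std::vector<int> face;
            std::string corner;
            while (ls >> corner)
            {
                // Keep only the vertex index (before the first '/'); OBJ is 1-based.
                face.push_back(std::stoi(corner.substr(0, corner.find('/'))) - 1);
            }
            faces.push_back(face);
        }
        // Other tags (vn, vt, usemtl, mtllib, ...) are ignored in this sketch.
    }

    std::printf("read %zu vertices, %zu faces\n", vertices.size(), faces.size());
    return 0;
}
```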

The next problem, then, was how to film the beautiful rover model in the CAVE without making it look like garbage, another topic about which I’ve posted before. The film team, from the Department of the 4th Dimension, fortunately was on board, and filmed the interview in several segments, using hand-held and static camera setups.

We have pretty much figured out how to film hand-held video using a secondary head tracker attached to the camera, but static setups, where the camera is outside the CAVE and hence outside the tracking system’s range, always require a lot of trial and error. For good video quality, one has to precisely measure the 3D position of the camera lens relative to the CAVE and then configure that position in the CAVE software.
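The reason that measurement has to be so precise: each CAVE screen is rendered with an off-axis perspective frustum defined by the screen’s corner positions and the viewer’s (or, when filming, the camera lens’s) 3D position, and any error in that position makes the imagery on adjacent screens fail to line up. The sketch below shows the standard screen-corner construction of such a frustum, with hypothetical numbers; it is not Vrui’s actual configuration syntax or code.

```cpp
// Sketch of an off-axis ("generalized") perspective frustum for one CAVE
// screen, computed from the screen's corner positions and the position of
// the viewer -- or, when filming, the camera lens. Standard screen-corner
// construction; all numbers below are hypothetical.
#include <cstdio>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; }
Vec3 normalize(Vec3 a) { double l = std::sqrt(dot(a, a)); return { a.x/l, a.y/l, a.z/l }; }

int main()
{
    // Hypothetical back-wall screen corners (meters) and measured camera position:
    Vec3 pa = { -1.5, 0.0, -1.5 };  // lower-left corner
    Vec3 pb = {  1.5, 0.0, -1.5 };  // lower-right corner
    Vec3 pc = { -1.5, 3.0, -1.5 };  // upper-left corner
    Vec3 pe = {  0.3, 1.6,  2.0 };  // camera lens position (must be measured precisely)
    double n = 0.1, f = 100.0;      // near and far clipping distances

    // Screen-space basis: right, up, and normal (pointing toward the viewer).
    Vec3 vr = normalize(sub(pb, pa));
    Vec3 vu = normalize(sub(pc, pa));
    Vec3 vn = normalize(cross(vr, vu));

    // Vectors from the camera position to the screen corners.
    Vec3 va = sub(pa, pe), vb = sub(pb, pe), vc = sub(pc, pe);

    // Distance from the camera to the screen plane, and frustum extents at the near plane.
    double d = -dot(va, vn);
    double l = dot(vr, va) * n / d;
    double r = dot(vr, vb) * n / d;
    double b = dot(vu, va) * n / d;
    double t = dot(vu, vc) * n / d;

    // These are the values one would hand to glFrustum(l, r, b, t, n, f),
    // followed by a rotation into screen space and a translation by -pe.
    std::printf("frustum: l=%.4f r=%.4f b=%.4f t=%.4f near=%.2f far=%.2f\n",
                l, r, b, t, n, f);
    return 0;
}
```

A few millimeters of error in pe shift all four frustum extents, which is exactly why guesswork leads to the visible seams described next.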

Previously, I did that by guesstimating the camera position, entering the values into the configuration file, and then using a Vrui calibration utility to visually judge the setup’s correctness. This involves looking at the image, figuring out why it’s wrong, mentally changing the camera position to correct for the wrongness, editing the configuration file, and repeating the whole process until it looks OK. Quite annoying, that, especially if there’s an entire film crew sitting in the room checking their watches and rolling their eyes.

After that filming session, I figured that Vrui could use a more interactive way of setting up CAVE filming, a user interface to set up and configure several different filming modes without having to leave a running application. So I added a “filming support” vislet, and to properly test it, filmed myself posing and playing with the Curiosity rover (MSL Design Courtesy NASA/JPL-Caltech):

Pay particular attention to the edges and corners of the CAVE, and how the image of the 3D model and the image backdrop seamlessly span the three visible CAVE screens (left, back, floor). That’s what a properly set up CAVE video is supposed to look like. Also note that I set up the right CAVE wall to be rendered for my own point of view, in stereo, so that I could properly interact with the 3D model and knew what I was pointing at. Without such a split-CAVE setup, it’s very hard to use the CAVE when in filming mode.

The filming support vislet supports head-tracked recording, static recording, split-CAVE recording (where some screens are rendered for the user, and some for the camera), setting up custom light sources, and a draggable calibration grid and input device markers to simplify calibrating a static camera setup when the camera is outside the tracking system’s range and cannot be measured directly.

All in all, it works quite well, and is a significant improvement over the previous setup method. It is now possible to change filming modes and camera setups from within a running application, without having to exit, edit configuration files, and restart.

On the road for VR part I: UC Santa Barbara

I got interested in remote collaboration because I hate traveling, so it’s somewhat funny that I’ll be traveling all over the place in the near future to install remote collaboration-capable immersive display systems. I guess I brought it upon myself.

The first stop on the grand low-cost VR world tour was UC Santa Barbara, where I just finished installing a system following our blueprint, with some updates to account for the inexorable onward march of technology. Lorraine Lisiecki, of the UC Santa Barbara Department of Earth Science, is one of the collaborators on our NSF grant studying paleoclimate, and leads one of the two remote sites that will be equipped with one of our “holo-phone” prototypes; the other is Woods Hole Oceanographic Institution on Cape Cod, where I’ll travel next.

Here’s a video showing what it’s like to use a “holo-phone:”

In more detail, I installed a Sharp Aquos 70″ LCD 3D TV, a NaturalPoint OptiTrack 3-camera optical tracking system using their TrackingTools software, a Nintendo Wiimote as input device, and a PC with an Intel Core i7 “Ivy Bridge” CPU, an Nvidia GeForce GTX 680, 8GB RAM, a 128GB SSD, and Fedora 17 64-bit to run the display, and a little Dell with Windows 7 to run the tracking system (unfortunately, the tracking software neither runs under Wine nor in a virtual machine).

One of the biggest concerns going into this installation was how we would fit a 70″ 3D TV into a small faculty office. It’s not just that the TV itself is huge, but the tracking system cameras need to be mounted relatively far away from the TV so that their tracking volume covers the entire workspace in front of the TV. In the past, I have relied on high ceilings and mounted the cameras straight above the left and right edges of the TV and above the center, but that wasn’t an option in this case due to Lorraine’s office’s low 8.5 ft ceilings.

My solution was to push the left and right cameras further out, so that they look diagonally across the TV screen instead of straight down (see Figure 1). That turned out to cause more problems when designing “tracking antlers” for the tracked 3D glasses and the Wiimote, but it really helped increase the tracking volume (we managed to track the entire screen surface) and had the side benefit of slightly increasing tracking accuracy due to the larger stereo reconstruction baseline.

Figure 1: Position of the three OptiTrack cameras above the TV. Unlike in other installations, I pushed the cameras out to the sides to get a larger tracking volume in spite of the low ceiling.

Another, secondary, worry was the stereo quality of the TV. I had only seen this particular TV model in a store before, and due to the store environment and the low-depth, low-contrast 3D content typically shown in demos, it is impossible to judge how well a 3D TV will actually work once installed.

Stereo quality is primarily determined by the amount of cross-talk (or “ghosting”) between the left and right-eye views. Noticeable cross-talk interferes with the brain’s ability to fuse stereo pairs into 3D perception. LCD-based active-stereo 3D TVs such as this Sharp are problematic, because LCD pixels are relatively slow to switch from a full-on to a full-off state. In a high-contrast stereo pair, a pixel might have to switch from full black to full white 120 times per second, which leaves only about 8.3 ms per switch. While that doesn’t sound so bad, consider that in order to avoid cross-talk, a pixel that is white in one view and black in the other must look exactly as black as its neighboring pixel that’s black in both images, and as white as its other neighboring pixel that’s white in both images. That means the pixel must complete its switch during the short time in which the active stereo glasses do their own switch from opaque to transparent, or there will be a perceived brightness difference. The bottom line is that LCDs simply can’t do it, and the resulting stereo quality differs unpredictably between manufacturers, models, and potentially even individual units in the same model line. The work-arounds common in 3D movies, low contrast and limited eye separation, don’t work in immersive graphics: eye separation is directly determined by the user’s eye positions and not a free parameter, and high contrast is an important factor for effective scientific visualization.

That said, the stereo quality of this particular TV turned out to be OK, but not great. It is about on par with other LCD 3D TVs I’ve worked with and with passive-stereo 3D TVs (where cross-talk is caused by imperfect polarization, not pixel switch time), and significantly worse than DLP-based 3D projectors or projection TVs, where pixel switch time is simply not an issue and what little cross-talk there is comes from the 3D glasses. In the Nanotech Construction Kit, cross-talk manifested itself as a marble-like texture on the atomic building blocks, which looked strange but actually worked OK. In applications with point- or line-based rendering, white points or lines on a black background showed borderline-worrisome cross-talk.

One unexpected issue with the 3D TV was that it refused to accept the 1080p frame-packed HDMI 1.4a video signal that I tried to send it. Since the TV doesn’t support quincunx (“checkerboard”) interleaved stereo, I had to use a top-to-bottom stereo signal, which halves the vertical resolution per eye and causes anisotropy effects when using point or line primitives. Not ideal, but workable, and a firmware upgrade might fix it.

Apart from that, there were the usual off-site installation issues, requiring several trips to the hardware or computer store to solve. There’s always a missing cable or power strip, or a small broken widget. What took the cake in this installation was the mouse that shipped with the Dell tracking PC: someone at Dell must have clipped the mouse cable with a set of pliers (see Figure 2); the broken mouse was wrapped in a cut plastic bag inside an undamaged cardboard box together with the keyboard.

Figure 2: Someone at Dell must have had a bad case of the Mondays. At least it was a clean cut.

The first day went to assembling the main PC, installing Linux and all the goodies on it, and installing the tracking software on the Dell PC. Added difficulty: the tracking software insisted on calling home during installation, but getting a PC onto the network at UCSB requires intervention from IT support staff, who had left for the day. So I needed to set up ad-hoc NAT on the main PC, which only had a single network interface card. After an unsuccessful trip to Best Buy for a second network card, I found a way to do NAT over a single interface (whew!).

The second day was spent mounting the TV and tracking cameras on Lorraine’s office wall, with help from friendly staff (thanks, Tim!), and then aligning the tracking cameras and internally calibrating the system. The last part of that was calibrating the Vrui side of the system using a Leica TCR407 power Total Station, with help from Lorraine’s grad student Carlye. Just when I thought I was done for the day, the real trouble started, from unexpected problems calculating the calibration equations to random X server lock-ups and crashes caused by the Nvidia graphics driver. Fortunately, I found the cause of the former and a work-around for the latter.

The calibration issues were caused by the TV’s stupid default setting in which incoming native 1920×1080 HDMI images are zoomed and then resampled back down to fit the 1920×1080 display engine. Who ever thought this was a good idea? Anyway, the result was that we measured the zoomed 2D test pattern, but because 3D mode doesn’t zoom, we got a screen size mismatch.

So we had to re-do the screen part of the calibration on the third day; fortunately, I hadn’t moved or taken down the Total Station yet, so we didn’t have to re-do the other two parts. The rest of the third day was spent cleaning up by moving the computers to their temporary “final” position (see Figure 3), sorting out several odds and ends, installing support scripts and launcher icons, and, most importantly, training the new users (Lorraine and Carlye, see Figure 4).

Figure 3: Picture of the 3D system’s temporary “final” installation with dangling cables and the main and tracking computers jammed into a corner, using an old chair to hold the keyboard, mouse, and monitor. This will be cleaned up in the coming days after a trip to the furniture store to get a proper computer desk.

Figure 4: Lorraine (standing) and Carlye (wearing the tracked 3D glasses and holding the Wiimote input device) playing with the Nanotech Construction Kit during initial user training.

In the final analysis, it took three full days (8:15am – 10:15pm, 7:30am – 2:00am, 9:15am – 5:00pm) to build, install, and calibrate the system entirely from scratch and to give the users enough initial training to get them going, without any site preparation except having the components already on site. There are a few outstanding issues, such as the Wiimote requiring a push of its reset button instead of the 1+2 buttons to connect, but those can be solved remotely in the future.

It turned out to be a very good system, with OK stereo quality, ample screen space, a very good tracking volume, and excellent tracking calibration (no more than 1 mm discrepancy between tracking marker and virtual indicator over the entire space). The TrackingTools software has improved a lot since the early versions I use in my own system; after moving away from Euler angles to represent orientations, it no longer suffers from gimbal lock, and performance and accuracy have increased overall. The software is also a lot more automated, requiring literally zero interaction after double-clicking the icon to get up and running.
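As a side note for readers wondering what gimbal lock is: with Euler angles, certain orientations collapse two of the three rotation axes onto each other, so distinct angle triples describe the same orientation and one rotational degree of freedom is momentarily lost. The toy example below demonstrates this numerically for a Z-Y-X Euler convention; it is my own illustration and says nothing about TrackingTools’ internals.

```cpp
// Demonstration of gimbal lock with Z-Y-X Euler angles: at 90 degrees of
// pitch, yaw and roll rotate about the same effective axis, so the two
// distinct angle triples below produce the same rotation matrix.
#include <cstdio>
#include <cmath>

struct Mat3 { double m[3][3]; };

Mat3 mul(const Mat3& a, const Mat3& b)
{
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat3 rotX(double a) { double c = std::cos(a), s = std::sin(a); return {{{1,0,0},{0,c,-s},{0,s,c}}}; }
Mat3 rotY(double a) { double c = std::cos(a), s = std::sin(a); return {{{c,0,s},{0,1,0},{-s,0,c}}}; }
Mat3 rotZ(double a) { double c = std::cos(a), s = std::sin(a); return {{{c,-s,0},{s,c,0},{0,0,1}}}; }

// Compose a rotation from yaw (Z), pitch (Y), and roll (X) Euler angles.
Mat3 fromEuler(double yaw, double pitch, double roll)
{
    return mul(rotZ(yaw), mul(rotY(pitch), rotX(roll)));
}

void print(const char* label, const Mat3& r)
{
    std::printf("%s:\n", label);
    for (int i = 0; i < 3; ++i)
        std::printf("  %7.4f %7.4f %7.4f\n", r.m[i][0], r.m[i][1], r.m[i][2]);
}

int main()
{
    const double PI = 3.14159265358979323846;
    const double deg = PI / 180.0;
    // With pitch at 90 degrees, (yaw=30, roll=0) and (yaw=0, roll=-30)
    // yield identical matrices: yaw and roll are no longer independent.
    print("yaw=30, pitch=90, roll=0  ", fromEuler(30*deg, 90*deg, 0));
    print("yaw=0,  pitch=90, roll=-30", fromEuler(0, 90*deg, -30*deg));
    return 0;
}
```

Quaternions (or rotation matrices carried through the whole pipeline) avoid this degeneracy, which is presumably why the switch helped.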

Having run into a lot of new issues, I am hoping that knowing about them ahead of time will make the next off-site installation go more smoothly and take less time. Here’s hoping.