I have talked about KeckCAVES’ involvement in the Curiosity Mars rover mission several times before, but I just found a set of cool pictures that I have not shared yet. Earlier today I saw a reddit thread about a VR application to walk on the Moon; one commenter asked about doing the same for Mars, and one thing led to another.
Can an application like that be done for Mars? Do we have enough data, and are the data publicly available? The answers are “yes, already done,” “kinda,” and “yes, but,” respectively.
As of my last check, there are two main sources of topography data for Mars. The older source is an orbital laser range survey done by the Mars Orbiter Laser Altimeter (MOLA). This is essentially a planetary-scale LiDAR scan, and it can be visualized using LiDAR Viewer. The two pictures I mentioned above are these (Figures 1 and 2):
Figure 1: Global visualization of Mars topography using the MOLA data set, rendered using LiDAR Viewer. Vertical scale is 5:1.
Figure 2: Close-up of global Mars topography data set (centered on the canals), showing individual laser returns as grey dots. The scan lines corresponding to individual orbital periods can clearly be identified. Vertical scale is 5:1.
Figure 3: Dawn Sumner, member of the NASA Curiosity Mars rover mission’s science team, interacting with a life-size 3D model of the rover in the UC Davis KeckCAVES holographic display environment. Still image taken from “The surface of Mars.”
The most interesting aspect of this talk, for me, was that the art project, and all the software development for it, was done by the “other” part of the KeckCAVES project: the more mathematically inclined, complex-systems-aligned cluster around Jim Crutchfield of UC Davis’ Complexity Sciences Center and his post-docs and graduate students. In practice, this means that I saw some of the software for the first time, and also heard about some problems the developers ran into that I was completely unaware of. This is interesting because it means that the Vrui VR toolkit, on which all this software is based, is maturing from a private pet project into something that’s actually being used by parties who are not directly collaborating with me.
I’ve already mentioned KeckCAVES’ involvement in NASA’s newest Mars mission, the Mars Science Laboratory, in a previous post, but now I have an update. Dawn Sumner, UC Davis’ member of the Curiosity science team, was interviewed last week for “Onward California,” which I guess is some new system-wide outreach and public relations effort to get the public’s mind off last fall’s “unpleasantries.” Just kidding, UC, you know I love you.
Anyway… Dawn decided that the best way to talk about her work on Mars would be to do the interview in the CAVE, showing how our software, particularly Crusta Mars, was used during the planning stages of the mission, specifically landing site selection. I then suggested that it would be really nice to do part of the interview about the rover itself, using a life-size and high-resolution 3D model of the rover. So Dawn went to her contacts at the Jet Propulsion Laboratory, and managed to get us a very detailed 3D model, made of several million polygons and high-resolution textures, to load into the CAVE.
What someone posing with a life-size 3D model of the Mars Curiosity rover might look like.
As it so happens, I have a 3D mesh viewer that was able to load and render the model (which came in Alias|Wavefront OBJ format), albeit with some missing features, specifically specular highlights and bump mapping. The renderer is fast enough to draw the full, undecimated mesh at a frame rate sufficient for immersive display, around 30 frames per second.
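For the curious: at its core, the OBJ format is just a plain-text list of vertices and faces, which is one reason it is such a common interchange format. Here is a minimal sketch of a triangle-only loader; this is not my viewer’s actual code, and it ignores everything (normals, texture coordinates, materials) that makes the rover model look good:

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Mesh {
        std::vector<Vec3> vertices;
        std::vector<unsigned int> indices; // three per triangle
    };

    // Minimal OBJ reader: "v x y z" lines define vertices, "f i j k"
    // lines define triangles with 1-based vertex indices. Polygons with
    // more than three vertices and negative (relative) indices are not
    // handled here.
    bool loadObj(const char* fileName, Mesh& mesh) {
        std::ifstream in(fileName);
        if (!in) return false;
        std::string line;
        while (std::getline(in, line)) {
            std::istringstream ls(line);
            std::string tag;
            ls >> tag;
            if (tag == "v") {
                Vec3 v;
                ls >> v.x >> v.y >> v.z;
                mesh.vertices.push_back(v);
            } else if (tag == "f") {
                // Face entries may look like "1/2/3"; only the vertex
                // index before the first slash matters here.
                unsigned int idx[3];
                for (int i = 0; i < 3; ++i) {
                    std::string vert;
                    ls >> vert;
                    idx[i] = (unsigned int)std::stoul(vert.substr(0, vert.find('/'))) - 1;
                }
                mesh.indices.insert(mesh.indices.end(), idx, idx + 3);
            }
            // Everything else (vn, vt, mtllib, usemtl, ...) is skipped.
        }
        return true;
    }

Parsing is the easy part; to sustain 30 frames per second on a mesh of several million triangles, the vertex and index arrays would typically be uploaded into GPU buffer objects once and then drawn with a single indexed draw call per frame.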
The next problem, then, was how to film the beautiful rover model in the CAVE without making it look like garbage, another topic about which I’ve posted before. The film team, from the Department of the 4th Dimension, was fortunately on board, and filmed the interview in several segments, using hand-held and static camera setups.
We have pretty much figured out how to film hand-held video using a secondary head tracker attached to the camera, but static setups where the camera is outside the CAVE, and hence outside the tracking system’s range, always take a lot of trial and error to set up. For good video quality, one has to precisely measure the 3D position of the camera lens relative to the CAVE and then configure that in the CAVE software.
I used to do that by guesstimating the camera position, entering the values into the configuration file, and then using a Vrui calibration utility to visually judge the setup’s correctness. This involves looking at the image, figuring out why it’s wrong, mentally changing the camera position to correct for the wrongness, editing the configuration file, and repeating the whole process until it looks OK. Quite annoying, that, especially if there’s an entire film crew sitting in the room checking their watches and rolling their eyes.
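To see why the lens position matters so much: each CAVE screen is rendered with an off-axis perspective frustum whose apex sits exactly at the viewpoint, which during static filming is the camera lens. If the assumed lens position is off, straight lines visibly kink at the screen edges. The construction of such a frustum from a screen’s corner points is standard (often called generalized perspective projection); the following sketch is illustrative and not Vrui’s actual code:

    #include <cmath>

    struct V3 { double x, y, z; };
    static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static V3 cross(V3 a, V3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    static V3 unit(V3 a) { double l = std::sqrt(dot(a, a)); return {a.x/l, a.y/l, a.z/l}; }

    // Off-axis frustum for one CAVE screen as seen from 'eye' (the
    // measured camera lens position). pa, pb, pc are the screen's
    // lower-left, lower-right, and upper-left corners in CAVE coordinates.
    void screenFrustum(V3 pa, V3 pb, V3 pc, V3 eye, double zNear,
                       double& left, double& right, double& bottom, double& top) {
        V3 vr = unit(sub(pb, pa));   // screen right axis
        V3 vu = unit(sub(pc, pa));   // screen up axis
        V3 vn = unit(cross(vr, vu)); // screen normal, pointing at the eye
        // Vectors from the eye to the screen corners:
        V3 va = sub(pa, eye), vb = sub(pb, eye), vc = sub(pc, eye);
        double d = -dot(vn, va);     // eye-to-screen-plane distance
        // Frustum extents on the near plane, glFrustum-style:
        left   = dot(vr, va) * zNear / d;
        right  = dot(vr, vb) * zNear / d;
        bottom = dot(vu, va) * zNear / d;
        top    = dot(vu, vc) * zNear / d;
    }

Since every screen’s frustum is derived from the same eye point, even a few centimeters of error in the configured lens position make the per-screen images disagree along the shared screen edges, which is exactly the artifact one hunts down during calibration.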
After that filming session, I figured that Vrui could use a more interactive way of setting up CAVE filming: a user interface to set up and configure several different filming modes without having to leave a running application. So I added a “filming support” vislet, and to properly test it, filmed myself posing and playing with the Curiosity rover (MSL Design Courtesy NASA/JPL-Caltech):
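In Vrui, a vislet is a small plug-in that can be loaded into any running application. Conceptually, it gets a state-update hook once per frame and a rendering hook for each view, and draws on top of whatever the application itself renders. Schematically, with illustrative names only (the real Vrui vislet interface differs in its details):

    #include <GL/gl.h>

    // Schematic filming-support vislet: updates filming state once per
    // frame and overlays filming aids over the host application.
    // (Illustrative only; not Vrui's actual vislet API.)
    class FilmingSupport {
    public:
        void frame() {
            // Poll the vislet's UI for mode changes: head-tracked vs.
            // static camera, split-CAVE screen assignment, light sources.
        }
        void display() const {
            // Example overlay: a draggable calibration grid, drawn as an
            // 11x11 line grid in the plane currently chosen by the user.
            glBegin(GL_LINES);
            for (int i = -5; i <= 5; ++i) {
                glVertex3f(float(i), -5.0f, 0.0f); glVertex3f(float(i), 5.0f, 0.0f);
                glVertex3f(-5.0f, float(i), 0.0f); glVertex3f(5.0f, float(i), 0.0f);
            }
            glEnd();
        }
    };

The point of such a grid is that it gives the camera something with known geometry to look at: when the configured lens position is correct, the grid lines run straight across screen boundaries in the filmed image.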
Pay particular attention to the edges and corners of the CAVE, and how the image of the 3D model and the image backdrop seamlessly span the three visible CAVE screens (left, back, floor). That’s what a properly set up CAVE video is supposed to look like. Also note that I set up the right CAVE wall to be rendered for my own point of view, in stereo, so that I could properly interact with the 3D model and knew what I was pointing at. Without such a split-CAVE setup, it’s very hard to use the CAVE when in filming mode.
The filming support vislet supports head-tracked recording, static recording, split-CAVE recording (where some screens are rendered for the user, and some for the camera), setting up custom light sources, and a draggable calibration grid and input device markers to simplify calibrating a static camera setup when the camera is outside the tracking system’s range and cannot be measured directly.
All in all, it works quite well, and is a significant improvement over the previous setup method. It is now possible to change filming modes and camera setups from within a running application, without having to exit, edit configuration files, and restart.
You might have heard that NASA has a new rover on Mars. What you might not know is that KeckCAVES is quite involved with that mission. One of KeckCAVES’ core scientists, Dawn Sumner, is a member of the Curiosity science team. Dawn talks about her experiences as tactical long-term planner for the rover’s science mission, and as co-investigator on several of the rover’s cameras, on her blog, Dawn on Mars.
Immersive 3D visualization has been used at several stages of mission planning and preparation, including the selection of the rover’s landing site. Crusta, the virtual globe software developed by KeckCAVES, was used to create a high-resolution global topography model of Mars, merging the best-quality data available for the entire planet and for each of the originally proposed landing sites. Crusta runs in immersive 3D display environments such as KeckCAVES’ CAVE, letting users virtually walk on the surface of Mars at 1:1 (or any other) scale and create maps by drawing directly on the 3D surface. These capabilities were important in weighing the relative merits of the four proposed sites from engineering and scientific viewpoints.
Dawn made the following video after Gale Crater, her preferred landing site, had been selected for the mission, to illustrate her rationale. The video is stereoscopic and can be viewed using red/blue anaglyphic glasses or several other stereo viewing methods:
We filmed this video entirely virtually. Dawn was using Crusta in a low-cost immersive 3D environment based on a 3D TV, which means she perceived Crusta’s Mars model as a tangible 3D object, interacted with it via natural gestures using an optically-tracked Nintendo Wii controller as input device, and pointed out features of interest on the surface using her fingers. Dawn herself was filmed by two Kinect 3D video cameras, and the combination of virtual Mars and virtual Dawn was rendered into a stereo movie file in real time while she was working with the software.
Now that Curiosity is on Mars, we are planning to continue using Crusta to visualize and evaluate its progress, and we hope that Crusta will soon help plan and execute the rover’s journey up Mt. Sharp (NASA has its own 3D path planning software, but we believe Crusta has useful complementary features).
Furthermore, as the rover progresses, it will send high-resolution stereo images from its mast-mounted navigation camera. Several KeckCAVES developers are working on software to convert these stereo images into ultra-high resolution digital terrain models, and to register these to, and integrate them with, Crusta’s existing Mars topography model as they become available.
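At the core of that conversion is stereo triangulation: a surface feature that appears with horizontal disparity d (in pixels) between the left and right images of a camera pair with focal length f (in pixels) and stereo baseline B lies at depth Z = f·B/d. Here is a minimal sketch of that reconstruction step, leaving out the hard parts (dense feature matching, outlier filtering, and transforming points into Mars-fixed coordinates); the function and its parameters are illustrative, not our actual pipeline:

    struct Point3 { double x, y, z; };

    // Reconstruct a 3D point in the left camera's frame from a stereo match.
    // f: focal length in pixels; B: stereo baseline in meters;
    // (cx, cy): principal point; (xl, y): pixel position in the left image;
    // disparity: xl - xr in pixels, which must be > 0 for a valid match.
    Point3 triangulate(double f, double B, double cx, double cy,
                       double xl, double y, double disparity) {
        double Z = f * B / disparity; // depth along the optical axis
        double X = (xl - cx) * Z / f; // lateral offset
        double Y = (y - cy) * Z / f;  // vertical offset
        return {X, Y, Z};
    }

Because depth goes as 1/d, depth uncertainty grows quadratically with distance, which is why such terrain patches are by far the densest and most accurate within a few meters of the rover.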
We already tried this process with stereo imagery from the previous two Mars rovers, Spirit and Opportunity. We took the highest-resolution orbital topography data available, collected by the HiRISE camera, and merged it with the rover data, which is approximately 1000 times denser. The following figure shows the result (click to embiggen):
The white arrow in panel A shows the location of the rover’s high-resolution data patch shown in panels B and C. In panel C, a stratum of rock — identified by its different color — was selected, and a plane was fit to the selected points (highlighted in green) to measure the stratum’s bedding angle.
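Fitting a plane to a selected patch of points, as in panel C, is a textbook total-least-squares problem: the best-fit plane passes through the points’ centroid, and its normal is the eigenvector of the points’ covariance matrix with the smallest eigenvalue. The bedding angle is then the angle between that normal and the local vertical. Here is a sketch of that standard construction using the Eigen library (I am not claiming this is LiDAR Viewer’s actual implementation):

    #include <cmath>
    #include <vector>
    #include <Eigen/Dense>

    // Fit a plane to the selected points and return the bedding (dip)
    // angle in degrees, i.e., the tilt of the plane from horizontal.
    double beddingAngle(const std::vector<Eigen::Vector3d>& points) {
        // Centroid of the selection:
        Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
        for (const auto& p : points) centroid += p;
        centroid /= double(points.size());

        // 3x3 covariance (scatter) matrix of the selection:
        Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
        for (const auto& p : points) {
            Eigen::Vector3d d = p - centroid;
            cov += d * d.transpose();
        }

        // The plane normal is the eigenvector with the smallest eigenvalue;
        // Eigen returns the eigenvalues in increasing order.
        Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
        Eigen::Vector3d normal = solver.eigenvectors().col(0);

        // Dip angle: angle between the (unit) normal and the up direction.
        double cosTilt = std::abs(normal.dot(Eigen::Vector3d::UnitZ()));
        return std::acos(cosTilt) * 180.0 / M_PI;
    }

The same select-then-fit interaction works for any roughly planar feature; what makes it convenient in an immersive display is that the selection can be made directly in 3D with a tracked input device instead of on a 2D screen.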
The above images were created with LiDAR Viewer, another KeckCAVES software package. LiDAR Viewer is used to visually analyze very large 3D point clouds, such as those resulting from laser scanning surveys, or, in this case, orbital and terrestrial stereo imagery.
The terrain data we expect from Curiosity’s stereo cameras will be even higher resolution than that. The end result will be an integrated global Martian topography model with local patches down to millimeter resolution, allowing a scientist in the CAVE to virtually pick up individual pebbles.