Idle Hands etc. etc.

A friendly redditor sent me this link to a popular post on /r/funny yesterday (see Figure 1 for the picture). I might have mentioned before that it was that exact scene in the original Star Wars movie that got me into 3D computer graphics and later VR, so it got me thinking about what that particular shot would have looked like if the miniature ILM used to film the trench run scene had not been flat, but had exhibited the properly scaled curvature of the Death Star.

Figure 1: Death Star and trench from attack scene in A New Hope, showing the flat miniature that was used to shoot the scene. Source.

Two hours and 153 lines of code later, here are a couple of images that are hopefully true to scale. I used 160km as the Death Star’s diameter, based on its Wookieepedia entry (Wikipedia states 120km, but I’m siding with the bigger nerds here), and I assumed the meridian trench’s width and depth to be 50m, based on the size of an X-Wing fighter and shot compositions from the movie.

Side note: I don’t know how common this misconception is, but the trench featured in the trench run scenes is not the equatorial trench prominently visible in Figure 1. That one holds massive hangars (as seen in the scene where the Millennium Falcon is tractor-beamed into the Death Star) and is vastly larger than the actual trench, which is a meridian (north-south running) trench on the Death Star’s northern hemisphere, as clearly visible on-screen during the pre-attack briefing (but then, who ever pays attention in briefings).

The images in Figures 2-6 are 3840×2160 pixels. Right-click and select “View Image” to see them at full size.

Figure 2: Meridian trench on spherical Death Star, approx. 12.5m above trench floor. Horizon distance: 1.4km.
Figure 3: Meridian trench on spherical Death Star, approx. 25m above trench floor. Horizon distance: 2km.
Figure 4: Meridian trench on spherical Death Star, approx. 37.5m above trench floor. Horizon distance: 2.5km.
Figure 5: Meridian trench on spherical Death Star, precisely at Death Star’s surface. Horizon distance: 2.8km.
Figure 6: Meridian trench on spherical Death Star, approx. 100m above Death Star’s surface. Horizon distance: 4.9km.

As can be seen from Figures 2-6, the difference between the flat miniature used in the movie and the spherical model I used is relatively minor, but noticeable — ignoring the glaring lack of greebles in my model, obviously. I noticed the lack of curvature for the first time while re-watching A New Hope when the prequels came out, but I can’t say I ever cared. Still, this was a good opportunity for some recreational coding.
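For reference, the horizon distances quoted in the figure captions follow from the standard tangent-line formula d = √(2rh + h²), where r is the radius of the sphere the viewer stands on and h the eye height above it. Here is a minimal Python sketch (my own, not part of the original 153 lines, and assuming the 160km diameter and 50m trench depth from above) that roughly reproduces those numbers:

```python
import math

R = 80000.0          # assumed Death Star radius in meters (160km diameter)
TRENCH_DEPTH = 50.0  # assumed depth of the meridian trench in meters

def horizon_distance(height_above_floor):
    """Straight-line distance to the horizon for an eye point the given
    number of meters above the trench floor, treating the floor as a
    smooth sphere of radius R - TRENCH_DEPTH."""
    r = R - TRENCH_DEPTH
    h = height_above_floor
    return math.sqrt(2.0 * r * h + h * h)

# Eye heights above the trench floor used for Figures 2-6:
for h in (12.5, 25.0, 37.5, 50.0, 150.0):
    print(f"{h:6.1f}m above trench floor: horizon at roughly {horizon_distance(h):.0f}m")
```

For eye heights much smaller than the radius this reduces to the familiar d ≈ √(2rh), which is why doubling the altitude only pushes the horizon out by a factor of about 1.4.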

Quantitative Comparison of VR Headset Fields of View

Although I’ve taken many through-the-lens pictures of several common VR headsets with a calibrated wide-angle camera, until recently I was still struggling with how to compare the resulting fields of view (FoV) quantitatively, how to put them in context, and how to visualize them appropriately. When trying to answer questions like “which VR headset has a bigger FoV?” or “by how much is headset A’s FoV bigger than headset B’s?” or “how does headset C’s FoV compare to the field of vision of the naked human eye?”, the basic question is: how does one even measure field of view in a way that is fair and allows comparison across a wide range of possible sizes? Does one report a single angle, and how does one measure it? Across the “diagonal” of the field of view? What if the field of view is not a rectangle? Does one report a pair of angles, say horizontal⨉vertical? Again, what if the field of view is not a rectangle?

Then, if FoV is measured either as a single angle or a pair of angles, how does one compare different FoVs fairly? If one headset has 100° FoV, and another has 110°, does the latter show 10% more of a virtual 3D environment? What if one has 100°⨉100° and another has 110°⨉110°, does the latter show 21% more?

To find a reasonable answer, let’s go back to the basics: what does FoV actually measure? The general idea is that FoV measures how much of a virtual 3D environment a user can see at any given instant, meaning, without moving their head. A larger FoV value should mean that a user can see more, and, ideally, an FoV value that is twice as large should mean that a user can see twice as much.

Now, what does it mean that “something can be seen”? We can see something if light from it reaches our eye, enters the eye through the cornea, pupil, and lens, and finally hits the retina. In principle, light travels towards our eyes from all possible directions, but light from only some of those directions ends up on the retina, due to various obstructions (we can’t see behind our heads, for example). So a reasonable measure of field of view (for one eye) would be the total number of different 3D directions from which light reaches that eye’s retina. The problem is that there is an infinite number of different directions from which light can arrive, so simple counting does not work.
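The natural way to make this precise is to measure the set of visible directions as a solid angle on the unit sphere, in steradians. As a quick illustration of why comparing angle pairs directly can mislead, here is my own sketch for an idealized rectangular FoV (not necessarily the metric the full post settles on, and real headset FoVs are not rectangles), using the standard rectangular-pyramid formula Ω = 4·arcsin(sin(a/2)·sin(b/2)):

```python
import math

def rect_fov_solid_angle(h_deg, v_deg):
    """Solid angle in steradians of an idealized rectangular field of view
    with full horizontal/vertical opening angles h_deg x v_deg, using the
    rectangular-pyramid formula omega = 4*arcsin(sin(h/2)*sin(v/2))."""
    h = math.radians(h_deg)
    v = math.radians(v_deg)
    return 4.0 * math.asin(math.sin(0.5 * h) * math.sin(0.5 * v))

a = rect_fov_solid_angle(100.0, 100.0)  # ~2.51 sr
b = rect_fov_solid_angle(110.0, 110.0)  # ~2.94 sr
print(f"100x100: {a:.2f} sr, 110x110: {b:.2f} sr, ratio {b / a:.2f}")
# The 110x110 FoV covers about 17% more solid angle than 100x100,
# not the 21% a naive (110/100)^2 comparison would suggest.
# For reference, the full sphere is 4*pi, roughly 12.57 sr.
```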

Continue reading

KeckCAVES On Mars, pt. Oh-I-lost-count

Last weekend, we had yet another professional film crew visiting us to shoot video about our involvement in NASA’s still-ongoing Mars Science Laboratory (MSL, aka Curiosity rover) mission. This time, they were here to film parts of an upcoming 90-minute special about Mars exploration for the National Geographic TV channel. Like last time, the “star” of the show was Dawn Sumner, a faculty member in the UC Davis Department of Earth and Planetary Sciences, one of the founding members of KeckCAVES, and a member of the MSL science team.

Unlike last time, we did not film in the KeckCAVES facility itself (due to the demise of our CAVE), but in the UC Davis ModLab. ModLab is part of an entirely different unit, UC Davis’s Digital Humanities initiative, but we work closely with them on VR development. They have a nice VR environment consisting of two HTC Vive headsets and a large 4.2m x 2.4m screen with a ceiling-mounted ultra-short-throw projector (see Figure 1), their VR hardware runs our VR software, and they were kind enough to let us use their space.

Figure 1: Preparation for filming in UC Davis’s ModLab, showing its 4.2m x 2.4m front-projected screen and ceiling-mounted ultra-short throw projector, and two Lighthouse base stations.

The fundamental idea here was to use several 3D models, created or reconstructed from real data sent back either by satellites orbiting Mars or by the Curiosity rover itself, as backdrops to let Dawn talk about the goals and results of the MSL mission, and her personal involvement in it. Figure 1 shows a backdrop in the real sense of the word, i.e., a 2D picture (a photo taken by Curiosity’s mast camera) with someone standing in front of it, but that was not the idea here (we didn’t end up using that photo). Instead, Dawn was talking while wearing a VR headset and interacting with the 3D models she was talking about, with a secondary view of the virtual world, from the point of view of the film camera, shown on the big screen behind her. More on that later.

Continue reading

VR medical visualization with 3D Visualizer

Now that Vrui is working on the HTC Vive (at least until the next SteamVR update breaks ABI again), I can finally go back and give Vrui-based applications some tender loving care. First up is 3D Visualizer, an application to visualize and, more importantly, visually analyze three-dimensional volumetric data sets (see Figure 1).

Figure 1: Analyzing a CAT scan with 3D Visualizer on the HTC Vive. Cat included.

Continue reading

Archaeologists use LiDAR to find lost cities in Honduras

I wasn’t able to talk about this before, but now I guess the cat’s out of the bag. About two years ago, we helped a team of archaeologists and filmmakers to visualize a very large high-resolution aerial LiDAR scan of a chunk of dense Honduran rain forest in the CAVE. Early analyses of the scan had found evidence of ruins hidden under the foliage, and using LiDAR Viewer in the CAVE, we were able to get a closer look. The team recently mounted an expedition, and found untouched remains of not one, but two lost cities in the jungle. Read more about it at National Geographic and The Guardian. I want to say something cool and Indiana Jones-like right now, but I won’t.

Figure 1: A “were-jaguar” effigy, likely representing a combination of a human and spirit animal, is part of a still-buried ceremonial seat, or metate, one of many artifacts discovered in a cache in ruins deep in the Honduran jungle.
Photograph by Dave Yoder, National Geographic. Full-resolution image at National Geographic.

Continue reading

I Can’t Ever Get Over This Mars Thing, Can I?

I have talked about KeckCAVES’ involvement in the Curiosity Mars rover mission several times before, but I recently found a set of cool pictures that I have not shared yet. I saw a reddit thread about a VR application to walk on the Moon, one commenter asked about doing the same for Mars, and one thing led to another.

Can an application like that be done for Mars? Do we have enough data, and are the data publicly available? The answers are “yes, already done,” “kinda,” and “yes, but,” respectively.

As of my last check, there are two main sources of topography data for Mars. The older source is an orbital laser ranging survey done by the Mars Orbiter Laser Altimeter (MOLA). This is essentially a planetary LiDAR scan, and it can be visualized using LiDAR Viewer. The two pictures I mentioned above are these (Figures 1 and 2):

Figure 1: Global visualization of Mars topography using the MOLA data set, rendered using LiDAR Viewer. Vertical scale is 5:1.

Figure 2: Close-up of global Mars topography data set (centered on the canals), showing individual laser returns as grey dots. The scan lines corresponding to individual orbital periods can clearly be identified. Vertical scale is 5:1.

Continue reading

More on Desktop Embedding via VNC

I started regretting uploading my “Embedding 2D Desktops into VR” video, and the post describing it, pretty much right after I did it, because there was such an obvious thing to do, and I didn’t think of it.

Figure 1: Screenshot from video showing VR ProtoShop run simultaneously in a 3D environment created by an Oculus Rift and a Razer Hydra, and in a 2D environment using mouse and keyboard, brought into the 3D environment via the VNC remote desktop protocol.

Continue reading

2D Desktop Embedding via VNC

There have been several discussions on the Oculus subreddit recently about how to integrate 2D desktops or 2D applications with 3D VR environments; for example, how to check your Facebook status while playing a game in the Oculus Rift without having to take off the headset.

This is just one aspect of the larger issue of integrating 2D and 3D applications, and it reminded me that it was about time to revive the old VR VNC client that Ed Puckett, an external contractor, had developed for the CAVE a long time ago. There have been several important changes in Vrui since the VNC client was written, especially in how Vrui handles text input, which means that a completely rewritten client could use the new Vrui APIs instead of having to implement everything ad-hoc.

Here is a video showing the new VNC client in action, embedded into LiDAR Viewer and displayed in a desktop VR environment using an Oculus Rift HMD, mouse and keyboard, and a Razer Hydra 6-DOF input device:

Continue reading

KeckCAVES in the News

A cluster of earthquakes always gets the news media interested in geology, at least for a short time, and Monday’s 4.4 in southern California, following last week’s series of north coast quakes of up to 6.9, was no different. Our local media’s go-to guy for earthquakes and other natural hazards is Dr. Gerald Bawden of the USGS Sacramento. Gerald also happens to be one of the main users of the KeckCAVES visualization facility and KeckCAVES software, and so he gave an interview to our local Fox affiliate in the CAVE, “to get out of the wind,” as he put it.

Here’s the video. Caution: ads after the jump.

Continue reading

An Early Easter Egg

I always love it when an image I made, or a photograph I took, pops up in an unexpected context. I have recently been drafted for a campus committee, the Campus Council for Information Technology, and the guest speaker in today’s meeting, Patrice Koehl, gave a presentation on Big Data, data analytics, the need for data centers, the lack of collaboration between computer scientists and other scientists, etc. Among his slides was this one (apologies for the lousy quality; I took the picture with my laptop’s built-in camera):

Figure 1: Snapshot from today’s CCFIT meeting, showing Patrice Koehl, one of his slides, and one of my pictures on one of his slides (indicated by red frame).

Continue reading