Picture Gallery

Just a collection of pictures showing immersive graphics applications running in various immersive or semi-immersive display environments, or really anything related to 3D computer graphics work or research.

New commercial-grade AR Sandbox model at the White House Water Summit, 03/22/2016. Left to right: Dr. Louise H. Kellogg, Neysa Call (Legislative Aide and Grants Director for Senate Democratic leader Harry Reid), Dr. S. Geoffrey Schladow, Dr. Oliver Kreylos. Photo credit: Terry Davies, National Science Foundation.

High-fiving my own holographic self in the UC Davis KeckCAVES.

I’ve been messing around with corner finding / grid reconstruction algorithms to align the depth and color streams of my Kinect v2s. This picture shows my current algorithm at work; there’s nothing special about it otherwise.

This one’s the surprising one: the corner-finding algorithm finds corners even when the printed grid is turned the other way.

I had myself scanned by a Kinect v2 running KinectFusion software at the 2015 Microsoft Build conference. The scan was done with a single camera and an automated turntable, on which I had to stand still.

Photo of me using a laser range finder / theodolite (a Leica TotalStation) to align the coordinate system reported by the optical tracking system (a NaturalPoint OptiTrack system running the Motive software) with the physical coordinate system of the room, and to measure the exact position, orientation, and size of the projection image.

The newest addition to UC Davis’s growing suite of VR labs, the Modlab in the Digital Humanities department is a mixed-use VR environment with a 14′×9′ front-projected, head-tracked stereoscopic screen and one or more head-mounted displays, all integrated into the same shared virtual space via a room-spanning optical tracking system.

Color-mapped facade reconstructed from a Kinect v2 scan, using rough-and-ready manual calibration.

3D geometry captured by first-generation Kinect (Kinect for Xbox 360).

3D geometry captured by second-generation Kinect (Kinect for Xbox One).

View of the Oculus Rift DK2’s tracking LEDs, as seen by the tracking camera and a blob identification/extraction algorithm; the latter is responsible for the green blobs in the otherwise greyscale image.
I somehow like this picture, dunno why. It looks… sinister. Like I’m some Spider-Man villain or something.

A vislet to adjust viewer configuration, such as eye relief and inter-pupillary distance, from within a running VR application. Shown here in an Oculus Rift Development Kit 2.

Glamour shot of a user looking at a high-resolution LiDAR scan of the southern San Francisco Bay shore, as part of a flood risk study.

A user fitting a ground plane to an aerial LiDAR scan of the central Californian Cosumnes River valley in a CAVE.

A user building a C-60 Buckminsterfullerene molecule in a CAVE using the Nanotech Construction Kit.

A user interacting with a 3D protein model on a non-stereoscopic tiled display wall consisting of 2×2 tiles of 1280×1024 pixels each. The display wall has two tracked data gloves as input devices.

A local user and two remote users, represented as “holographic” 3D video, interacting in a CAVE.

An Augmented Reality sandbox that overlays a topographic map onto a manipulable sand surface, including a realistic water simulation.

Two users analyzing a computed tomogram (CT) of a head with a tripod fracture on a low-cost immersive display consisting of a 72″ 3D TV, an optical tracking system, and a Nintendo Wii controller as a hand-held input device.

A user measuring and annotating a computed tomogram (CT) of a head with a tripod fracture on a low-cost immersive display system consisting of a 72″ 3D TV, an optical tracking system, and a Nintendo Wii controller as a hand-held input device.

A high-resolution aerial photograph shown on a tiled stereoscopic display wall consisting of 3×2 tiles of 1024×768 pixels each, with two active stereo rear projectors per tile. The display wall has head tracking and one hand-held input device.

A user walking through a Quake III Arena map (q3dm4) in a CAVE and holding a virtual lightsaber attached to the hand-held input device.

A user interacting with the “Virtual Jell-O” application in the KeckCAVES CAVE. The user has just grabbed the top-right corner of the blob with the hand-held input device, and is about to throw it against the wall.

Please leave a reply!