About okreylos

I am a research computer scientist at the University of California, Davis. My research areas are scientific visualization, particularly in immersive ("virtual reality") environments, human/computer interaction in immersive environments, and 3D computer graphics. My primary work is software development, from architecture and design to implementation and coding. I am the primary software developer for the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). Some of my released packages are Vrui (a VR development toolkit), CollaborationInfrastructure (a tele-collaboration plug-in for Vrui), Kinect (a driver suite to capture 3D video from Microsoft Kinect cameras), LiDAR Viewer (a visualization package for very large 3D point clouds), 3D Visualizer (a system for interactive visual analysis of 3D volumetric data), Nanotech Construction Kit (an interactive molecular design program), and SARndbox (an augmented reality sandbox). I also dabble in VR hardware, in the sense that I take existing custom or commodity hardware components (3D TVs, head-mounted displays, projectors, tracking systems, Wiimotes, Kinect cameras, ...) and build fully integrated immersive environments out of them. This includes a fair share of driver development to support hardware that either doesn't have drivers, or whose vendor-supplied drivers are not up to par.

HoloLens and Field of View in Augmented Reality

Since Microsoft’s Build 2015 conference, and increasingly since Microsoft’s showing at E3, everybody (including me) has been talking about HoloLens, and its limited field of view (FoV) has been a contentious topic. The main points being argued (fought) about are:

  1. What exactly is the HoloLens’ FoV?
  2. Why is it as big (or small) as it is, and will it improve for the released product?
  3. How does the size of the FoV affect the HoloLens’ usability and effectiveness?
  4. Were Microsoft’s released videos and live footage of stage demos misleading?
  5. How can one visualize the HoloLens’ FoV in order to give people who have not tried it an idea what it’s like?

Measuring Field of View

Initially, there was little agreement among those who experienced HoloLens regarding its field of view. That’s probably due to two reasons: one, it’s actually quite difficult to measure the FoV of a head-mounted display; and two, nobody was allowed to bring any tools or devices into the demonstration rooms. In principle, to measure see-through FoV, one has to hold some object, say a ruler, at a known distance from one’s eyes, and then mark down where the apparent left and right edges of the display area fall on the object. Knowing the distance X between the left/right markers and the distance Y between the eyes and the object, FoV is calculated via simple trigonometry: FoV = 2×tan⁻¹(X / (2×Y)) (see Figure 1).

Figure 1: Calculating field of view by measuring the horizontal extent of the apparent screen area at a known distance from the eyes. (In this diagram, FoV is 2×tan⁻¹(6″ / (2×6″)) = 53.13°.)
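To make the arithmetic concrete, here is a minimal C++ sketch (mine, not from the original post) that plugs the example measurements from Figure 1 into the formula; the variable names and the choice of inches are my own:

    #include <cmath>
    #include <iostream>

    int main()
    {
        const double pi = 3.14159265358979323846;

        // Example measurements from Figure 1, both in inches:
        double x = 6.0; // distance between the marks for the apparent left and right display edges
        double y = 6.0; // distance from the eyes to the ruler

        // FoV = 2 * atan(x / (2*y)), converted from radians to degrees:
        double fovDegrees = 2.0 * std::atan(x / (2.0 * y)) * 180.0 / pi;
        std::cout << "Horizontal FoV: " << fovDegrees << " degrees" << std::endl; // prints ~53.13

        return 0;
    }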


On the road for VR: Redwood City, California

Last Friday I made a trek down to the San Francisco peninsula, to visit and chat with a couple of other VR folks: Cyberith, SVVR, and AltspaceVR. In the process, I also had the chance to try a couple of VR devices I hadn’t seen before.

Cyberith Virtualizer

Virtual locomotion, and its nasty side effect, simulator sickness, are a pretty persistent problem and timely topic with the arrival of consumer VR just around the corner. Many enthusiasts want to use VR to explore large virtual worlds, as in taking a stroll through the frozen tundra of Skyrim or the irradiated wasteland of Fallout, but as it turns out, that’s one of the hardest things to do right in VR.

Figure 1: Cyberith Virtualizer, driven by an experienced user (Tuncay Cakmak). Yes, you can jump and run, with some practice.


On the Road for VR: Augmented World Expo 2015, Part I: VR

I attended the Augmented World Expo (AWE) once before, in 2013 when I took along an Augmented Reality Sandbox. This time, AWE partnered with UploadVR to include a significant VR subsection. I’m going to split my coverage, focusing on that VR component here, while covering the AR offering in another post.

eMagin 2k×2k VR HMD

eMagin’s (yet to be named) new head-mounted display was the primary reason I went to AWE in the first place. I had seen it announced here and there, but I was skeptical it would be able to provide the advertised field of view of 80°×80°. Unlike Oculus Rift, HTC/Valve Vive, or other post-renaissance HMDs, eMagin’s is based on OLED microdisplays (unsurprisingly, with microdisplay manufacture being eMagin’s core business). Previous microdisplay-based HMDs, including eMagin’s own Z800 3DVisor, were very limited in the FoV department, usually topping out around 40°. Magnifying a display that measures around 1 cm² to a large solid angle requires much more complex optics than doing the same for a screen that’s several inches across.
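As a rough back-of-the-envelope illustration of that point (my own sketch, not eMagin’s actual optical design): if the eyepiece is modeled as a simple magnifier with the panel at its focal plane, the apparent FoV is roughly 2×tan⁻¹(w / (2×f)) for a panel of width w and a lens of focal length f, so reaching the same FoV from a much smaller panel requires a much shorter focal length, i.e. stronger and harder-to-correct optics:

    #include <cmath>
    #include <iostream>

    // Simple-magnifier approximation: a panel of width w at the focal plane of a lens
    // with focal length f appears under an angle of roughly 2*atan(w / (2*f)).
    // Solving for f gives the focal length needed to reach a target FoV.
    double requiredFocalLength(double panelWidthMm, double targetFovDegrees)
    {
        const double pi = 3.14159265358979323846;
        double halfAngle = (targetFovDegrees / 2.0) * pi / 180.0;
        return panelWidthMm / (2.0 * std::tan(halfAngle));
    }

    int main()
    {
        // Hypothetical example numbers: a ~10 mm wide microdisplay vs. the ~60 mm of a
        // several-inch smartphone-class panel seen by one eye, both pushed to 80 degrees:
        std::cout << "10 mm panel: f = " << requiredFocalLength(10.0, 80.0) << " mm\n"; // ~6 mm
        std::cout << "60 mm panel: f = " << requiredFocalLength(60.0, 80.0) << " mm\n"; // ~36 mm
        return 0;
    }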

Figure 1: eMagin’s unnamed 2k×2k, 80°×80° FoV VR HMD with flip-up optics.


On the road for VR: Silicon Valley Virtual Reality Conference & Expo

Yesterday, I attended the second annual Silicon Valley Virtual Reality Conference & Expo in San Jose’s convention center. This year’s event was more than three times bigger than last year’s, with around 1,400 attendees and a large number of exhibitors.

Unfortunately, I did not have as much time as I would have liked to visit and try all the exhibits. There was a printing problem at the registration desk in the morning, and as a result the keynote and first panel were pushed back by 45 minutes, overlapping the expo time; additionally, I had to spend some time preparing for and participating in my own panel on “VR Input” from 3pm-4pm.

The panel was great: we had Richard Marks from Sony (PlayStation Move, Project Morpheus), Danny Woodall from Sixense (STEM), Yasser Malaika from Valve (HTC Vive, Lighthouse), Tristan Dai from Noitom (Perception Neuron), and Jason Jerald as moderator. There was lively discussion of questions posed by Jason and the audience. Here’s a recording of the entire panel:

One correction: when I said I had been following Tactical Haptics’ progress for 2.5 years, I meant to say 1.5 years, since the first SVVR meet-up I attended. Brainfart.

On the road for VR: Microsoft HoloLens at Build 2015, San Francisco

I have briefly mentioned HoloLens, Microsoft’s upcoming see-through Augmented Reality headset, in a previous post, but today I got the chance to try it for myself at Microsoft’s “Build 2015” developers’ conference. Before we get into the nitty-gritty, a disclosure: Microsoft invited me to attend Build 2015, meaning they waived my registration fee, and they gave me, like all other attendees, a free HP Spectre x360 notebook (from which I’m typing right now because my vintage 2008 MacBook Pro finally kicked the bucket). On the downside, I had to take Amtrak and BART to downtown San Francisco twice, because I wasn’t able to get a one-on-one demo slot on the first day, and got today’s 10am slot after some finagling and calling in of favors. I guess that makes us even. 😛

So, on to the big question: is HoloLens real? Given Microsoft’s track record with product announcements (see 2009’s Project Natal trailer and especially the infamous Milo “demo”), there was some well-deserved skepticism regarding the HoloLens teaser released in January, and even the on-stage demo that was part of the Build 2015 keynote:

The short answer is: yes, it’s real, but…

Archaeologists use LiDAR to find lost cities in Honduras

I wasn’t able to talk about this before, but now I guess the cat’s out of the bag. About two years ago, we helped a team of archaeologists and filmmakers to visualize a very large high-resolution aerial LiDAR scan of a chunk of dense Honduran rain forest in the CAVE. Early analyses of the scan had found evidence of ruins hidden under the foliage, and using LiDAR Viewer in the CAVE, we were able to get a closer look. The team recently mounted an expedition, and found untouched remains of not one, but two lost cities in the jungle. Read more about it at National Geographic and The Guardian. I want to say something cool and Indiana Jones-like right now, but I won’t.

Figure 1: A “were-jaguar” effigy, likely representing a combination of a human and spirit animal, is part of a still-buried ceremonial seat, or metate, one of many artifacts discovered in a cache in ruins deep in the Honduran jungle.
Photograph by Dave Yoder, National Geographic. Full-resolution image at National Geographic.


The effectiveness of minimalist avatars

I was reminded today of a recent thread on the Oculus subreddit, where a redditor relayed his odd experience remotely viewing his father driving a simulated racecar:

“I decided to spectate a race he was in. I then discovered I could watch him race from his passenger seat. in VR. in real time. I can’t even begin to explain the emotions i was feeling sitting in his car, in game, watching him race. I was in the car with him. … I looked over to ‘him’ and could see all his steering movements, exactly what he was doing. I pictured his intense face as he was pushing for 1st.”

I don’t know if this effect has a name, or even needs one, but it parallels something we’ve observed through our work with Immersive 3D Telepresence.

What is holographic, and what isn’t?

Microsoft just announced HoloLens, which “brings high-definition holograms to life in your world.” A little while ago, Google invested heavily in Magic Leap, who, in their own words, “bring magic back into the world.” A bit longer ago, CastAR promised “a magical experience of a 3D, holographic world.” Earlier than that, zSpace started selling displays they used to call “virtual holographic 3D.” Then there is the current trailblazer in mainstream virtual reality, the Oculus Rift, and other, older, VR systems such as CAVEs.

Figure 1: A real person next to two “holograms,” in a CAVE holographic display.

While these things are quite different from a technical point of view, from a user’s point of view, they have a large number of things in common. Wouldn’t it be nice to have a short, handy term that covers them all, has a well-matching connotation in the minds of the “person on the street,” and distinguishes these things from other things that might be similar technically, but have a very different user experience?

How about the term “holographic?”

Messing around with 3D video

We had a couple of visitors from Intel this morning, who wanted to see how we use the CAVE to visualize and analyze Big Data™. But I also wanted to show them some aspects of our 3D video / remote collaboration / tele-presence work, and since I had just recently implemented a new multi-camera calibration procedure for depth cameras (more on that in a future post), and the alignment between the three Kinects in the IDAV VR lab’s capture space is now better than it has ever been (including my previous 3D Video Capture With Three Kinects video), I figured I’d try something I hadn’t done before, namely remotely interacting with myself (see Figure 1).

Figure 1: How to properly pat yourself on the back using time-delayed 3D video.


Hacking the Oculus Rift DK2, part IV


Note: This is part 4 of a four-part series. [Part 1] [Part 2] [Part 3]

I caved and uploaded a snapshot of the current optical tracking sources, including a pre-release snapshot of upcoming Vrui-3.2-001 (please don’t use it outside of the tracking project; it’s bound to change some before it’s really released), to GitHub: http://github.com/Doc-Ok/OpticalTracking.