What’s all this, then?
I intend to use this blog as a platform to talk about, and engage in discussion about, all things related to immersive computer graphics (what some might call “virtual reality,” but that’s a topic for another post), primarily from a developer’s rather than a user’s perspective.
Concretely, this means I will talk about insights gained or problems encountered while writing software, comment on things others have said, discuss my own opinions on how to do things the “right way” or the “wrong way,” post updates on software development, comment on new VR hardware as I get my hands on it, etc.
Why a blog? Primarily because someone suggested I should do it, and made a convincing argument. But also because posting videos to YouTube has taught me that being able to interact with viewers and readers via comments is valuable; my current static web page doesn't support that, and I don't want to make videos about things that are more easily discussed in text.
Why do I think I’m qualified to write about this?
I’ve been writing 3D graphics software since about 1985, including raytracers, renderers, and geometry modelers. Not long after, I developed my first anaglyphic renderer (which didn’t work so well, but that’s a topic for another post), and I wrote a renderer for single-image stereograms (aka “Magic Eye” images) when I went to university and started getting serious. Entering university was, incidentally, when I realized how utterly wrong I had been in thinking I was already an awesome programmer (but that’s a topic for another post).
After getting my Master’s degree (with a focus on computer graphics, or rather computational geometry), I came to UC Davis to get a PhD. This is when I started getting into “real” VR: coincidentally, the UC Davis graphics group had just taken delivery of an early immersive display environment (IE), a so-called “responsive workbench,” long since defunct. I started developing software for it because I thought it was the coolest thing ever, and because nobody else was doing anything with it — probably because there was absolutely zero software to drive the thing. My official research focus was scientific visualization, but from the beginning, I put a “VR spin” on all my programming projects, and did further VR application development during the several summers I worked at Lawrence Berkeley Lab’s visualization group.
Out of my frustration with existing VR development toolkits, particularly the cavelib toolkit I was forced to use at LBL, I started developing my own toolkit and called it “Virtual Reality User Interface,” or Vrui for short, initially as a higher-level library on top of cavelib. After a while I realized Vrui could be more efficient and portable without cavelib underneath, so I kicked cavelib to the curb and never looked back.
In late 2003, I was approached by several researchers from the UC Davis department of geology, who had gotten the crazy idea to apply for grant money to build a CAVE IE for their scientific use. We immediately started working together, got the money for the CAVE, and installed it in spring 2005, thereby forming the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). Ever since, I have been working very closely with these researchers to develop custom visualization software to turn the CAVE into a scientific instrument. Instead of just using the CAVE to present the final results of scientific work as shiny “visualizations” (a word I don’t like to use, but that’s a topic for another post), our researchers use the CAVE throughout their scientific workflows, becoming part of the machine, so to speak, as they process data from raw to more refined forms. Here is an old video showing basic 3D interactions in the CAVE:
In 2007, KeckCAVES even branched out into performing arts, by providing interactive 3D visualization capabilities for a modern dance performance (“COLLAPSE – suddenly falling down” by Della Davidson et al.). This was only possible because our scientific visualization software was flexible enough to be used in a theater setting, and to allow dancers to control the 3D imagery in real time.
In 2008, I saw an early prototype of an IE based on a commodity 3D TV at a conference, and started working on that immediately. Fortunately, the Vrui toolkit was flexible enough to run on these IEs very efficiently, without any changes. Since then, KeckCAVES has branched out into commodity low-cost VR, by providing software and blueprints allowing others to build their own IEs for little money. Here’s a video showing 3D data visualization on a low-cost VR system:
Around the same time, I became interested in remote collaboration, i.e., the idea of connecting spatially distributed IEs so that users in them can work together as if they were in the same place. This required two major components: a network infrastructure to connect and synchronize independent IEs, and a 3D video component that creates real-time “holographic” representations of remote users.
Such early 3D video systems were rather expensive, finicky, and decidedly low-res (see this old video, for example), but things changed in late 2010 with the arrival of the Microsoft Kinect game controller, which is actually a rather sophisticated 3D camera (what exactly I mean by “3D camera” is a topic for another post). I immediately got to work on a Kinect driver, and, with help from others, was able to turn the Kinect into a reliable 3D video capture device for remote collaboration. Here is the video that made me Internet-famous for 15 minutes:
Here’s a good video showing remote collaboration between a CAVE and a low-cost environment based on a 3D TV:
In early 2012 I started playing with the other end of the VR continuum by building an Augmented Reality sandbox. It’s not an original idea, but I think it’s executed pretty well:
As of mid-2012, and not at all coincidentally around the time I started this blog, VR is in the middle of another push into the mainstream. Driven by the Oculus Rift consumer head-mounted display, a cottage industry of VR hardware and software development has suddenly sprung up, mostly from grass-roots efforts coordinated via social media. In order to stay ahead of the wave (or at least not get pulled under), I have added native support for the Oculus Rift HMD and its built-in inertial tracker to version 3.0 of the Vrui toolkit:
Curiously, throughout all this, I have never been a member of the VR research community proper, in the sense that I have never published papers in VR journals, don’t really keep up with them, only know a few other VR researchers personally, and don’t go to VR conferences. I have since found out that my thoughts about VR, and my ultimate goals in using it, are somewhat at odds with the community’s (but that’s a topic for another post). In a sense, this is one major reason why I’m writing this blog.
So, am I qualified to write this blog? Let me know below.