AltspaceVR Shutting Down

AltspaceVR, the popular virtual reality social platform, and the eponymous company behind it, will be closing their respective doors on August 3rd. This is surprising: AltspaceVR has been around since 2013, was well-funded, had a sizable user base given VR’s still-niche status, and apparently had more funding lined up to continue operating and developing its platform (that funding falling through was, according to the announcement linked above, the primary reason for the impending shut-down).

But beyond the direct impact on commercial VR as a whole, and the bad omen of a major player closing down, this news is also personal to me. Not as a user of AltspaceVR’s service — I have to admit I’ve only tried it for minutes at a time at trade shows or conferences — but as someone who was, albeit tangentially, involved with the company and the people working there.

After giving a presentation at an early SVVR meet-up, I invited SVVR’s founder, Karl Krantz, to visit my VR lab at UC Davis. He made the trip a short while later and brought a few friends, including “Cymatic” Bruce Wooden, Eric Romo, and Gavan Wilhite. I showed them our array of VR hardware, the general VR work we were doing, and specifically our work in VR tele-presence and remote collaboration. According to the people involved, AltspaceVR was founded during the drive back to the Bay Area.

In addition, I co-advised one of AltspaceVR’s developers when he was a PhD student at UC Davis, and I visited them in the summer of 2015 to give a talk about input device and interaction abstraction in multi-platform VR development. During that visit, Eric Romo also gave me my first taste of the newly-released HTC Vive Development Kit (Vive DK1).

For all that, I am sad to see them go under, and I wish everybody who is currently working there all the best for their future endeavors.

Possibly related to this, another piece of news surfaced today: AltspaceVR was named as defendant in a patent infringement lawsuit filed by Virtual Immersion Technologies, LLC, regarding this 2002 patent. I do not know whether this filing was a factor in AltspaceVR’s closing, but it is possible that the prospect of a costly court case, or stiff licensing fees, led to some investors getting cold feet.

Either way, this patent deserves closer scrutiny: it is quite broad, and it has recently changed ownership from the original inventors to the plaintiff, who has so far used it exclusively to sue VR companies for infringement. The fact that it specifically claims the use of video to represent performers or users in a shared virtual space might mean that it covers platforms such as our tele-collaboration framework, which would be unfortunate. I have a hunch that this patent, due to its arguably broad applicability, will be the subject of a major legal battle in the near future. There is a lot of prior art in multiplayer/multi-user VR, but that video component means I cannot dismiss the patent out of hand.

VR medical visualization with 3D Visualizer

Now that Vrui is working on the HTC Vive (at least until the next SteamVR update breaks the ABI again), I can finally go back and give Vrui-based applications some tender loving care. First up is 3D Visualizer, an application to visualize and, more importantly, visually analyze three-dimensional volumetric data sets (see Figure 1).

Figure 1: Analyzing a CAT scan with 3D Visualizer on the HTC Vive. Cat included.
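For readers curious what “Vrui-based” means in practice, here is a minimal sketch of a Vrui application skeleton, assuming Vrui’s documented Application interface. This is illustrative only, not 3D Visualizer’s actual code:

```cpp
#include <Vrui/Application.h>
#include <GL/GLContextData.h>

class MiniVisualizer : public Vrui::Application {
public:
  MiniVisualizer(int& argc, char**& argv) : Vrui::Application(argc, argv) {
    // Loading the volumetric data set and registering tools would happen here.
  }
  virtual void display(GLContextData& contextData) const {
    // Issue plain OpenGL calls; Vrui supplies head tracking, stereo rendering,
    // and device input transparently on desktop, CAVE, and Vive alike.
  }
};

// Newer Vrui versions provide this macro to generate main().
VRUI_APPLICATION_RUN(MiniVisualizer)
```

The point of the abstraction is that the same application source runs unmodified across very different display and input environments; only the run-time configuration changes.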


Keeping VR users from hurting themselves

Just the other day, I jumped on the wayback machine and posted an article about our work in immersive tele-collaboration, featuring research (and a video) from about four years ago. The shame! I figured it would be excusable that one time, and I would never do it again. Oh well, here we go.

Keeping VR users from hurting themselves

… or their expensive VR equipment.

It’s a pretty big deal. Virtual Reality, especially its head-mounted implementation, is quite good at overriding its users’ sense of place and space. “Presence,” the feeling of bodily being in a place where one knows one is not, is a powerful and compelling experience, but it has a downside: users experiencing it lose touch with their real physical environment. Exhibit A: Figure 1 (granted, there are some concerns that the following video clip was staged, but let’s pretend it’s for reals).

Figure 1: When instinct takes over. Source: imgur

To prevent this kind of thing from happening — at least in most cases — Valve implemented a system called “Chaperone” in the SteamVR run-time framework that drives their and HTC’s Vive VR headset (and potentially other headsets, through Valve’s OpenVR layer).
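To give a rough idea of what applications can see of Chaperone, here is a small sketch against OpenVR’s public C++ API that queries the configured play-area bounds. This is my own example, not code from SteamVR or from this post:

```cpp
#include <openvr.h>
#include <cstdio>

int main() {
  // Initialize OpenVR without a rendering scene; a query is all we need.
  vr::EVRInitError err = vr::VRInitError_None;
  vr::VR_Init(&err, vr::VRApplication_Background);
  if (err != vr::VRInitError_None) {
    std::fprintf(stderr, "OpenVR init failed: %s\n",
                 vr::VR_GetVRInitErrorAsEnglishDescription(err));
    return 1;
  }

  // The rectangular play area Chaperone warns the user about leaving:
  float sizeX = 0.0f, sizeZ = 0.0f;
  if (vr::VRChaperone()->GetPlayAreaSize(&sizeX, &sizeZ))
    std::printf("Play area: %.2fm x %.2fm\n", sizeX, sizeZ);

  // Its four corners on the floor plane, in standing-frame coordinates:
  vr::HmdQuad_t rect;
  if (vr::VRChaperone()->GetPlayAreaRect(&rect))
    for (int i = 0; i < 4; ++i)
      std::printf("Corner %d: (%.2f, %.2f, %.2f)\n", i,
                  rect.vCorners[i].v[0], rect.vCorners[i].v[1],
                  rect.vCorners[i].v[2]);

  vr::VR_Shutdown();
  return 0;
}
```

SteamVR itself uses these configured bounds to fade in its warning grid as the headset or controllers approach the edge of the tracked space.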

Remote Collaborative Immersive Visualization

I spent the last couple of days at the first annual meeting of “The Higher Education Campus Alliance for Advanced Visualization” (THE CAAV), where folks managing or affiliated with advanced visualization centers such as KeckCAVES came together to share their experiences. During the talks, I saw slides showing Vrui's Collaboration Infrastructure pop up here and there, and generally remote collaboration was a big topic of discussion. During breaks, I showed several people the following video on my smartphone (yes, I finally joined the 21st century), and afterwards realized that I had never written a post about this work, as most of it predates this blog. So here we go.


On the road for VR: The White House, Washington DC

Through a complex chain of circumstances, we got ourselves invited to demonstrate the Augmented Reality Sandbox at the White House Water Summit on March 22, coinciding with the United Nations’ World Water Day 2016, as part of the National Science Foundation‘s presence (the NSF funded the initial development of the AR Sandbox through an Informal Science Education grant).

Figure 1: Mark I standard-issue AR Sandbox in the Indian Treaty Room in the Eisenhower Executive Office Building, pre-exhibition.


For Science!

I’ve been busy finalizing the upcoming 4.0 release of the Vrui VR toolkit (it looks like I will have full support for Oculus Rift DK2 just before it is obsoleted by the commercial version, haha), and needed a short break.

So I figured I’d do something I’ve never done before in VR, namely, watch a full-length theatrical movie. I’m still getting DVDs from Netflix like it’s 1999, and I had “Avengers: Age of Ultron” at hand. The only problem was that I didn’t have a VR-enabled movie player.

Well, how hard can that be? Not hard at all, as it turns out. I installed the development packages for the xine multimedia framework, browsed through their hacker’s guide, figured out where to intercept audio buffers and decoded video frames, and three hours later I had a working prototype. A few hours more, and I had a user interface, full DVD menu navigation, a scrub bar, and subtitles. All of that in 737 lines of code, a big chunk of which is debugging output to trace the control and data flow of the xine library. So yeah, libxine is awesome.
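To give an idea of the interception approach, here is a stripped-down sketch using xine-lib’s public “raw” video output driver, which hands every decoded frame to a callback. The details are my own simplified reconstruction, not VruiXine’s actual source:

```cpp
#include <xine.h>
#include <cstdio>

// Called by xine for every decoded video frame; a VR player would upload
// the pixels to a texture here and draw it on a virtual screen.
static void rawOutputCb(void* userData, int format, int width, int height,
                        double aspect, void* data0, void* data1, void* data2) {
  std::printf("frame %dx%d, aspect %.3f\n", width, height, aspect);
}

// Subtitles and DVD menu highlights arrive here as pre-rendered overlays.
static void rawOverlayCb(void* userData, int numOverlays,
                         raw_overlay_t* overlays) {}

int main(int argc, char* argv[]) {
  if (argc < 2) { std::fprintf(stderr, "Usage: %s <mrl>\n", argv[0]); return 1; }

  xine_t* xine = xine_new();
  xine_init(xine);

  // Request decoded frames through the "raw" video output driver:
  raw_visual_t visual = {};
  visual.supported_formats = XINE_VORAW_YV12 | XINE_VORAW_RGB;
  visual.raw_output_cb = rawOutputCb;
  visual.raw_overlay_cb = rawOverlayCb;
  xine_video_port_t* vo =
    xine_open_video_driver(xine, "raw", XINE_VISUAL_TYPE_RAW, &visual);

  // Let xine pick the default audio output driver:
  xine_audio_port_t* ao = xine_open_audio_driver(xine, nullptr, nullptr);

  xine_stream_t* stream = xine_stream_new(xine, ao, vo);
  if (xine_open(stream, argv[1]) && xine_play(stream, 0, 0))
    std::getchar(); // play until the user presses Enter

  xine_close(stream);
  xine_dispose(stream);
  xine_close_audio_driver(xine, ao);
  xine_close_video_driver(xine, vo);
  xine_exit(xine);
  return 0;
}
```

A real player would additionally intercept audio buffers and wire xine’s DVD menu events to input devices, but the callback above is the hook that makes rendering video inside a VR environment possible.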

Then it was time to pull the easy chair into the office, start VruiXine, put on the Rift, map the DVD navigation controls to the handy SteelSeries Stratus XL Bluetooth gamepad they were giving away at Oculus Connect 2, and relax (see Figure 1).

Figure 1: The title menu of the “Avengers: Age of Ultron” DVD in a no-frills VR movie player (VruiXine). Fancy virtual environments are left as an exercise for the reader.


On the road for VR: Silicon Valley Virtual Reality Conference & Expo

Yesterday, I attended the second annual Silicon Valley Virtual Reality Conference & Expo in San Jose’s convention center. This year’s event was more than three times bigger than last year’s, with around 1,400 attendees and a large number of exhibitors.

Unfortunately, I did not have as much time as I would have liked to visit and try all the exhibits. There was a printing problem at the registration desk in the morning, and as a result the keynote and first panel were pushed back by 45 minutes, overlapping the expo time; additionally, I had to spend some time preparing for and participating in my own panel on “VR Input” from 3pm-4pm.

The panel was great: we had Richard Marks from Sony (PlayStation Move, Project Morpheus), Danny Woodall from Sixense (STEM), Yasser Malaika from Valve (HTC Vive, Lighthouse), Tristan Dai from Noitom (Perception Neuron), and Jason Jerald as moderator. There was lively discussion of questions posed by Jason and the audience. Here’s a recording of the entire panel:

One correction: when I said I had been following Tactical Haptics' progress for 2.5 years, I meant to say 1.5 years, since the first SVVR meet-up I attended. Brainfart.

Archaeologists use LiDAR to find lost cities in Honduras

I wasn’t able to talk about this before, but now I guess the cat’s out of the bag. About two years ago, we helped a team of archaeologists and filmmakers visualize a very large, high-resolution aerial LiDAR scan of a chunk of dense Honduran rain forest in the CAVE. Early analyses of the scan had found evidence of ruins hidden under the foliage, and using LiDAR Viewer in the CAVE, we were able to get a closer look. The team recently mounted an expedition, and found untouched remains of not one, but two lost cities in the jungle. Read more about it at National Geographic and The Guardian. I want to say something cool and Indiana Jones-like right now, but I won’t.

Figure 1: A “were-jaguar” effigy, likely representing a combination of a human and spirit animal, is part of a still-buried ceremonial seat, or metate, one of many artifacts discovered in a cache in ruins deep in the Honduran jungle.
Photograph by Dave Yoder, National Geographic. Full-resolution image at National Geographic.

Continue reading

The effectiveness of minimalist avatars

I was reminded today of a recent thread on the Oculus subreddit, where a redditor relayed his odd experience remotely viewing his father driving a simulated racecar:

“I decided to spectate a race he was in. I then discovered I could watch him race from his passenger seat. in VR. in real time. I can’t even begin to explain the emotions i was feeling sitting in his car, in game, watching him race. I was in the car with him. … I looked over to ‘him’ and could see all his steering movements, exactly what he was doing. I pictured his intense face as he was pushing for 1st.”

I don’t know if this effect has a name, or even needs one, but it parallels something we’ve observed through our work with Immersive 3D Telepresence.

Messing around with 3D video

We had a couple of visitors from Intel this morning who wanted to see how we use the CAVE to visualize and analyze Big Data™. I also wanted to show them some aspects of our 3D video / remote collaboration / tele-presence work. Since I had just recently implemented a new multi-camera calibration procedure for depth cameras (more on that in a future post), and the alignment between the three Kinects in the IDAV VR lab’s capture space is now better than it has ever been (including my previous 3D Video Capture With Three Kinects video), I figured I’d try something I hadn’t done before, namely remotely interacting with myself (see Figure 1).

Figure 1: How to properly pat yourself on the back using time-delayed 3D video.
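For a sense of what the calibration buys you, here is an illustrative sketch of the downstream step: once the calibration procedure has produced per-camera extrinsics, each Kinect’s points are mapped into a shared world frame by a rigid-body transform. This is a generic reconstruction of the technique, not the actual code (which this post defers to a future write-up):

```cpp
#include <array>
#include <vector>

struct Point3 { double x, y, z; };

// 3x4 rigid-body transform [R|t] from one camera's space to world space,
// as produced by a multi-camera extrinsic calibration step.
struct Extrinsics {
  std::array<std::array<double, 4>, 3> m;
  Point3 apply(const Point3& p) const {
    return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
             m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
             m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
  }
};

// Merge per-camera point clouds into one world-space cloud; with good
// extrinsics, the partial surfaces from the three Kinects line up into
// a single coherent 3D video frame.
std::vector<Point3> mergeClouds(const std::vector<std::vector<Point3>>& clouds,
                                const std::vector<Extrinsics>& calib) {
  std::vector<Point3> world;
  for (size_t c = 0; c < clouds.size(); ++c)
    for (const Point3& p : clouds[c])
      world.push_back(calib[c].apply(p));
  return world;
}
```

The quality of the final merged surface is dominated by how accurately those per-camera transforms are estimated, which is why a better calibration procedure directly shows up as better alignment in the captured 3D video.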
