A New AR Sandbox Support Forum

Apparently, the AR Sandbox is still a thing and going strong after ten years, with over 850 registered installations worldwide according to the AR Sandbox World Map. There was a lull in new installations and community activity during the initial COVID-19 lockdowns, but things are picking up again, and with that I am seeing an increasing number of requests for help arriving in my personal email.

An AR Sandbox.

The old AR Sandbox support forum unfortunately went down due to hardware problems a good while ago, and there is currently no avenue for getting it back up. That forum was quite active and significantly reduced my support load, not only by allowing me to answer common questions once instead of dozens of times, but also by letting community members help each other directly.

So I decided to create a new AR Sandbox support forum on this here web site, as a hopefully temporary replacement. I was not able to move over any of the old forum content due to not having access to the original database files, which is a major pity because there was a ton of helpful stuff on there. I am hoping that the new forum will accumulate its own set of helpful stuff quickly, and if/when I migrate the forum to a permanent location, I will be able to move all content because I have full access to this web site’s code and database. So here’s hoping.

This is the first forum on this web site, so I hope that things will work right from the start; if not, we’ll figure out how to fix it. Please be patient.

And as a quick reminder: These are the only official AR Sandbox installation instructions. Accept no substitutes.

Is TCP really that slow?

I’m still working on Vrui’s second-generation collaboration / tele-presence infrastructure (which is coming along nicely, thankyouverymuch), and I also recently started working with another group of researchers who are trying to achieve similar goals, but have some issues with their own home-grown network system, which is based on Open Sound Control (OSC). I did some background research on OSC this morning, and ran into several instances of an old pet peeve of mine: the relative performance of UDP vs TCP. Actually, I was trying to find out whether OSC communicates over UDP or TCP, and whether there is a way to choose between those at run-time, but most sources that turned up were about performance (it turns out OSC simply doesn’t do TCP).

Here are some quotes from one article I found: “I was initially hoping to use UDP because latency is important…” “I haven’t been able to fully test using TCP yet, but I’m hopeful that the trade-off in latency won’t be too bad.”

Here are quotes from another article: “UDP has it’s [sic] uses. It’s relatively fast (compared with TCP/IP).” “TCP/IP would be a poor substitute [for UDP], with it’s [sic] latency and error-checking and resend-on-fail…” “[UDP] can be broadcast across an entire network easily.” “Repeat that for multiple players sharing a game, and you’ve got a pretty slow, unresponsive game. Compared to TCP/IP then UDP is fast.” “For UDP’s strengths as a high-volume, high-speed transport layer…” “Sending data via TCP/IP has an ‘overhead’ but at least you know your data has reached its destination.” “… if the response time [over TCP] was as much as a few hundred milliseconds, the end result would be no different!”
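
To put such claims on a quantitative footing, below is a minimal, hypothetical sketch (mine, not from either article) of how one might actually measure the round-trip time of small messages over TCP and UDP on the local loopback interface. The host, port numbers, message size, and iteration count are arbitrary choices, and the TCP sockets disable Nagle’s algorithm via TCP_NODELAY, since Nagle’s interaction with delayed acknowledgments is a common source of the latency TCP gets blamed for when sending small, interactive messages:

# A minimal, self-contained sketch (not from the quoted articles) to measure
# the round-trip time of small messages over TCP and UDP on the loopback
# interface. Port numbers, message size, and iteration count are arbitrary.
import socket
import threading
import time

HOST = "127.0.0.1"
TCP_PORT, UDP_PORT = 50007, 50008   # hypothetical unused ports
MSG = b"x" * 16                     # a small, OSC-sized message
N = 1000                            # number of round trips to average

def recv_exact(sock, n):
    # TCP is a byte stream, so read until exactly n bytes have arrived.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed")
        data += chunk
    return data

def tcp_echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, TCP_PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            for _ in range(N):
                conn.sendall(recv_exact(conn, len(MSG)))

def udp_echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind((HOST, UDP_PORT))
        for _ in range(N):
            data, addr = srv.recvfrom(1024)
            srv.sendto(data, addr)

threading.Thread(target=tcp_echo_server, daemon=True).start()
threading.Thread(target=udp_echo_server, daemon=True).start()
time.sleep(0.5)  # give both servers time to start listening

# TCP client, with Nagle's algorithm disabled via TCP_NODELAY:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, TCP_PORT))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.perf_counter()
    for _ in range(N):
        cli.sendall(MSG)
        recv_exact(cli, len(MSG))
    print("TCP mean RTT: %.1f us" % ((time.perf_counter() - start) / N * 1e6))

# UDP client; no loss handling, which is acceptable on loopback:
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as cli:
    start = time.perf_counter()
    for _ in range(N):
        cli.sendto(MSG, (HOST, UDP_PORT))
        cli.recvfrom(1024)
    print("UDP mean RTT: %.1f us" % ((time.perf_counter() - start) / N * 1e6))

A loopback test like this is obviously not a realistic network benchmark, but it does separate the raw per-message cost of the two transports from the effects of packet loss, congestion control, and head-of-line blocking, which is where TCP and UDP genuinely differ.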

Continue reading

How to Track Glowing Balls in 3D

Note: I started writing this article in June 2017, because people kept asking me about details of the PS Move tracking algorithm I implemented for the video in Figure 1. But I never finished it because I couldn’t find the time to do all the measurements needed for a thorough error analysis, and also because the derivation of the linear system at the core of the algorithm really needed some explanatory diagrams, and those take a lot of work. So the article stayed on the shelf. I’m finally publishing it today, without error analysis or diagrams, because people are still asking me about details of the algorithm, more than four years after I published the video. 🙂

This one is long overdue. Back in 2015, on September 30th to be precise, I uploaded a video showing preliminary results from a surprisingly robust optical 3D tracking algorithm I had cooked up specifically to track PS Move controllers using a standard webcam (see Figure 1).

Figure 1: A video showing my PS Move tracking algorithm, and my surprised face.

During discussion of that video, I promised to write up the algorithm I used, and to release source code. But as sometimes happens, I didn’t do either. I was just reminded of that by an email I received from one of the PS Move API developers. So, almost two years late, here is a description of the algorithm. Given that PSVR is now being sold in stores, that PS Move controllers are more widespread than ever, and that the algorithm is interesting in its own right, it might still be useful. Continue reading

KeckCAVES On Mars, pt. Oh-I-lost-count

Last weekend, we had yet another professional film crew visiting us to shoot video about our involvement in NASA’s still ongoing Mars Science Laboratory (MSL, aka Curiosity rover) mission. This time, they were here to film parts of an upcoming 90-minute special about Mars exploration for the National Geographic TV channel. Like last time, the “star” of the show was Dawn Sumner, faculty in the UC Davis Department of Earth and Planetary Sciences, one of the founding members of KeckCAVES, and a member of the MSL science team.

Unlike last time, we did not film in the KeckCAVES facility itself (due to the demise of our CAVE), but in the UC Davis ModLab. ModLab is part of an entirely different unit — UC Davis’s Digital Humanities initiative — but we work closely with them on VR development. They have a nice VR environment consisting of two HTC Vive headsets and a large 4.2m x 2.4m screen with a ceiling-mounted ultra-short throw projector (see Figure 1), their VR hardware runs our VR software, and they were kind enough to let us use their space.

Figure 1: Preparation for filming in UC Davis’s ModLab, showing its 4.2m x 2.4m front-projected screen and ceiling-mounted ultra-short throw projector, and two Lighthouse base stations.

The fundamental idea here was to use several 3D models, created or reconstructed from real data sent back either by satellites orbiting Mars or by the Curiosity rover itself, as backdrops to let Dawn talk about the goals and results of the MSL mission, and her personal involvement in it. Figure 1 shows a backdrop in the real sense of the word, i.e., a 2D picture (a photo taken by Curiosity’s mast camera) with someone standing in front of it, but that was not the idea here (we didn’t end up using that photo). Instead, Dawn wore a VR headset and interacted with the 3D models as she talked about them, with a secondary view of the virtual world, rendered from the point of view of the film camera, shown on the big screen behind her. More on that later. Continue reading

New Adventures in Hi-Fi

I’ve been spending all of my time over the last few weeks completely rewriting Vrui’s collaboration infrastructure (VCI from now on), from scratch. VCI is, in a nutshell, the built-in remote collaboration / tele-presence component of my VR toolkit. Or, in other words, a networked multi-player framework. The old VCI was the technology underlying videos such as this one:

Figure 1: Collaborative exploration of a 3D CAT scan of a microbial community, between a CAVE and a 3D TV with head-tracked glasses and a tracked controller.

Continue reading

Set-up Instructions for Vrui with HTC Vive Head-mounted Display

It’s been more than two years since I last posted set-up instructions for Vrui and the HTC Vive, and a lot has changed in the meantime. While Vrui-5.0 and its major changes are still not out of the kitchen, the current release of Vrui, Vrui-4.6-005, is stable and works very well with the Vive. The recent demise of our CAVE, and our move towards VR headsets until we figure out how to fix it, have spurred a lot of progress in Vrui’s set-up and user experience. The rest of this article contains detailed installation and set-up instructions, starting from where my previous step-by-step guide, “An Illustrated Guide to Connecting an HTC Vive VR Headset to Linux Mint 19 (“Tara”),” left off.

Even if you did not follow that guide and its prerequisite, “An Illustrated Guide to Installing Linux Mint 19 (“Tara”),” this one assumes that you already have:

  • a “gaming” or “VR ready” PC with a powerful Nvidia GeForce graphics card,
  • a full installation of a 64-bit Ubuntu-based Linux operating system, e.g., Ubuntu or Linux Mint, with the MATE desktop environment,
  • proprietary drivers for the Nvidia graphics card installed and working,
  • head-mounted display filtering disabled in the graphics card driver,
  • and a working installation of SteamVR.

If you use a Linux distribution that is not Ubuntu-based, such as my own favorite, Fedora, or another desktop environment such as Gnome Shell or Cinnamon, you will have to make some adjustments throughout the rest of this guide.

This guide also assumes that you have already set up your Vive virtual reality system, including its tracking base stations, and that your Vive headset is connected to your PC via HDMI and USB (I will publish a detailed illustrated guide on that part soon-ish). Continue reading

A Blast From The Past

Back in the olden days, in the summer of 1996 to be precise, I was a computer science Master’s student at the University of Karlsruhe, Germany, about to take the oral exam in my specialization areas of 3D computer graphics, 3D user interfaces, and geometric modeling. For reasons that are no longer entirely clear to me, I decided then that it would be a good idea to prepare for that exam by developing a 3D rendering engine, a 3D game engine, and a game, all from scratch. What resulted from that effort — which didn’t help my performance in that exam at all, by the way — was “Starglider Pro:”

In the mid to late 80s, one of my favorite games on my beloved Atari ST was the original Starglider, developed by Jez San for Rainbird Software. I finally replaced that ST with a series of PCs in 1993, first running DOS, and later OS/2 Warp, and therefore needed something to scratch that Starglider itch. Continue reading

3D Camera Calibration for Mixed-Reality Recording

Mixed-reality recording, i.e., capturing a user inside of and interacting with a virtual 3D environment by embedding their real body into that virtual environment, has finally become the accepted method of demonstrating virtual reality applications through standard 2D video footage (see Figure 1 for a mixed-reality recording made in VR’s stone age). The fundamental method behind this recording technique is to create a virtual camera whose intrinsic parameters (focal length, lens distortion, …) and extrinsic parameters (position and orientation in space) exactly match those of the real camera used to film the user; to capture a virtual video stream from that virtual camera; and then to composite the virtual and real streams into a final video.

Figure 1: Ancient mixed-reality recording from inside a CAVE, captured directly on a standard video camera without any post-processing.
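
To make the camera-matching idea concrete, here is a minimal pinhole-camera sketch (an illustration of the general principle, not the calibration procedure discussed in the article): if the virtual camera is rendered with the same intrinsic matrix K and the same pose (R, t) as the real camera, any 3D point projects to the same pixel in the rendered and the filmed image, so the two video streams line up when composited. Lens distortion is ignored here, and all numbers are made up for illustration:

# A minimal pinhole-camera sketch: project a 3D world-space point into
# pixel coordinates using intrinsic matrix K and extrinsic pose (R, t).
# Lens distortion is ignored; all numbers below are made up.
import numpy as np

def project(K, R, t, X):
    Xc = R @ X + t        # world -> camera space (extrinsic parameters)
    x = K @ Xc            # apply focal length / principal point (intrinsics)
    return x[:2] / x[2]   # perspective division -> pixel coordinates

# Hypothetical 1920x1080 camera with a 1000-pixel focal length, placed
# 2 m from the world origin and looking straight at it:
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

print(project(K, R, t, np.array([0.0, 0.0, 0.0])))  # image center: [960. 540.]
print(project(K, R, t, np.array([0.1, 0.0, 0.0])))  # 10 cm to the right: [1010. 540.]

In practice the real lens’s distortion has to be matched (or the real footage undistorted) as well, because even a small projection mismatch makes the user’s real body visibly drift against the virtual objects they are interacting with.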

Continue reading

AltspaceVR Shutting Down

AltspaceVR, the popular virtual reality social platform, and the eponymous company behind it, will be closing their respective doors on August 3rd. This is surprising, as AltspaceVR has been around since 2013, was well-funded, had a good number of users given VR’s still-niche status, and apparently had more funding lined up to continue operation and development of their platform (that funding falling through was, according to the announcement linked above, the primary reason for the impending shut-down).

But besides the direct impact on commercial VR as a whole, and the bad omen of a major player closing down, this is also personal to me. Not as a user of AltspaceVR’s service — I have to admit I’ve only tried it for minutes at a time at trade shows or conferences — but as someone who was, albeit tangentially, involved with the company and the people working there.

After having given a presentation at an early SVVR meet-up, I invited SVVR’s founder, Karl Krantz, to visit me at my VR lab at UC Davis. He made the trip a short while later, and brought a few friends, including “Cymatic” Bruce Wooden, Eric Romo, and Gavan Wilhite. I showed them our array of VR hardware, the general VR work we were doing, and specifically our work in VR tele-presence and remote collaboration. According to the people involved, AltspaceVR was founded during the drive back to the Bay Area.

In addition, I co-advised one of AltspaceVR’s developers when he was a PhD student at UC Davis, and I visited them in the summer of 2015 to give a talk about input device and interaction abstraction in multi-platform VR development. During that visit, Eric Romo also gave me my first taste of the newly-released HTC Vive Development Kit (Vive DK1).

For all that, I am sad to see them go under, and I wish everybody who is currently working there all the best for their future endeavors.

Possibly related to this, another piece of news surfaced today: AltspaceVR was named defendant in a patent infringement lawsuit filed by Virtual Immersion Technologies, LLC, regarding this 2002 patent. I do not know whether this filing was a factor in AltspaceVR’s closing, but it is possible that the prospect of a costly court case, or stiff licensing fees, led to some investors getting cold feet.

Either way, this patent deserves closer scrutiny as it is quite broad, and has recently changed ownership from the original inventors to the plaintiff, who has so far been using it exclusively to sue VR companies for infringement. The fact that it specifically claims the use of video to represent performers or users in a shared virtual space might mean that it covers platforms such as our tele-collaboration framework, which would be unfortunate. I have a hunch that this patent, due to its arguably broad applicability, will be the subject of a major legal battle in the near future, and while there is a lot of prior art in multiplayer/multi-user VR, that video component means I cannot dismiss the patent out of hand.

VR medical visualization with 3D Visualizer

Now that Vrui is working on the HTC Vive (at least until the next SteamVR update breaks ABI again), I can finally go back and give Vrui-based applications some tender loving care. First up is 3D Visualizer, an application to visualize and, more importantly, visually analyze three-dimensional volumetric data sets (see Figure 1).

Figure 1: Analyzing a CAT scan with 3D Visualizer on the HTC Vive. Cat included.

Continue reading