I’ve been waiting for this for such a long time: a turn-key stereoscopic display with built-in pre-calibrated head tracking and tracked input device. We’ve been in the low-cost VR business for more than four years now, but the biggest problem is that our reference design is entirely DIY. Users have to go out and buy individual components, assemble them, and then — most importantly — calibrate them with respect to each other. This calibration step is the biggest hurdle for low-cost VR’s acceptance, because the idea behind it is somewhat hard to understand for VR non-experts, and even if it’s understood, it still requires expensive non-standard tools.
The solution, of course, is simple: instead of having the display and tracking system as separate entities that need to be calibrated with respect to each other, integrate them into the same frame, and pre-calibrate them at the factory. The only thing that had to happen was for a manufacturer to step up to the plate and make it so.
Voilà, I present the zSpace holographic display (see Figure 1).
The hardware
These are the display’s hardware components:
- A 24″ stereoscopic flat-panel display with 1920×1080 resolution.
- Infrared tracking cameras built directly into the display’s frame.
- 6-DOF (position+orientation) optically-tracked stereo glasses.
- 6-DOF optically-tracked stylus with button(s) to trigger events.
First off, some might object to the company’s marketing materials referring to this as a “holographic display.” I, for one, don’t. According to my own definition, it is indeed holographic (refer to this post for a detailed explanation of how head tracking and stereoscopy make holographic displays). Holography is a medium for presenting virtual 3D objects that can be seen from arbitrary points of view, not just the one specific technology for creating such a medium, namely light-field projection through diffraction of interference images recorded with a coherent (laser) light source. To a user wearing the head-tracked stereo glasses, this looks exactly like a (high-resolution, high-contrast, colorful) hologram. If it quacks like a duck…
The applications
So why is this exciting? All Vrui-based software is ready-made to run on it and use the display to its fullest potential. We already know that holography (in its head-tracked stereoscopic implementation) adds tremendously to the effectiveness and efficiency of 3D visualization and modeling applications, and that the additional presence of 6-DOF tracked input devices adds even more. Here is a video showing a modeling application that would be used in exactly the same way on the zSpace, only with a smaller but higher-resolution working space.
So now, finally, there’s a system that we can “sell,” in the sense that we can tell our users simply to go out and buy the hardware, plug it in, and start using it. This will make our software much more accessible to the majority of users who cannot afford, or do not want, to build a low-cost DIY system or a high-end system such as a CAVE.
Will it play games?
I anticipate that this will be the most common question at first. Sure it can play games, but being a holographic display, it won’t play most games very well. This is because in a holographic display, field-of-view is not a free parameter. The display’s virtual field-of-view is exactly its real field-of-view, at all times (see this post for more details). There’s no way to futz with a FOV value, as is so commonly done in first-person games. The display literally serves as a window into a virtual world, meaning the FOV is determined entirely by how far away the viewer sits. Just like a real window: if you want to see more, you have to step closer. The zSpace is 24″ across, and let’s assume that a reasonable viewer would sit at least 24″ back from it. Then the diagonal field-of-view is — simple geometry, 2·atan(12″ / 24″) — 53.13°. Typical first-person games use FOVs upwards of 90°. That means you’ll have tunnel vision, and you can’t look around, because the display is sitting in front of you. We have tried running architectural applications very similar to first-person games on a relatively small-screen (55″) holographic display, and it just doesn’t work well.
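To make the window analogy concrete, here is a minimal sketch (my own illustration, nothing from the zSpace SDK) of how a window display’s diagonal FOV follows from screen size and viewing distance:

```cpp
#include <cmath>
#include <cstdio>

// Diagonal field-of-view of a "window" display: the screen is a fixed-size
// window into the virtual world, so the FOV depends only on how far back
// the viewer sits.
double windowFovDeg(double screenDiagonal, double viewerDistance)
{
	const double radToDeg = 180.0 / 3.14159265358979323846;
	return 2.0 * std::atan(screenDiagonal * 0.5 / viewerDistance) * radToDeg;
}

int main()
{
	std::printf("24\" screen at 24\": %.2f deg\n", windowFovDeg(24.0, 24.0)); // 53.13
	std::printf("24\" screen at 12\": %.2f deg\n", windowFovDeg(24.0, 12.0)); // 90.00
	return 0;
}
```

In other words, to get the 90° that first-person games assume out of a 24″ window, you would have to put your head a mere 12″ from the screen.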
The applications for a display like this are almost the opposite of those for an HMD such as the Oculus Rift. The latter has a full-sphere field-of-view because the screens are attached to your eyes, and head tracking takes care of the rest. So games where you hold small things in your hand (say, a virtual Rubik’s Cube) would work perfectly with the zSpace, and not so well with the Rift. Things like first-person shooters are exactly the other way around.
Oh, and it’s a bit too expensive even for serious gamers. The display is (according to what I’ve heard) around $3500, and you’ll need a $2000 graphics card to run it (for no real reason, I might add).
Outlook and concerns
If this display works as advertised, it could very well be a game changer. Not in the sense that it provides technologies or approaches that didn’t exist before (at a technical level, it is exactly the same as this low-cost VR environment), but in the sense that it makes them available to a much larger potential audience, which in turn increases demand for software applications that make proper use of such displays. The potential impact is huge.
There are still a lot of unknowns: will it work with Linux and/or Mac OS X (so far there’s only a Windows SDK)? Does the SDK offer the right abstractions to develop proper applications, or is it too low-level (probably too low-level)? Will it only work with expensive professional-level graphics cards such as Nvidia Quadros, or also with consumer-level graphics cards (there’s no reason it couldn’t)? What’s the end-to-end latency of the head tracker? How good is the stereo quality (there is some ominous-sounding advice on minimizing cross-talk in the “getting started” document)? And, perhaps most importantly, how much will it actually cost in the end?
With all those unknowns in mind, I’m excited for now.
While we’re on the topic of holography…
But here’s one thing that grinds my gears, and it doesn’t only apply to the zSpace marketroids, but to everyone: look at Figure 1 again. Really look at it. Now, marketing people, can you please cut that out?
What am I talking about? The virtual 3D object floating above the screen, of course. That is not how holography works, laser-based or otherwise. I know you’re only trying to get across the idea of what it’s like to work with a display like this, but you’re seriously confusing people. I’ve argued with too many people who actually believed that there is such a thing as a completely free-standing hologram (some even trying to build businesses on the idea). Completely free-standing holograms, or “holographic projectors” like those in Star Wars, are pure fiction. Here is what Figure 1 would look like in real life, to an observer standing where the picture was taken:
The point is that the virtual 3D objects are only visible in places where there is a display screen behind them. The part that’s inside the screen boundary will still appear to float above the screen, but it will be inexplicably cut off by the screen border that’s actually behind it. This messes with our visual system, and avoiding it is the reason why CAVEs use large screens, nothing else.
Now, from the point of view of the person in the picture, the display will indeed look like Figure 1. But with no explanation or disclaimers anywhere around, some people actually do believe that Figure 1 is a real photograph of an actual hologram, and then get bent out of shape when they see the real thing. Rant over.
Now, can I have one, please?
Thanks for the insightful write-up! It’s good to see someone digging beneath the hype. Will you post a follow-up once you get one?
I’ll definitely try to get my hands on one. The problem is that my software only runs on Linux and Mac OS X, and the zSpace currently only works with Windows. If everything else fails, I’ll have to give it the Kinect treatment and reverse-engineer the USB protocols.
Regarding changing FOV, couldn’t you apply the transformation that happens when you’re moving at significant fractions of the speed of light (minus the red/blue-shifting, of course)? Or perhaps something like converting the vertex coordinates from cartesian to spherical (with [the origin] at the camera and the [depth] line pointing straight forward, in both systems), then temporarily treating them as cartesian without converting back, scaling all horizontal and vertical coordinates by some factor and the depth by 1/that factor, then converting back from spherical to cartesian, and finally rendering the result the usual way (that is different from the close-to-lightspeed transformation, isn’t it?)?
In my head (without having done any actual coding to verify) both approaches seem like they should work just fine, as long as you keep the “forward” vector of the calculations aligned with the user’s head’s “forward” direction. But coplanar neighbouring faces should be smooth-shaded to not emphasize their polygons, and low-poly flat areas (and big tris/quads in general) might not look right from very close, especially at more extreme FOVs. And you might need some trickery for FOVs bigger than 180° (perhaps something like feeding the distance from the origin instead of just the depth into the z-buffer, or doing some culling while in spherical coordinates) in order to avoid artifacts from faces behind the camera that have vertices on both the positive and negative sides.
ps: I default to thinking of cartesian coordinates in terms of [horizontal, vertical, depth], and spherical coordinates in terms of [longitude, latitude, distance from origin] (so that for relatively small values both coordinate systems almost match); you might need to tweak a couple of things if your code uses a different order.
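For concreteness, here is a rough, untested sketch of the spherical-scaling idea described above, using the [horizontal, vertical, depth] convention; everything in it is made up for illustration and has not been run through any real renderer:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; }; // x = horizontal, y = vertical, z = depth

// Widen the apparent FOV by stretching longitude/latitude around the view
// direction (+z), per the comment above. Purely illustrative.
Vec3 angularScale(const Vec3& v, double scale)
{
	// Cartesian -> spherical, with the origin at the camera and +z "forward":
	double r = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
	double lon = std::atan2(v.x, v.z);               // horizontal angle
	double lat = std::asin(r > 0.0 ? v.y / r : 0.0); // vertical angle

	// Scale the angles (and shrink the radius by the inverse factor)...
	lon *= scale;
	lat *= scale;
	r   /= scale;

	// ...then spherical -> cartesian again, and render the result as usual:
	Vec3 out;
	out.x = r * std::cos(lat) * std::sin(lon);
	out.y = r * std::sin(lat);
	out.z = r * std::cos(lat) * std::cos(lon);
	return out;
}
```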
I just realized I phrased the article poorly. When I said “there’s no way to futz with FOV,” I didn’t mean it can’t be done. I meant it must not be done, or your viewers get sick.
Increasing the FOV in software is actually really easy: you just need to scale down the distance from the viewer to the screen (in the screen+viewer camera model). That virtually pulls the viewer closer and widens the FOV, just as if you had really moved closer. To go beyond 180 degrees, though, you need deep trickery like what you describe.
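For the curious, here is a rough sketch of what I mean by the screen+viewer camera model. The function and the screen-aligned coordinate convention (screen centered at the origin in the z=0 plane, tracked eye at positive z) are purely illustrative, not Vrui’s or the zSpace SDK’s actual API:

```cpp
// Off-axis ("window") projection for a head-tracked screen. All names are
// illustrative; the resulting values could be fed to, e.g., glFrustum().
struct Frustum { double left, right, bottom, top, zNear, zFar; };

Frustum windowFrustum(double screenW, double screenH,        // physical screen size
                      double eyeX, double eyeY, double eyeZ, // tracked eye position
                      double zNear, double zFar,
                      double fovScale)                       // 1.0 = physically correct
{
	// "Futzing" with the FOV amounts to pretending the viewer is closer to
	// the screen than they really are: fovScale > 1 widens the virtual FOV.
	double d = eyeZ / fovScale;

	// Project the screen edges onto the near plane to get the frustum bounds.
	Frustum f;
	f.left   = (-screenW * 0.5 - eyeX) * zNear / d;
	f.right  = ( screenW * 0.5 - eyeX) * zNear / d;
	f.bottom = (-screenH * 0.5 - eyeY) * zNear / d;
	f.top    = ( screenH * 0.5 - eyeY) * zNear / d;
	f.zNear  = zNear;
	f.zFar   = zFar;
	return f;
}
```

With fovScale = 1.0 the frustum is the physically correct window; anything larger pretends the viewer is closer than they really are, which is exactly the mismatch I’m warning about below.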
The problem is perceptual. Once the virtual FOV doesn’t match the real FOV, objects no longer appear solid, which undercuts the entire idea of, and the benefits gained from, a holographic display. The effect is subtle but noxious: most people don’t consciously notice a wrong FOV, but it makes them sick. It’s the main cause of motion sickness in desktop or console first-person video games.
I remember the brouhaha around the release of Half-Life 2, where the FOV suddenly changes when you enter or exit a vehicle. It made a lot of people very ill, to the point where Valve had to reduce the effect significantly. It’s even worse when you add stereo to the mix, because then your brain gets really confused.
Wouldn’t it look similar to looking at a big mirrored ball?
Yeah, kind of. I would say more like looking through a fish-eye lens, but that’s details.
In general, the problem is how would you feel walking around all day with a mirrored ball (or a fish-eye lens) in front of your eyes? 🙂
A while back, right after my eye surgery, before my vision quality stabilized, I tried using cheap lenses so I wouldn’t have to pay the full price for good lenses that would need to be changed in a month or so. Because they were cheap and my vision quality was still quite bad, they were thick like old glass soda bottles and had quite a bit of distortion (not to mention a bit of rainbowing close to the edges). At first I loved having such an increased field-of-view (staring forward I could even catch a glimpse of things slightly behind my eyes), and I stopped seeing straight lines as curved surprisingly quickly, same thing with the chromatic aberrations; but after a couple of days the headaches and overall sick feeling grew too much and I had to get the more expensive lenses anyway.
But when I was younger, there was a time when I would often go out with a big plastic mirrored Christmas ball on a homemade mount attached to my camera to take panoramic pictures (after coming back home, the ball in the photos was unwrapped with software, producing an almost 360×180 full panorama). I would often play around just staring at the ball, turning in different directions and walking around; in this case I never got any headaches or felt sick, and the objects in the ball looked pretty solid (especially the closer ones).
Perhaps not all distortions that increase field of view make you feel bad after a while?
I’m rapidly exiting my area of expertise, but that’s a good question. The human visual system is very malleable (see the experiments with image-inverting prisms worn as glasses), and that might explain some of it. I have not seen any way to adjust the FOV of a holographic display without interfering with the perception of virtual objects as real or causing motion sickness (both effects are bad), but that doesn’t mean one doesn’t exist. I will have to look into that. Until then, my gut tells me (no pun intended) that it can’t be done in a general way (which doesn’t invalidate your personal observations).