A HoloArticle

Here is an update on my quest to stay on top of all things “holo:” HoloLamp and RealView “Live Holography.” While the two have really nothing to do with each other, both claim the “holo” label with varying degrees of legitimacy, and happened to pop up recently.

HoloLamp

At its core, HoloLamp is a projection mapping system somewhat similar to the AR Sandbox, i.e., a combination of a set of cameras scanning a projection surface and a viewer’s face, and a projector drawing a perspective-correct image, from the viewer’s point of view, onto said projection surface. The point of HoloLamp is to project images of virtual 3D objects onto arbitrary surfaces, to achieve effects like the Millennium Falcon’s holographic chess board in Star Wars: A New Hope. Let’s see how it works, and how it falls short of this goal.

Creating convincing virtual three-dimensional objects via projection is a core technology of virtual reality, specifically the technology that is driving CAVEs and other screen-based VR displays. To create this illusion, a display system needs to know two things: the exact position of the projection surface in 3D space, and the position of the viewer’s eyes in the same 3D space. Together, these two provide just the information needed to set up the correct perspective projection. In CAVEs et al., the position of the screen(s) is fixed and precisely measured during installation, and the viewer’s eye positions are provided via real-time head tracking.
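
To make this concrete, here is a minimal sketch of how such an off-axis projection matrix is commonly built in CAVE-style systems, following Robert Kooima’s “Generalized Perspective Projection” construction. This is generic textbook code, not HoloLamp’s (or any particular CAVE’s) actual implementation:

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    def off_axis_projection(pa, pb, pc, pe, near, far):
        # pa, pb, pc: screen corners (lower-left, lower-right, upper-left);
        # pe: eye position. All are 3-vectors in the same tracking space.
        vr = normalize(pb - pa)            # screen right axis
        vu = normalize(pc - pa)            # screen up axis
        vn = normalize(np.cross(vr, vu))   # screen normal, toward the viewer

        va, vb, vc = pa - pe, pb - pe, pc - pe
        d = -np.dot(va, vn)                # eye-to-screen-plane distance

        # Frustum extents, scaled onto the near plane.
        l = np.dot(vr, va) * near / d
        r = np.dot(vr, vb) * near / d
        b = np.dot(vu, va) * near / d
        t = np.dot(vu, vc) * near / d

        # Standard OpenGL-style frustum matrix.
        P = np.array([
            [2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
            [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
            [0.0,          0.0,          -(far+near)/(far-near), -2*far*near/(far-near)],
            [0.0,          0.0,          -1.0,                   0.0]])

        # Rotate the screen basis into view space and move the eye to the origin.
        M = np.eye(4); M[:3, :3] = np.stack([vr, vu, vn])
        T = np.eye(4); T[:3, 3] = -pe
        return P @ M @ T

Every time head tracking updates pe, the matrix is recomputed; that is exactly what makes the image perspective-correct for that one viewpoint, and for that one viewpoint only.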

As one goal of HoloLamp is portability, it cannot rely on pre-installation and manual calibration. Instead, HoloLamp scans and creates a 3D model of the projection surface when turned on (or asked to do so, I guess). It does this by projecting a sequence of patterns, and observing the perspective distortion of those patterns with a camera looking in the projection direction. This is a solid and well-known technology called structured-light 3D scanning, and can be seen in action at the beginning of this HoloLamp video clip. To extract eye positions, HoloLamp uses an additional set of cameras looking upwards to identify and track the viewer’s face, probably using off-the-shelf face tracking algorithms such as the Viola-Jones detector. Based on that, the software can project 3D objects using one or more projection matrices, depending on whether the projection surface is planar or not. The result looks very convincing when shot through a regular video camera:
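
For a flavor of how structured-light scanning works, here is a toy sketch of Gray-code stripe generation and decoding. HoloLamp’s actual patterns and pipeline are not public, so this only illustrates the general technique:

    import numpy as np

    def gray_code_patterns(width, n_bits):
        # One vertical-stripe pattern per bit; tile each row vertically to
        # get a full projector frame. Column c is encoded as c ^ (c >> 1).
        cols = np.arange(width)
        gray = cols ^ (cols >> 1)
        bits = (gray[None, :] >> np.arange(n_bits)[:, None]) & 1
        return (bits * 255).astype(np.uint8)

    def decode_columns(captured, threshold=127):
        # captured: (n_bits, H, W) stack of camera images of the patterns.
        # Returns the projector column seen by each camera pixel.
        bits = (captured > threshold).astype(np.uint32)
        gray = np.zeros(captured.shape[1:], dtype=np.uint32)
        for i in range(captured.shape[0]):
            gray |= bits[i] << i
        binary = gray.copy()               # convert Gray code back to binary
        mask = gray >> 1
        while mask.any():
            binary ^= mask
            mask >>= 1
        return binary

Once each camera pixel knows which projector column it sees, intersecting the camera’s pixel ray with the corresponding projector stripe plane (known from a prior projector-camera calibration) yields a 3D point, and hence the shape of the projection surface.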

HoloLamp’s primary problem is that the result only looks convincing when shot through a regular video camera. The root cause is that its projection is monoscopic, i.e., generated for a single viewpoint. Unlike video cameras, (most) humans have two eyes and a keen sense of stereopsis. Viewed with two naked eyes, HoloLamp’s illusion falls apart: all monocular depth cues (primarily perspective and motion parallax) tell the viewer that there are 3D objects floating above the projection surface, but the stereoscopic depth cues (vergence and binocular parallax) give away that the viewer is still looking at a flat image. This is not just theory; I have done this experiment in the CAVE many times, usually to win arguments about the importance of stereoscopy. With stereo on, there are highly convincing 3D objects; a moment after turning stereo off, there is nothing.

It’s somewhat worse than looking at a standard non-head tracked 2D projection, because the monoscopic and stereoscopic depth cues are actively fighting each other, confusing the viewer in the process. The developers address this issue in a funny way in a YouTube comment reply: “This is NOT a stereo effect its one that works even when you have one eye shut so it records on camera perfectly.” They’re missing the flip side, namely that it only works when you have one eye shut or are recording on camera.

An extension of this issue is that HoloLamp also does not work for multiple users, as it can only create one viewpoint shared by all of them. Meaning, do not expect to be able to put a HoloLamp on a table between you and a friend and play holographic chess. I’d wager you’d get a better effect from using a tablet computer running a regular 2D or 3D chess game.

I am not trying to dismiss HoloLamp or the impressive technology behind it — after all, the AR Sandbox is based on that same technology — but trying to rein in unrealistic expectations. While it has many potential applications, HoloLamp is not a holographic projector. Or, in other words, try before you buy.

RealView Holographic Augmented Reality

The other new holo-thing is an announcement by RealView about turning their previous desktop holographic display into an augmented reality headset, pointed out to me by Road To VR’s Ben Lang. This new and apparently yet-unnamed device is similar to Magic Leap’s AR efforts in two big ways: one, it aims to address the vergence-accommodation conflict inherent in current VR headsets such as Oculus Rift or Vive, and in AR headsets such as Microsoft’s HoloLens; and two, we know almost no details about it. Here they explain vergence-accommodation conflict:

Note that there is a mistake around the 1:00 mark: while it is true that the image will be blurry, it will only split if the headset is not configured correctly. Specifically, that will not happen with HoloLens when the viewer’s inter-pupillary distance (IPD) is dialed in correctly.
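
To put rough numbers on the conflict (my own back-of-the-envelope illustration, not from the video): the vergence angle for a fixation distance d and inter-pupillary distance IPD is

    \theta(d) = 2\,\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right)

With a typical IPD of 64 mm, converging on an object rendered at d = 0.5 m requires a vergence angle of about 7.3 degrees, while accommodation stays locked to the headset’s fixed focal distance (reportedly about 2 m for HoloLens). That mismatch between vergence distance and focal distance is the conflict. A mis-set IPD, on the other hand, shifts the two images horizontally relative to the eyes, which is what produces the splitting mentioned above.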

Unlike HoloLamp and pretty much everybody else using the holo- prefix or throwing the term “hologram” around, RealView vehemently claims their display is based on honest-to-goodness real interference-pattern based holograms, of the computer-generated variety. To get this out of the way: yes, that stuff actually exists. Here is a Nature article about the HoloVideo system created at MIT Media Lab.

The remaining questions are how exactly RealView creates these holograms, and how well a display based on holograms will work in practice. Unfortunately, due to the lack of known details, we can only speculate. And speculate I will. As a starting point, here is a demo video, allegedly shot through the display and not post-processed:

I say allegedly, but I do believe this to be true. The resolution is surprisingly high and the quality is surprisingly good, but the degree of transparency in the virtual object (note the fingers shining through) is consistent with real holograms, which can only add light on top of the light arriving from the real environment through the display’s visor.

There is one peculiar thing I noticed on RealView’s web site and videos: the phrase “multiple or dynamic focal planes.” This seems odd in the context of real holograms, which, being real three-dimensional images, don’t really have focal planes. Digging a little deeper, there is a possible explanation. According to the Wikipedia entry for computer-generated holography, one of the simpler algorithms to generate the required interference patterns, the Fourier transform method, can only create holograms of 2D images. Another method, point-source holograms, can create holograms of arbitrary 3D objects, but has much higher computational complexity. Maybe RealView does not directly create 3D holograms, but instead projects slices of virtual 3D objects onto a set of image planes at different depths, creates interference patterns for the resulting 2D images using the Fourier transform method, and then composites the partial holograms into a multi-plane hologram. I want to reiterate that this is mere speculation.
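
For a taste of what the Fourier transform method does, here is a toy numpy sketch of a phase-only Fourier hologram; this is emphatically not RealView’s algorithm, just the textbook idea:

    import numpy as np

    def fourier_hologram(target_image):
        # Treat the target as the desired far-field (Fourier-plane) intensity,
        # give it a random phase to spread energy across the hologram plane,
        # and transform back. Keeping only the phase yields a kinoform that a
        # phase-only spatial light modulator could display; the amplitude is
        # discarded, so the reconstruction is approximate and speckled.
        amplitude = np.sqrt(target_image.astype(float))
        random_phase = np.exp(2j * np.pi * np.random.rand(*target_image.shape))
        field = np.fft.ifft2(np.fft.ifftshift(amplitude * random_phase))
        return np.angle(field)

    def reconstruct(hologram_phase):
        # Simulated optical reconstruction: propagate to the Fourier plane.
        field = np.fft.fftshift(np.fft.fft2(np.exp(1j * hologram_phase)))
        return np.abs(field) ** 2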

This would literally create multiple focal planes, allow the creation of dynamic focal planes depending on application or interaction needs, and could potentially explain both the odd language and the high quality of the holograms in the above video. The primary downside of slice-based holograms would be motion parallax: in a desktop system, the illusion of a solid object would break down as the viewer moves laterally relative to the holographic screen. Fortunately, in head-mounted displays that screen is bolted to the viewer’s head, solving the problem.
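
If the slice speculation above is right, a crude version could sum Fresnel-propagated slice fields into a single hologram, along these lines (hypothetical code; the wavelength and pixel pitch are made-up example values, and optical sign conventions vary):

    import numpy as np

    def fresnel_propagate(field, z, wavelength, pitch):
        # Transfer-function Fresnel propagation over distance z;
        # a negative z back-propagates toward the hologram plane here.
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)[None, :]
        fy = np.fft.fftfreq(ny, d=pitch)[:, None]
        H = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    def multiplane_hologram(slices, depths, wavelength=633e-9, pitch=8e-6):
        # Back-propagate each 2D slice from its depth and sum the fields;
        # the phase of the sum drives a phase-only modulator.
        total = np.zeros_like(slices[0], dtype=complex)
        for image, z in zip(slices, depths):
            amplitude = np.sqrt(image.astype(float))
            phase = np.exp(2j * np.pi * np.random.rand(*image.shape))
            total += fresnel_propagate(amplitude * phase, -z, wavelength, pitch)
        return np.angle(total)

Each slice would then come into focus at its own depth, which would quite literally give the display “multiple focal planes.”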

So while RealView’s underlying technology appears legit, it is unknown how close they are to a real product. The device used to shoot the above video is never shown, and a picture from the web site’s medical section shows a display that is decidedly not head-mounted. I believe all other product pictures on the web site to be concept renders, some of them appearing to be (poorly) ’shopped stock photos. There are no details on resolution, frame rate, brightness, or other image specs, and any mention of head tracking is suspiciously absent; even real holograms need head tracking if the holographic screen moves through space by virtue of being attached to a person’s head. Also, the web site provides no details on the special scanners that would be required for real-time direct in-your-hand interaction.

Finally, there is no mention of field of view. As HoloLens demonstrates, field of view is important for AR, and difficult to achieve. Maybe this photo from RealView’s web site is a veiled indication of FoV:

Is this the field of view of RealView’s holographic augmented reality headset?

I’m just kidding, don’t be mad.

In conclusion, while we know next to nothing definitive about this potential product, computer-generated holography is a thing that really exists, and AR displays based on it could be contenders. Details remain to be seen, but any advancements to computer-generated holography would be highly welcome.

11 thoughts on “A HoloArticle”

  1. Question: is IPD still a factor for a holographic headset? Wouldn’t it just be a single screen presented in front of your face, since it’s already producing all the relevant depth cues?

    • Great question. No, IPD (and eye relief / screen size) are required by standard 3D graphics and stereoscopic headsets to generate the necessary projection matrices to create the appropriate views. Real holograms are based on an entirely different principle that requires neither.

      Holographic headsets still need head tracking to create the illusion that the generated holographic images are static with respect to the real world.

    • I think it depends on the display system. If it is a projection with a single combiner, then the hologram will probably be fine independent of IPD as both eyes are viewing the same (5D?) image.
      If the holograms are to be optimised, as far as I have read, then they should be individual displays carefully aligned to each eye, should re-construct the light field only at the current pupil entrance (sub-holograms), and should be approximated as planar for better quality because of the non-infinitesimal pixel size of the display (also stressed as necessary for suppressing noise in other light field display types, like tensor displays).

      Besides RealView, SeeReal have a lot of patents in the area and are very much concerned with eye-tracking to construct limited light fields. Daqri have also acqui-hired a holographic HUD company who claimed the ability to analytically solve a holographic reconstruction within a known time for a desired resolution. Akonia may have some interesting ideas in digital holography if they decide to branch out from holographic waveguides (they previously worked on digital holographic storage).

  2. I’m glad this article addressed the lack of stereo imaging with the HoloLamp. Having mocked up things similar to Johnny Lee’s Wiimote head tracking in the past, I can testify to it being massively disorienting in person compared to being filmed. It’s almost comical, the reply from the developers on YouTube regarding this point!

    Asking a non-technical colleague for their opinion on the video/technology also emphasises the marketing angle the video has taken. After I explained how the mocked-up two-player scenario would actually function (I didn’t bother to try and go into the stereo imaging problems), they definitely felt misled after their initial excitement about the assumed possibilities of the hardware.

    I also picked up on the mistake at the 1 minute mark of the RealView demo regarding double imaging with the HoloLens. I actually found this error slightly hypocritical, considering that RealView falls victim to one of the same 3D imaging mismatch issues that the HoloLens suffers from: your hand will always be BEHIND a virtual object.

    It’s clear in the video that they do their best to avoid showing this problem, but it’s obviously unresolved, as it can be seen happening. That said, I do concede that it’s a problem with every AR system in the current state of the art, and that resolving it (to a level of accuracy that matches other aspects of these systems) is a huge problem in itself.

  3. Is it just my impression, or does HoloLamp’s projection fail to take perspective distortion into consideration? (It kinda looks like it rotates things, but doesn’t make things bigger when they’re closer.)

    • It would take extra work to ignore perspective, so I do think they’re setting it up correctly. The issue might be that they are using face detection on 2D cameras to get the viewer’s head position, and while that algorithm will give good values for x and y (in camera coordinates), it will yield poor estimates for z.

      I had a student working on this exact problem, but with a RealSense RGBD camera. He used face detection in the color camera stream to find the viewer’s eyes in color camera space, and then looked up the corresponding z values in depth camera space to get good z estimates.
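
      A minimal sketch of that approach, assuming an Intel RealSense camera driven through pyrealsense2 and OpenCV’s stock Viola-Jones face detector (hypothetical code, not the student’s actual project):

          import numpy as np
          import cv2
          import pyrealsense2 as rs

          pipeline = rs.pipeline()
          config = rs.config()
          config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
          config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
          pipeline.start(config)
          align = rs.align(rs.stream.color)  # map depth pixels into color space

          face_cascade = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

          try:
              while True:
                  frames = align.process(pipeline.wait_for_frames())
                  color = np.asanyarray(frames.get_color_frame().get_data())
                  depth = frames.get_depth_frame()
                  gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
                  for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                      cx, cy = x + w // 2, y + h // 2
                      z = depth.get_distance(cx, cy)  # meters, aligned to color
                      print(f"head at pixel ({cx}, {cy}), depth {z:.2f} m")
          finally:
              pipeline.stop()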
