What is holographic, and what isn’t?

Microsoft just announced HoloLens, which “brings high-definition holograms to life in your world.” A little while ago, Google invested heavily in Magic Leap, who, in their own words, “bring magic back into the world.” A bit longer ago, CastAR promised “a magical experience of a 3D, holographic world.” Earlier than that, zSpace started selling displays they used to call “virtual holographic 3D.” Then there is the current trailblazer in mainstream virtual reality, the Oculus Rift, and other, older, VR systems such as CAVEs.

Figure 1: A real person next to two “holograms,” in a CAVE holographic display.

While these things are quite different from a technical point of view, from a user’s point of view, they have a large number of things in common. Wouldn’t it be nice to have a short, handy term that covers them all, has a well-matching connotation in the minds of the “person on the street,” and distinguishes these things from other things that might be similar technically, but have a very different user experience?

How about the term “holographic?”

First off, the words “holography,” “hologram,” and “holographic image” have very precisely defined meanings. According to the dictionary (or Wikipedia or the Center for the Holographic Arts), none of the above things are remotely holographic.

  • Holography is a technique, based on coherent (laser) light and interference, to record and view images, that, when presented properly, precisely recreate the three-dimensional visual appearance of recorded real objects that are no longer there.
  • A hologram is the imprint on a recording medium (such as photographic film) that is created during holographic recording. The hologram itself is not an image (it doesn’t look anything like the recorded objects), and it is not three-dimensional, either.
  • A holographic image is the three-dimensional image that is reconstructed by shining (coherent) light onto a hologram. Some sources, and pretty much everyone who is not a holography expert, refer to holographic images as “holograms” as well, somewhat confusingly. Based on the strict definition, the hologram is not the floaty 3D thing, but the flat transparent plate behind it (or in front of it).

That’s all nice and precise, but when non-experts think about holograms, they don’t think about lasers and interference, i.e., about the technology that was used to record and view them, but about the experience, i.e., the fact that holograms, or, more precisely, holographic images, are apparently solid three-dimensional objects floating in thin air. In other words, the most remarkable thing about holograms is not how they’re made, but the illusion they create.

And how do holographic images create the illusion of solid three-dimensional objects? That has nothing to do with lasers and interference patterns, but with how the visual system in our brains sees the physical world. When viewing close-by objects, there are six major depth cues that help us perceive three dimensions:

  1. Perspective foreshortening: farther away objects appear smaller
  2. Occlusion: nearer objects hide farther objects
  3. Binocular parallax / stereopsis: left and right eyes see different views of the same objects
  4. Monocular (motion) parallax: objects shift depending on their distance when the head is moved
  5. Convergence: eyes cross when focusing on close objects
  6. Accommodation: eyes’ lenses change focus depending on objects’ distances

As it turns out, holographic images recreate all six of these cues (yes, even accommodation), perfectly fooling our brains into seeing things that aren’t really there. Based on this and the duck test, wouldn’t it make sense to call display systems that are not based on lasers and interference, but create the same illusion, “holographic” as well, given that they quack like real holographic images?

Let me be bold and make two further allowances for practical reasons: let’s ignore depth cue 6 for the time being, and require the illusion of solid 3D objects to hold for only one viewer at a time. Then let me propose the following definition:

A holographic display is a system that creates the visual illusion of solid three-dimensional objects by recreating depth cues 1 through 5 for at least one viewer at a time.
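The definition above is mechanical enough to express in a few lines of code. The sketch below is hypothetical (the function and variable names are mine, not from the post), with the cue numbering following the list above; it treats a display as holographic exactly when it recreates cues 1 through 5 for at least one viewer:

```python
# Toy sketch of the proposed definition. Cue numbering follows the article:
# 1 perspective, 2 occlusion, 3 stereopsis, 4 motion parallax,
# 5 convergence, 6 accommodation (deliberately ignored here).
REQUIRED_CUES = {1, 2, 3, 4, 5}

def is_holographic(cues_provided, viewers_with_full_cues=1):
    """A display qualifies if it recreates cues 1-5 for at least one viewer."""
    return viewers_with_full_cues >= 1 and REQUIRED_CUES <= set(cues_provided)

print(is_holographic({1, 2, 3, 4, 5}))  # CAVE-style display: True
print(is_holographic({1, 2, 3, 5}))     # no motion parallax: False
```

The device list that follows is essentially this test applied by hand, with judgment calls for borderline cases like crude or partial cue support.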

I argue that this definition is useful in the sense outlined above: it differentiates a large class of specific devices that have very similar visual capabilities from other devices that may be based on very similar technologies but look and feel different, while aligning well with the general public’s notion of what a hologram is. Let’s apply the test to a list of existing or proposed devices:

  • Reality: covers all possible depth cues for unlimited numbers of viewers at the same time. Clearly super-holographic.
  • Real holographic viewing system (hologram plus proper illumination): covers all six depth cues for any number of viewers. Holographic+.
  • Volumetric displays: depth cues 1-6 for any number of viewers, but small display volume, and some have problems with cue 2 (occlusion). Still, holographic+.
  • Project Vermeer: depth cues 1-6 for any number of viewers, but small display volume/viewing angle. Nonetheless, holographic+.
  • CAVE: depth cues 1-5 for a single viewer. Requires eye wear, but the definition (intentionally) says nothing about eye wear. By the way, here’s a video demonstrating depth cues 1, 2, and 4 in a CAVE (cues 3 and 5 are lost in recording with a 2D video camera):

  • Head-tracked 3D TV: same as CAVE, but smaller field of view (definition, again intentionally, doesn’t mention that).
  • zSpace and HP’s Zvr Virtual Reality Display (which appears functionally identical): same as head-tracked 3D TV, but smaller field of view. Still holographic.
  • Head-tracked auto-stereoscopic display: covers depth cues 1-5 for a single viewer, does not require head gear. Holographic.
  • Oculus Rift DK2: depth cues 1-5 for a single viewer. Obviously requires pretty involved head gear, but holographic.
  • CastAR: depth cues 1-5 for a single viewer, and multiple viewers can potentially see the same virtual objects in the same place when wearing their own headsets. Holographic.
  • Magic Leap and HoloLens: if these really work as implied by marketing, they will cover at least depth cues 1-5, maybe even 6. Either way, if the claims hold up, holographic.
  • Near-eye light field display: depth cues 1-3 and 5-6, and cue 4 if combined with some positional head tracking mechanism. In that case, holographic; otherwise, not.
  • Holovision: covers depth cues 1-6(!) for any number of viewers, but the only virtual object it can show is a flat display screen plus whatever 2D image happens to be on that screen. Squeaks by on a technicality (a flat screen is a three-dimensional object).
  • Multi-zone auto-stereoscopic display: provides only a crude approximation of depth cue 4, but can be viewed by more than one person at a time. It’s a borderline case, but I would argue it’s not holographic because severe parallax artifacts break the illusion of solid objects.
  • Oculus Rift DK1: misses depth cue 4 due to lack of positional head tracking. Not a holographic display.
  • GearVR: same as Oculus Rift DK1.
  • 3D TV: no depth cue 4 either; not holographic.
  • Desktop 3D monitor: same as 3D TV.
  • Head-tracked desktop 2D monitor (TrackIR or similar): has depth cue 4, but misses 3 and 5. Not holographic.
  • Two-zone auto-stereoscopic display: misses depth cue 4; not holographic.
  • Tupac Shakur at Coachella: just 2D video back-projected onto a transparent screen. Misses depth cues 3, 4, and 5, looks completely flat from the side. Definitely not holographic.
  • Will.I.Am on CNN: just green-screen compositing between synchronized cameras. Was completely invisible to Anderson Cooper on-stage, and missed depth cues 3, 4, and 5 when viewed via regular TV. Not even in the same ballpark.

After this list, let me reiterate my reasoning. I have shown many types of holographic displays to a lot of people, and there are two common reactions: “it’s like a hologram!,” or “it’s so much better than X,” where X is some non-holographic display system. The first reaction tells me that the term “holographic” is a good match, and the second tells me that the dividing line between holographic/non-holographic displays is a relevant one. That’s good enough for me.

That said, there are important differences between holographic displays and real holograms. Most currently-existing holographic displays don’t provide for proper accommodation, leading to accommodation/vergence conflict and decoupling, only work for a single viewer, and require some headgear (though, granted, for a large percentage of the population, even reality requires glasses to view).

But, in closing, there’s one difference that’s often brought up that’s actually not a difference: the misconception that holographic images are free-standing. A CAVE, for example, requires several large display screens to prop up the illusion of virtual objects, and I often hear “well, once we have holograms figured out, you won’t need those screens anymore” — but that’s wrong. Even real holographic images need something behind them, namely holograms (in the sense of hologram plates). In a CAVE or head-tracked 3D TV, virtual objects are cut off the moment they leave the pyramid-shaped volume between the viewer’s eyes and the screen(s), and the exact same thing is true for real holograms. If you want a life-size holographic image, you need an at least life-size hologram plate. Holographic projectors, like those familiar from science fiction (“Help me, Obi-Wan Kenobi”), are just that — science fiction. As long as we get that into everyone’s heads, we’ll be fine.

66 thoughts on “What is holographic, and what isn’t?”

  1. Personally, I find it hard to swallow calling anything that doesn’t (re)create a significant portion of the lightfield “holographic”.

    Stereo displays and dynamically rendered views (the headtracked stuff) are nice and all, but I don’t think they deserve to be called “holographic”, not even when both aspects are combined; it’s not the same thing.

    Enforcing the differentiation between true holographic and holographic-ish doesn’t seem very promising when it comes to dealing with the general public and PR departments, though…

    • I agree completely with your first sentence. But what is a “significant portion of the lightfield?” As far as any single viewer is concerned, the only part of the light field that matters is the (tiny) part that happens to enter her pupils at any given time. To me, a display that recreates that part, dynamically in response to user motion, is close enough. This is entirely arguable, though. I believe that reapplying the term holographic this way is a practical net benefit to talking about all those varied display systems I list.

  2. I’d agree. I spent quite a bit of time trying to determine if Microsoft’s new HoloLens and Magic Leap provide accommodation. (I believe Magic Leap does; no idea on HoloLens.)

    Calling every auto-stereoscopic display, like the Oculus VR, holographic just muddies the waters in my opinion.
    Perhaps you should label it, ‘pseudo-holographic’ ?

    Otherwise, excited to see this article.

    • Glad you brought up auto-stereoscopic displays, because I forgot to mention them. I’ll edit that; but the gist is that most auto-stereo displays would not be holographic according to the proposed definition for lack of motion parallax.

      I used to use “pseudo-holographic,” but a while ago decided to drop the “pseudo-” for conciseness. If there were real (laser-based) holographic displays available in the wild, then doing so would indeed muddy the waters, but for right now, pseudo-holographic is all there is. My rationale is that it has practical benefits to apply the modifier “real” to the one thing that won’t be available anytime soon (holograms do exist of course, but they’re not used as computer displays), instead of applying the “pseudo-” modifier to dozens or hundreds of things that do exist.

      Once real holographic computer displays become usable, this will become a problem and I will change my tune.

      • The reason I’m a stickler for accommodation is that it had always been a goal of mine to build a volumetric display that displayed all the depth cues (minus occlusion).

        You can see my rinky dink efforts here:

        I feel accommodation has very practical benefits to defining ‘real’ as well, so lumping 1960s-era head-tracked stereoscopic displays with Magic Leap’s light field under the banner ‘holographic’ seems dissatisfying.


        • Fair enough. I only allow non-accommodation because otherwise there would basically be no elements in the set “holographic displays” as of now, and that wouldn’t help for discussion purposes. For the time being, saying (about Magic Leap) that it’s a holographic display that also provides accommodation is tolerable in my book. This is bound to change.

          • What are your best guesses on accommodation and hololens? I have not seen any technical writeups (hypothesized or otherwise) indicating how it creates the image yet. Dying to know more.

            There is only some extremely layman-ized description in the wired article about ‘bouncing photons’.

          • My guess is as good as anyone’s, but I’ll say no accommodation, at least not in the current prototype, and I don’t expect they’re planning for it in the near future. I think the talk about bouncing photons relates to how the light is projected from the image source onto the transparent glasses to achieve a smaller form factor.

          • Magic Leap mocked Oculus on twitter for not having accommodation, so they seem to be ready to brag about it (even if it’s limited to a few depth layers), i.e. they probably already have it working and consider it to be an important part of their tech.

      • But by then, everyone will be calling non-holographic things “holographic” and it will be harder to use being truly holographic as a selling point; making it more difficult for companies to profit from R&D in the area of truly holographic displays.

  3. Excellent write-up! I am really glad you brought up that last point about holographic images needing something “behind them” to exist. Now you’ve got me thinking about a pocket size holographic display that smokes or steams a backdrop and has projectors that cast upon it….

      • It could be done using time-multiplexed (aka “active”) stereo just like on a regular screen, and once you add head tracking, you would get a holographic display according to the proposed definition. But that would still only be for a single viewer, and functionally equivalent to existing solid-screen displays. The ability to reach through the screen is not that great a feature, once you think about it.

  4. This was an interesting article, thanks.

    There is an aspect of cue 3 that is quite hard for several of these displays, isn’t there?

    That is, many of them are “holographic” only if you hold your head level, in the exact right spot, and the source content has a conveniently similar IPD.

    Do head tracked 3d tvs work through this problem?

    The DK2 does 3D quite well with live-generated 3D content, but pretty poorly with video sources or even prerendered content.

    • That’s correct; non-head-tracked displays only show a correct view, including stereopsis, if the viewer’s eyes are precisely in the expected places (which includes a correctly-configured IPD). But since viewers never hold their heads perfectly still, and the illusion breaks down with any movement, such displays don’t satisfy my proposed definition. It’s only a holographic display if it looks correct from a range of head positions and orientations.

      And yes, head-tracked 3D TVs (or other head-tracked display systems) take care of that. They work no matter how you tilt your head, and from any point as long as the virtual objects remain inside the pyramids formed by the eyes and the screen borders. Same’s true for real holographic images.

      That video sources and pre-rendered content don’t work is a problem with the content, not the display.

  5. Yes, I regretted muddling my question with the content issue after I submitted 🙂

    I’ve not seen you (or others) opine or hack much on lightfield cameras? Is this where video will have to go? Are the encoding tools ready for that?

  6. Aaaand this is where Doc-Ok reduces himself from highly educated man into a chump who compromises accuracy over sucking up to general public.


    Seriously, what the fuck?! Reality = hologram? VR = hologram? What happened to old Doc-Ok who would tell people they’re idiots for thinking idiotic things? Come the fuck on.

    • I proposed a definition of “holographic display” (not “hologram”), based on clearly-stated principles, and you disagree with it. That’s completely fine. No need to get your rage on.

  7. Pingback: Teledildonics, VR movies, holograms… what a roundup! | Avataric

    • I know that exists, but I need to do some more research before I opine whether it’s a practical display or not. It’s definitely an attention-catcher, I’ll give you that.

    • Hm, if they manage to make the pulses faster, perhaps they might be able to modulate the rate to also create sound with it? (it would probably come accompanied by white noise; but I think you might still be able to recognize sounds)

    • I need to do more research on it. I’m not sure whether the technology can create a practical display, i.e., something that’s high-enough resolution and has a high-enough refresh rate. All the videos I’ve seen so far show it creating maybe a few hundred or thousand light spots, but for a 128x128x128 volumetric display, you’d need about 2 million spots at a high rate. I don’t know yet what kind of laser power would be required to turn 2 million tiny spheres of air into plasma in around 1/30s.


  8. Can you clarify why DK2 doesn’t have accommodation? When I am looking around in the virtual world, if I look at something “nearby”, my eyes move together (convergence) and it seems like there should be accommodation too. Similarly for “far” away objects.

    In a CAVE, the image appears over physically separated regions, so when I look at portions of the image that are physically farther away from me, shouldn’t my lenses accommodate? Sorry for the naivete.

    • In the Rift DK1, the screen is optically projected to be infinitely far away — all light rays coming out of the lenses towards the eye are parallel to each other (in the DK2, the screen is apparently projected to 1.3m away, but let’s ignore that for now). If you look at a far-away virtual object, your eyes turn parallel to each other (vergence), your lenses relax for infinity focus (accommodation), and everything is fine.

      But if you look at a close-by virtual object, say something at arm’s length away, your eyes cross inward to foveate on the object, and your lenses automatically re-focus to the distance at which the virtual object appears (accommodation-vergence coupling). But the light from the screens is still optically infinitely far away, which means your eyes are focused on the wrong distance, and the object will appear blurry. After a while, your eyes adjust and relax the automatic focus reflex (accommodation-vergence decoupling), and things will look better. But the mismatch between the vergence and accommodation cues is still a problem for depth perception. Apparently, it’s a very minor problem for many people, and a worse one for some.

      In a DK2, the same thing happens, but because the optical focus distance is smaller, it matches a larger number of virtual objects, and so the problem is reduced. In a CAVE, or any other screen-based display, the same thing holds: the light actually comes from the screens, but virtual objects appear closer (or farther).

      In short, there is some form of accommodation in all displays, but in all but a very small number, it doesn’t match the other depth cues, and that’s not good.

  9. Yes, 128 * 128 * 128 = 2097152. But a binary display will never ever light up completely, since a completely lit-up display carries no information at all. This website, when rendered on my 1920×1080 display, has black pixels from text on white background covering 3% of the pixels or less. A 3D display of resolution AxAxA can’t have a lot more voxels lit up at once than a 2D display of resolution AxAx1, since we can only see 2D images with our eyes. A wireframe cube would use the same number of lines in 3D as in a typical 3D-to-2D rendering (unless the backside lines up perfectly with the front in the 2D case). So a 3D display might use something like ~16% more voxels than a 2D display would use to render this https://upload.wikimedia.org/wikipedia/commons/thumb/e/e7/Necker_cube.svg/2000px-Necker_cube.svg.png since 4 of the edges would be twice as long.

    1024 * 1024 * 1 * 3% * 116% = ~36500. A plasma voxel display of resolution 1024x1024x1024 is feasible in the very near future considering it can do “maybe a few hundred or thousand light spots” right now. It’s just an order of magnitude more which can maybe be done today with 10 of these displays.

    This is not a 3D LCD where one diode is necessary for each voxel. It is a CRT with a laser instead of an electron beam and air instead of a surface. It doesn’t really need more active voxels just because it’s 3D.

    3% active elements is for sparse text rendering. 20%-50% might be used for really dense text (unlikely use for this), 3D art that isn’t wireframe (you would only see the silhouette clearly, so unlikely to be used for public information), and floating 2D images with filled-in areas. 20% for thicker text like this http://leddisplayboards.in/wp-content/uploads/2014/02/single-color-indoor-led-display.jpg is much more likely.

    1024^3 resolution assumes that lasers can accurately and precisely be aimed at the correct location at high speeds (solved years or decades ago). Resolution is not a problem at all. Color, shades, and making it safe, portable, and affordable for regular consumers? That is the problem.
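As a sanity check on the arithmetic in the comment above (the 3% and 16% figures are the commenter’s own estimates, not measured values):

```python
# Checking the comment's numbers. The 3% "lit pixel" fraction and the
# ~16% 3D inflation factor are the commenter's rough estimates.

total_voxels = 128 ** 3
print(total_voxels)  # 2097152 addressable voxels in a 128^3 display

# Sparse-rendering estimate: 3% lit pixels on a 1024x1024 slice,
# inflated ~16% for the third dimension.
lit_estimate = 1024 * 1024 * 1 * 0.03 * 1.16
print(round(lit_estimate))  # 36490, i.e. the "~36500" figure above
```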

    • [L]asers can accurately and precisely be aimed at the correct location at high speeds (solved years or decades ago).

      Do you have a link to a technical description of a mechanism that can do that? The mechanisms I’m familiar with, from LiDAR scanners or scanning laser microscopes, are based on oscillating or rotating mirrors, and while those can rapidly scan out regular rasters upwards of 1 megapixels per second, they can’t direct the laser to arbitrary locations at arbitrary times, as would be required to render only the lit-up voxels.

      By the way, I took the kind of 3D model I would like to view with a volumetric display like this, and rasterized it to 1024^3 resolution. Using only surface voxels (not interior voxels), I still ended up with 6.7 million lit-up voxels.

      3D model rasterized into a 3D grid
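For a rough sense of why surface-voxel counts climb into the millions at 1024^3 resolution, here is a back-of-the-envelope estimate (mine, purely illustrative — not the author’s rasterization): a one-voxel-thick shell of even a simple shape filling the grid holds roughly its surface area in voxel units.

```python
# Illustrative estimate: surface voxels of a sphere filling a 1024^3 grid.
# A one-voxel-thick spherical shell contains roughly 4*pi*r^2 voxels.
import math

r = 512  # sphere radius, in voxels, filling a 1024^3 grid
shell_voxels = 4 * math.pi * r ** 2
print(round(shell_voxels))  # about 3.29 million voxels for a plain sphere
```

A real model with folds and fine detail has far more surface area than a sphere of the same extent, which is consistent with the 6.7 million figure above.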

  10. You can get your hands on a “holographic+” grade display that doesn’t require glasses or goggles with the Voxiebox ( http://Voxiebox.com ) high resolution volumetric display – available today for rent, lease and sale! 🙂

    • Yes, I would agree that the Voxiebox fits the description of holographic better than some of the other auto-stereoscopic, head-mounted displays described in this article, since it provides eyeball accommodation.

  11. Pingback: Worth reading | 3D/VR/AR

  12. Pingback: Why Augmented & Virtual Reality must die and be reborn as Holograms | SimVirtua Limited

  13. Pingback: The effectiveness of minimalist avatars | Doc-Ok.org

  14. Pingback: Archaeologists use LiDAR to find lost cities in Honduras | Doc-Ok.org

  15. Pingback: On the road for VR: Microsoft HoloLens at Build 2015, San Francisco | Doc-Ok.org

    • Those appear to be lenticular light field displays, using photographs. As clear from the video, they provide perspective, occlusion, and motion parallax. We can infer that they also have binocular parallax and convergence, albeit possibly in a crude form (due to limited number of viewing zones). They might even have very crude accommodation, again limited by the (unknown) number of viewing zones.

      That means they satisfy the requirements for my definition of “holographic display.”

      • thanks. but those aren’t photographs; I made them, even the optical layer. just wanted to “officially” legitimize the term “holographic” by a pro for them ;D

        viewing zone is ca. 40-45 degrees in every direction; the whole thing maybe comes across clearer in this example (macro from the surface):


        thanks again for your reply.

          • depends on what you call a screen; it’s a high-resolution ink-jet print, glued with epoxy. when it comes to 16K screens with ultra-fine pixels, we will get volumetric “real” 3D movies. I am pretty sure that stuff is already in the pipeline of the big players.

  16. Pingback: On the road for VR: Silicon Valley Virtual Reality Conference & Expo | Doc-Ok.org

  17. Pingback: On the Road for VR: Augmented World Expo 2015, Part I: VR | Doc-Ok.org

    • Not a holographic display. It can show four flat images floating in space (one visible from each side of the pyramid), but it supports neither monocular nor binocular parallax, nor convergence. I’ve seen plenty of these, and they are a gimmick.

  18. Pingback: Real life Portal; a holographic window using Kinect

  19. Pingback: Tupac and HoloLens are not Holograms #tech

  20. This topic just came up a year later. What are your thoughts on a stereoscopic, position-tracked head-mounted display, like the Oculus, if:

    1) It tracks eyes (like FOVE)


    2) It has some variable-focus mechanism that only has to set the focus amount once per eye position, based on what the eye is pointed at?

    Would that satisfy all 6 criteria above?
    Technically it wouldn’t need to track eyes, if it rendered multiple depths sequentially, but that’s not feasible with a high frame rate.


    • In principle, yes. In detail, it would require that the variable focus mechanism can change focus at least as fast as the eye can.

      There’s another wrinkle in that eye tracking by itself cannot always detect accommodation distance. Vergence is usually a reliable indicator due to accommodation/vergence coupling, but if I close one eye, I can consciously control accommodation in the other eye, without moving it. I can also let my eyes drift in and out of focus with both open and converged on some object. Granted, these are edge cases that are probably irrelevant.

      • Any idea on how quickly eyes track and focus?
        VR headsets are aiming at 90hz frame rates now, so would you estimate that to be sufficient focus update rate as well?

        I understand your edge case now.
        A simpler example may be looking through a window versus the surface of that window. No change in eye gaze, so unless the headset renders multiple depths sequentially, things will look incorrect.
        Maybe in that case a decent compromise is to always render two depths, near and far, which can be optimized based on gaze.

  21. Dude. Seriously? No. It’s not a hologram. That term is already taken. It means something else. What shall we do when we really achieve a Holographic Computer? Don’t be tempted to redefine it. Instead let’s try to make a real one.

    Let’s educate. A hologram also contains the PHASE of light. And there’s no need for glasses. Whereas all these displays like the Oculus & the HoloLens merely project light AMPLITUDE. Amplitude is the light intensity. The phase is missing. This is why (coherent) lasers are typically used to record and then play back the interference pattern taken from the object’s surface reflection. Then you get the phase. What’s this blog for, after all?

    Microsoft could be sued for false advertising.

    You’ve laid out an argument for why a Man in the Street might consider a Black and White TV to be close enough to Colour TV, but they’re just not the same thing. When we finally do have holography up and running, and Motels advertise “Hologram TV” on their roadside billboards, you better believe that Man in The Street will be complaining HEY my room only has “Color TV”!?

    Otherwise – if we are truly the master of ‘words’ – check out my new “Time Machine”! Oh, that’s just what it’s called… Like Humpty Dumpty in (Alice) Through the Looking Glass: “There’s GLORY for you!” which means “a nice knock-down argument” …

    • I never said it’s a hologram. On the contrary, I specifically explain how it’s not.

      Since you’re educating us: Would you please explain why it is an important difference that holograms contain the phase of light, given that the human eye can only detect light intensity, but not phase?

      Regarding the need for glasses: real holographic images don’t float in mid-air; they require a hologram (a medium that was exposed to an object beam and a reference beam, such as a glass plate covered in photographic emulsion) in front of or behind the holographic image (from the viewer’s point of view) to support them. If you want to have holographic images all over your environment, you have two choices: either cover your entire environment in large-area holograms, or wear small holograms right in front of your eyes, aka glasses. Which one do you think would be more practical?

      • Do you think we will ever achieve the level of technology required to be able to take advantage of the microtexture of things in order to add directionality to images projected onto untreated environments?

      • Hey Oliver,

        “Stereoscopic” or “3D-Stereo” is a better term for this stuff. The point about “let’s educate” is you experts should explain the difference to the general public, rather than resort to loose talk. Holographic already means something more than Stereoscopic.

        Like Electronics, or Photonics, Holographics refers to a specific type of optical technology. A holographic display recreates the WAVEFRONT of light waves reflected from an object’s surface. Gabor coined the term to mean “the whole image”. That’s what it means. Asking what’s the point of recording the Phase if we can’t see it is a hint to the difference. Hmmm. Yes. Why bother? It’s there to recreate the wavefront.

        The identification of Holographic technology does not depend on perception. It’s named by what’s going on inside. The engine. We do not need to see electrons to know something is electronic. The Phase is an integral aspect to the technology. As you seem to know but ignore, it is the Phase differences (recorded as an interference pattern) which determines whether an image technology is holographic.

        Here’s an example. Imagine an image delivered by electronics, and another image delivered by fibre optics. One sends electrons along a copper wire, the other sends photons through a wave guide (glass fibre). The way each image looks is not sufficient to call it electronic, or photonic.

        This is my objection with the proposed new definition for Holographic. It’s not how it looks. It’s about the technique used to create the image. Holography is the product of lightwave interference, and that’s why Phase is fundamental. Without it, the tech is not holographic.

        Generally speaking, “Perception” is a trap. Human vision and how we see is still a mystery. The frequencies we see we interpret as colour.

        The fact a holographic plate (hologram) is completely black is a hint there is something different going on. It does not look like a photograph. The plate is used to modify a phase-coherent reference lightwave. The same one used to create the plate. The regenerated wavefronts can be viewed by many people at once. A different view to every eye. That’s the other thing. So many of these new VR/AR displays are so selfish. They work for one person, for one view. A key difference to a holographic display is it delivers different stereoscopic views for many viewers (inside the ‘frustum’) because it recreates the wavefront.

        The displays (above) listed as holographic do not recreate the original wavefront. To do this you need the phase (recorded as an interference pattern). Imagine a dome of lightwaves from each pixel!
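        To make the phase point concrete, here is a minimal numeric sketch (my own illustration, not from the comment): a plane reference wave and a spherical wave from a single object point overlap on a strip of “film”. The film records only intensity, yet the resulting fringe pattern varies across the strip, because the cross term between the two waves carries their phase difference. The wavelength and distances are arbitrary assumed values.

```python
import numpy as np

# Assumed parameters for the sketch: HeNe laser, 2 mm film strip,
# object point 5 cm behind the film.
wavelength = 633e-9                       # metres
k = 2 * np.pi / wavelength                # wavenumber
x = np.linspace(-1e-3, 1e-3, 2000)        # film coordinates

# Reference beam: plane wave hitting the film head-on (constant phase).
reference = np.ones_like(x, dtype=complex)

# Object beam: spherical wave from a point at distance z behind the film.
z = 0.05
r = np.sqrt(x**2 + z**2)
object_wave = np.exp(1j * k * r) / r

# The film records intensity only -- but |R + O|^2 contains the
# interference cross term 2*Re(R* O), which encodes the phase.
intensity = np.abs(reference + object_wave) ** 2

# The recorded pattern is not uniform: the phase survives as fringes.
print(intensity.std() > 0)
```

Illuminating such a fringe pattern with the same reference wave is what reconstructs the original wavefront; a plain photograph, which records intensity from one wave alone, has no cross term and hence no phase to replay.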

        In terms of computer graphics, the holographic display is a lot more like a BRDF. Instead of giving a model a single RGB colour, let’s consider a ‘dome’ of reflectance around each point. That’s the magical thing about a holographic display. It can appear different from various viewing angles simultaneously. We are beginning to encode these. It’s better not to dumb it down.
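        The “dome of reflectance” idea can be sketched in a few lines (a toy illustration of my own; the angles and colours are invented): a conventional surface point returns one colour for every viewer, while a view-dependent point returns a colour that depends on the viewing angle, so two simultaneous viewers can see different things.

```python
import math

def flat_rgb(view_angle_deg):
    # Conventional model: one colour, identical from every direction.
    return (0.8, 0.1, 0.1)

def dome_rgb(view_angle_deg):
    # View-dependent model: a narrow green glint near 30 degrees,
    # standing in for one sample of the reflectance "dome".
    glint = math.exp(-((view_angle_deg - 30.0) / 5.0) ** 2)
    return (0.8 * (1 - glint), 0.1 + 0.9 * glint, 0.1 * (1 - glint))

# Two viewers at different angles see different appearances...
print(dome_rgb(0.0) != dome_rgb(30.0))
# ...which the flat model cannot reproduce.
print(flat_rgb(0.0) == flat_rgb(30.0))
```

A real holographic display delivers something like `dome_rgb` at every point at once, which is why each eye of each viewer gets its own correct view.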

        Lastly, you list two choices for where to place a holographic plate. Don’t be so sure. There might be more ways to do it, as yet uninvented. For a large audience it might be more practical to put plates in front of projectors. Have these project onto screens which preserve the wavefronts, in a CAVE environment. A hall of mirrors? Without glasses. Who knows. Maybe we can create the wavefronts without plates. That’s why this tech is exciting! 🙂

        best wishes! b.

        • You are entitled to your opinion, but let me reiterate my point. We generally have to make a separation between some perceivable effect, and the technology to create that effect. When I’m looking at an image on my shiny new mobile phone’s screen, I see an image. That is independent of what technology was used to create that image. Was it sent to my phone via cellular network, via WiFi, via USB, etc.? Does my phone have an LCD, or LED, or plasma, or CRT, or AMOLED screen? Those distinctions are important when discussing those parts of the technology, but irrelevant when I show someone a photo of my cat.

          In holographic imaging, there is also such a distinction in terms, but it turns out that the two sides of the coin (effect and underlying technology) use the same word: hologram. To express it in linguistic terms, hologram (technology side) and hologram (perception side) are two different words that happen to be homonyms.

          On the technology side, “hologram” refers to the plate, but encompasses the entire setup of lasers and mirrors, and in loose usage even the 3D holographic image. But on the user side, it refers to the effect of seeing a virtual 3D image that exhibits all the same depth cues as viewing a real object would, primarily the main ones I list in the article.

          By that reasoning, any virtual 3D image that exhibits all depth cues from many points of view (or, in other words, recreates “the whole image”), would be a hologram, regardless of whether it’s based on lasers and interference patterns. Note that the word is “hologram,” not “holointerferolaserogram.”

          For example, this: Researchers claim they’ve built the first 3D color hologram. It’s not based on holographic imaging technology, but it recreates the whole image (through clever application of a pair of parabolic mirrors). Or even simpler: what about a regular bathroom mirror? It creates a virtual 3D image that recreates the complete light wave front of a real object, including phase. Holographic?

          But back to VR/AR displays. Can I walk around a virtual object and see different sides of it? Yes. Does the virtual object stay in the same place, and at the same size, as I walk around it? Yes. Do I get six out of the seven main depth cues (I didn’t mention aerial perspective in the article because it doesn’t apply to near objects)? Yes. Do I get correct accommodation response based on an object’s distance? No, at least not with current technology. That’s the one allowance I make. Magic Leap allegedly simulates accommodation, and near-eye light field displays have demonstrated it.

          Here is why I asked about the importance of phase: it enables a hologram (technology side) to create wave fronts, via interference, that apparently emanate from a point in 3D space that is not on the hologram itself, meaning either in front of or behind the plate. That’s how holograms replicate optical distance and simulate accommodation response, and the big difference between holograms and current AR/VR displays. The emphasis being on “current.”

          Re: two choices for hologram placement. If I want to see a (part of a) holographic image along some line of sight, I need a hologram also along that same line of sight. Do you agree or disagree with that statement? So when I want to see holographic images wherever I look, I need to be either a) completely surrounded by holograms, or b) carry a hologram or holograms with me that stay in my line of sight as I look around. Does that follow?

          Re: wavefront-preserving screens. A screen cannot at the same time preserve a wavefront as required to form a holographic image, and disperse a wavefront so that it is viewable from many points of view. A wavefront-preserving screen is a planar mirror. If you put a hologram in front of a projector, and project it onto a mirror, you can only see the holographic image in front of the mirror image of the original hologram. If you want to display a large holographic image for a large audience, you need a very large hologram. Screens, mirrors, or projectors won’t help you. As I mentioned earlier, holograms are not literally free-standing.

          • The logical fallacy in this argument “What is Holographic” is called Affirming the Consequent.

            In logic it goes this way: If P then Q, we assert Q, therefore P.

            A famous example is: If it rains then the street is wet. The street is wet, therefore it rained.

            We have agreed: if there is Holographic Technology, then we see a 3D Stereo Image. But here’s the problem. Now we see all sorts of 3D Stereo Imagery, and conclude it is therefore Holographic. This argument is false. It affirms the consequent.

            For many years a popular way people saw 3D Stereo Images was from Holography. So in the absence of other methods, a correlation has been learned. However, now there are other methods to generate 3D Stereo (‘Q’), but that does not mean they are (‘P’) Holographic. The street has become wet another way. It is not rain.
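            The invalidity of this argument form can even be checked mechanically (a small illustration of my own, not from the comment): enumerate all truth assignments for P and Q, and look for one where the premises (P implies Q, and Q) hold while the conclusion P fails. The wet-street-without-rain case is exactly that counterexample.

```python
from itertools import product

def implies(p, q):
    # Material implication: "if P then Q" is false only when P and not Q.
    return (not p) or q

# Find assignments where premises hold but the conclusion P is false.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)   # [(False, True)]: no rain, yet a wet street
```

Because at least one such assignment exists, “P implies Q; Q; therefore P” is not a valid inference, which is the commenter’s point about 3D stereo imagery and holography.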

            Regarding: “[…] I see an image. That is independent of what technology was used to create that image.” No, the image is not independent. It is dependent. Remove the technology, and the image goes away. The image is entirely dependent on the technology. Turn it off, it’s gone. What’s more, different types of technology generate different types of image.

            Much of the new technology listed above creates a 3D Stereo view for one person, for one view. They wet the street in various manners, but they’re not rain. What I find far more interesting is the 3D Stereo effect delivered to many people, all at once, each with a different appearance. We don’t need glasses. I can place a mirror in the lightfield and see something else. I can swipe a clear perspex plate through the lightfield and see the light diffract in a slice. These awesome qualities are missing, typically because the phase is missing (not recorded). In most cases it’s just two TVs. The experts in VR ought to be saying so. Respond to the hype with education. These displays are not a result of holography.

            In quick reply to the other two points that came up. Yes, you need to be standing in the path of the lightfield to experience the wavefronts. Such is the nature of “line of sight”. No sense looking away, huh? And secondly, yes, a planar mirror will reflect a wavefront in the usual manner with which we are all already familiar. These questions feel like traps. Yes, a full-surround holographic image would be hard to achieve, which is probably why no one has done it this way. Well, not without lots of smoke hehe. Or yes, many many holographic plates.

            However, listing the limits of holography doesn’t change the meaning of “what is holographic, and what isn’t” (the original topic).

            We should respect the term invented by Gabor. He did win a Nobel Prize for it. Perhaps we need a different more generic term for this wet street? Oh. Hang on. How about VR? 😉
            Virtual Reality! Augmented Reality!

            okreylos, I think you were testing out an idea. Fishing for responses? I hope we make something interesting and useful. Maybe there is a new term needed, but let’s not use ones already in play, like electronic, photonic, and holographic.

Please leave a reply!