Good stereo vs. bad stereo

I received an email about a week ago that reminded me that, even though stereoscopic movies and 3D graphics have been around for at least six decades, there are still some widespread misconceptions out there. Those need to be addressed urgently, especially given stereo’s hard push into the mainstream over the last few years. This time around, the approaches to stereo are generally better than the last time “3D” hit the local multiplex (just compare Avatar and Friday the 13th 3D), and the wide availability of commodity stereoscopic display hardware is a major boon to people like me. But we are already beginning to see a backlash, and if there’s a way to do things better, to avoid that backlash, then I think it’s important to do it.

So here’s the gist of this particular issue: there are primarily two ways of setting up a movie camera, or a virtual movie camera in 3D computer graphics, to capture stereoscopic images. One is used by the majority of existing 3D graphics software, and seemingly also by the “3D” movie industry; the other one is correct.

Toe-in vs. skewed frustum

So, how do you set up a stereo camera? The basic truth is that stereoscopy works by capturing two slightly different views of the same 3D scene, and presenting these views separately to the viewers’ left and right eyes. The devil, as always, lies in the details.

Say you have two regular video cameras, and want to film a “3D” movie (OK, I’m going to stop putting “3D” in quotes now. My pedantic point is that 3D movies are not actually 3D, they’re stereoscopic. Carry on). What do you do? If you put them next to each other, with their viewing directions exactly parallel, you’ll see that it doesn’t quite give the desired effect. When viewing the resulting footage, you’ll notice that everything in the scene, up to infinity, appears to float in front of your viewing screen. This is because the two cameras, being parallel, are stereo-focused on the infinity plane. What you want, instead, is that near objects float in front of the screen, and that far objects float behind the screen. Let’s call the virtual plane separating “in-front” and “behind” objects the stereo-focus plane.

So how do you control the position of the stereo-focus plane? When using two normal cameras, the only solution is to rotate both slightly inwards, so that their viewing direction lines intersect exactly in the desired stereo-focus plane. This approach is often called toe-in stereo, and it sort-of works — under a very lenient definition of the words “sort-of” and “works.”
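For concreteness, here is the naive toe-in setup in code form: each camera rotates inwards by the angle whose tangent is half the eye separation over the distance to the stereo-focus plane. A minimal sketch in C++ (the names and numbers are mine, purely for illustration):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    // Hypothetical numbers: 65 mm eye separation, stereo-focus plane 2 m away.
    double eyeSeparation = 0.065; // meters
    double focusDistance = 2.0;   // meters
    // Each camera rotates inwards until its viewing direction hits the point
    // where the rig's centerline crosses the stereo-focus plane.
    double toeIn = std::atan((eyeSeparation / 2.0) / focusDistance);
    std::printf("toe-in per camera: %.3f degrees\n", toeIn * 180.0 / pi);
    return 0;
}
```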

The fundamental problem with toe-in stereo is that it makes sense intuitively — after all, don’t our eyes rotate inwards when we focus on nearby objects? — but that our intuition does not correspond to how 3D movies are shown. 3D (or any other kind of) movies are not projected directly onto our retinas, they are projected onto screens, and those screens are in turn viewed by us, i.e., they project onto our retinas.

Now, when a normal camera records a movie, the assumption is that the movie will later be projected onto a screen that is orthogonal to the projector’s projection direction, which is implicitly the same as the camera’s viewing direction (the undesirable effect of non-orthogonal projection is called keystoning). In a toe-in stereo camera, on the other hand, there are two viewing directions, at a slight angle towards each other. But, in the theater, the cameras’ views are projected onto the same screen, meaning that at least one, but typically both, of the component images will exhibit keystoning (see Figures 1 and 2).

Figure 1: The implied viewing directions and screen orientations caused by a toe-in stereo camera based on two on-axis projection cameras. The discrepancy between the screen orientations implied by the cameras’ models and the real screen causes keystone distortion, which leads to 3D convergence issues and eye strain.

Figure 2: The left stereo image shows the keystoning effect caused by toe-in stereo. A viewer will not be able to merge these two views into a single 3D object. The right stereo image shows the correct result of using skewed-frustum stereo. You can try for yourself using a pair of red/blue anaglyphic glasses.

The bad news is that keystoning from toe-in stereo leads to problems in 3D vision. Because the left/right views of captured objects or scenes do not actually look like they would if viewed directly with the naked eye, our brains refuse to merge those views and perceive the 3D objects therein, causing a breakdown of the 3D illusion. When keystoning is less severe, our brains are flexible enough to adapt, but our eyes will dart around trying to make sense of the mismatching images, which leads to eye strain and potentially headaches. Because keystoning is more severe towards the left and right edges of the image, toe-in stereo generally works well enough for convergence around the center of the images, and generally breaks down towards the edges.

And this is why I think a good portion of current 3D movies are based on toe-in stereo (I haven’t watched enough 3D movies to tell for sure, and the ones I’ve seen were too murky to really tell): I have spoken with 3D movie experts (an IMAX 3D film crew, to be precise), and they told me the two basic rules of thumb for good stereo in movies: artificially reduce the amount of eye separation, and keep the action, and therefore the viewer’s eyes, in the center of the screen. Taken together, these two rules exactly address the issues caused by toe-in stereo, but of course they’re only treating the symptom, not the cause. As an aside: when we showed this camera crew how we are doing stereo in the CAVE, they immediately accused us of breaking the two rules. What they forgot is that stereo in the CAVE obviously works, including for them, and does not cause eye strain, meaning that those rules are only workarounds for a problem that doesn’t exist in the first place if stereo is done properly.

So what is the correct way of doing it? It can be derived by simple geometry. If a 3D movie or stereo 3D graphics are to be shown on a particular screen, and will be seen by a viewer positioned somewhere in front of that screen, then the two viewing volumes for the viewer’s eyes are exactly the two pyramids defined by each eye and the four corners of the screen. In technical terms, this leads to skewed-frustum stereo. The following video explains this pretty well, better than I could here in words or a single diagram, even though it is primarily about head tracking and the screen/viewer camera model.

In a nutshell, skewed-frustum stereo works exactly as ordered. Even stereo pairs with very large disparity can be viewed without convergence problems or eye strain, and there are no problems when looking towards the edge of the image.
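To make the screen+viewer model concrete, here is a minimal sketch, with hypothetical names and numbers of my own, of how one might derive the per-eye frustum: the screen is a rectangle in viewer space, each eye is a point, and the frustum is the pyramid from that eye through the screen’s four corners, expressed as the asymmetric near-plane bounds that OpenGL’s glFrustum() accepts:

```cpp
#include <cstdio>

// A screen rectangle in viewer coordinates: x in [xMin,xMax], y in [yMin,yMax],
// at distance zScreen in front of the viewer, parallel to the viewer's xy plane.
struct Screen { double xMin, xMax, yMin, yMax, zScreen; };

// Near-plane frustum bounds, in the order glFrustum() expects them.
struct Frustum { double left, right, bottom, top, zNear, zFar; };

// Build the skewed (off-axis) frustum for one eye at (eyeX, eyeY, 0): the
// pyramid whose apex is the eye and whose base is the screen rectangle.
Frustum skewedFrustum(const Screen& s, double eyeX, double eyeY,
                      double zNear, double zFar) {
    // Shrink the screen extents, measured relative to the eye, down to the
    // near plane by similar triangles.
    double scale = zNear / s.zScreen;
    return { (s.xMin - eyeX) * scale, (s.xMax - eyeX) * scale,
             (s.yMin - eyeY) * scale, (s.yMax - eyeY) * scale,
             zNear, zFar };
}

int main() {
    // Hypothetical setup: a 4 m x 3 m screen 2 m away, 65 mm eye separation.
    Screen screen { -2.0, 2.0, -1.5, 1.5, 2.0 };
    double halfSep = 0.065 / 2.0;
    Frustum left  = skewedFrustum(screen, -halfSep, 0.0, 0.1, 100.0);
    Frustum right = skewedFrustum(screen, +halfSep, 0.0, 0.1, 100.0);
    // Note the asymmetry: left.left != -left.right. A symmetric camera model
    // cannot express this.
    std::printf("left eye:  l=%.4f r=%.4f\n", left.left, left.right);
    std::printf("right eye: l=%.4f r=%.4f\n", right.left, right.right);
    return 0;
}
```

Note that the two frusta stay parallel; there is no rotation anywhere. The stereo-focus behavior falls out of the skew alone.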

To allow for a real and direct comparison, I prepared two stereoscopic images (using red/blue anaglyphic stereo) of the same scene from the same viewpoint and with the same eye separation, one using toe-in stereo, one using skewed-frustum stereo. They are large and need to be seen at their original size to appreciate the effect, which is why I’m only linking them here. Ideally, switch back and forth between the images several times and focus on the structure close to the upper-left corner. The effect is subtle, but noxious:

Good (skewed-frustum) stereo vs. bad (toe-in) stereo.

I generated these using the Nanotech Construction Kit and Vrui; as it turns out, Vrui is flexible enough to support bad stereo, but setting it up was considerably harder than setting up good stereo. So that’s a win, I guess.

There are only two issues to be aware of: for one, objects at infinity will have the exact separation of the viewer’s eyes, so if the programmed-in eye separation is larger than the viewer’s actual eye separation, convergence for very far away objects will fail (in reality, objects can’t be farther away than infinity, or at least our brains seem to think so). Fortunately, the distribution of eye separations in the general population is quite narrow; just stick close to the smaller end. But it’s a thing to keep in mind when producing stereoscopic images for a small screen, and then showing them on a large screen: eye separation scales with screen size when baked into a video. This is why, ideally, stereoscopic 3D graphics should be generated specifically for the size of the screen on which they will be shown, and for the expected position of the audience.
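A quick back-of-the-envelope check illustrates the scaling problem (a sketch with hypothetical numbers):

```cpp
#include <cstdio>

int main() {
    // On-screen separation of objects at infinity equals the baked-in eye
    // separation; it scales linearly with screen width when the video is
    // simply blown up (all numbers hypothetical).
    double bakedEyeSep  = 0.065; // meters, correct for the mastering screen
    double masterWidth  = 2.0;   // meters
    double theaterWidth = 10.0;  // meters
    double infinitySep = bakedEyeSep * (theaterWidth / masterWidth);
    // 0.325 m: viewers would have to diverge their eyes to fuse objects
    // at infinity, which is impossible and painful to attempt.
    std::printf("separation at infinity: %.3f m\n", infinitySep);
    return 0;
}
```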

The other issue is that virtual objects very close to the viewer will appear blurry. This is because when the brain perceives an object to be at a certain distance, it will tell the eyes to focus their lenses to that distance (a process called accommodation). But in stereoscopic imaging, the light reaching the viewer’s eyes from close-by virtual objects will still come from the actual screen, which is much farther away, and so the eyes will focus on the wrong plane, and the entire image will appear blurry.

Unfortunately, there’s nothing we can do about that right now, but at least it’s a rather subtle effect. In our CAVE, users standing in the center can see virtual objects floating only a few inches in front of their eyes quite clearly, even though the walls, i.e., the actual screens, are four feet away. This focus miscue does have a noticeable after-effect: after having used the CAVE for an extended period of time, say a few hours, the real world will look somewhat “off,” in a way that’s hard to describe, for a few minutes after stepping out. But this appears to be only a temporary effect.

Taking it back to the real 3D movie world: the physical analogy to skewed-frustum stereo is lens shift. Instead of rotating the two cameras inwards, one has to shift their lenses inwards. The amount of shift is, again, determined by the distance to the desired stereo-focus plane. Technically, creating lens-shift stereo cameras should be feasible (after all, lens shift photography is all the rage these days), so everybody should be using them. And some 3D movie makers might very well already do that — I’m not a part of that crowd, but from what I hear, at least some don’t.
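Under a simple pinhole-camera assumption, the shift works out to focalLength * (eyeSeparation / 2) / focusDistance, by the same similar-triangles argument as before. A sketch with hypothetical numbers:

```cpp
#include <cstdio>

int main() {
    // Hypothetical rig: 35 mm lenses, 65 mm interaxial distance,
    // stereo-focus plane 2 m in front of the cameras.
    double focalLength   = 0.035; // meters
    double eyeSeparation = 0.065; // meters
    double focusDistance = 2.0;   // meters
    // Each lens shifts inwards so that the point where the rig's centerline
    // crosses the stereo-focus plane projects to the center of its sensor.
    double lensShift = focalLength * (eyeSeparation / 2.0) / focusDistance;
    std::printf("lens shift per camera: %.2f mm\n", lensShift * 1000.0);
    return 0;
}
```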

In the 3D graphics world, where cameras are entirely virtual, it should be even easier to do stereo right. However, many graphics applications use the standard camera model (focus point, viewing direction, up vector, field-of-view), which can only represent non-skewed frusta. The fact that this camera model, as commonly implemented, does not support proper stereo is just another reason why it shouldn’t be used.
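To see the limitation concretely, here is what the standard model’s field-of-view parameters can produce, in the style of gluPerspective() (a sketch; compare with the asymmetric per-eye bounds in the earlier one):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    // Standard camera model input: vertical field-of-view and aspect ratio,
    // as in gluPerspective(fovy, aspect, zNear, zFar).
    double fovy = 60.0 * pi / 180.0, aspect = 4.0 / 3.0, zNear = 0.1;
    double top   = zNear * std::tan(fovy / 2.0);
    double right = top * aspect;
    // The model can only ever yield left == -right and bottom == -top,
    // i.e., a symmetric frustum; the per-eye skew has no place to go.
    std::printf("symmetric only: l=%.4f r=%.4f\n", -right, right);
    return 0;
}
```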

So here’s the bottom line: Toe-in stereo is only a rough approximation of correct stereo, and it should not be used. If you find yourself wondering how to specify the toe-in angle in your favorite graphics software, hold it right there, you’re doing it wrong. The fact that toe-in stereo is still used — and seemingly widely used — could explain the eye strain and discomfort large numbers of people report with 3D movies and stereoscopic 3D graphics. Real 3D movie cameras should use lens shift, and virtual stereoscopic cameras should use skewed frusta, aka off-axis projection. While the standard 3D graphics camera model can be generalized to support skewed frusta, why not just replace it with a model that can do it without additional thought, and is more flexible and more generally applicable to boot?

Update: With the Oculus Rift in developers’ hands now, I’m getting a lot of questions about whether this article applies to head-mounted displays in general, and the Rift specifically. Short answer: it does. There isn’t any fundamental difference between large screens far away from the viewer, and small screens right in front of the viewer’s eyes. The latter add a wrinkle because they necessarily need to involve lenses and their concomitant distortions so that viewers are able to focus on the screens, but the principle remains the same. One important difference is that small screens close to the viewer’s eyes are more sensitive to miscalibration, so doing stereo right is, if anything, even more important than on large-screen displays. And yes, the official Oculus Rift software does use off-axis projection, even though the SDK documentation flat-out denies it.

21 thoughts on “Good stereo vs. bad stereo”

  1. Thanks for the excellent write-up, I knew toe-in was the wrong way to go about stereoscopic 3D but wasn’t so sure quite -why- this was. The article + keystoning images cleared that up perfectly.

    Also, you refer to skewed frusta; other amateurs (like myself) might have seen these referred to as “asymmetric” rather than skewed. Example code to set up such frusta in OpenGL can be found here:
    http://www.orthostereo.com/geometryopengl.html

    Cheers!

    • You’re very welcome, that’s what I was hoping to achieve. And you’re right, I should have double-checked the nomenclature. I’ve always referred to these as “skewed frusta,” so thanks for pointing out that others often call them “asymmetric frusta.” That reminds me that another term I’ve heard is “off-axis frustum,” as in off-axis projection. I should add a glossary page to the blog to keep track of all these. It’s only going to get worse.

      Thanks as well for the link you provided. I should have added some links myself, but I forgot, being a blogging noob. Here’s another one that I think explains it really well: Calculating Stereo Pairs by Paul Bourke.

      EDIT: And I just realized my toe-in stereo diagram is a total rip-off of Paul’s. My feeble excuse is that there are only so many ways to draw that diagram, and that using red/blue to indicate the eyes in a stereo pair is standard, coming from red/blue anaglyphic glasses. Still, oopsie.

  2. Thanks to people like you, newbies have the opportunity to understand something they would like to know but don’t know where to find it.

  3. Thanks for the great article! I’ve been interested in stereoscopic projection for a while so it was fascinating to read about a problem I hadn’t considered before. You explain why ‘toe-in’ stereo is so bad very clearly – and I’ll be doing my best to avoid it if I ever work in the field myself.

  4. This isn’t as simple as distorting badly generated left and right images using anamorphic projection, is it?

    • Hm, I think I might’ve forgotten to check the checkbox to receive email notifications of replies…

      • Well, proper stereo is nothing other than computational trompe l’oeil for two eyes, so the entire trick is to get your anamorphic projection right in the first place. Once stereo pairs have been generated badly, post-correcting them will introduce the same bad aliasing artifacts you get from projectors’ digital keystone correction — unless you actually project them onto two separate screens that are at an angle towards each other, like in Figure 1.

        The good news is that setting it up properly is extremely easy, especially when you’re using a screen+viewer camera model instead of the standard one. Then it’s actually hard to get it wrong. 🙂

  5. Pingback: The reality of head-mounted displays | Doc-Ok.org

  6. Pingback: An off-topic post | Doc-Ok.org

  7. Pingback: VR Expert to Oculus Rift Devs: Make Sure You're Doing 3D Right

  8. Pingback: Gemischte Links 26.04.2013 | 3D/VR/AR

  9. Pingback: Will the Oculus Rift make you sick? | Doc-Ok.org

  10. Pingback: The Holovision Kickstarter “scam” | Doc-Ok.org

  11. Pingback: VR Movies | Doc-Ok.org

  12. Hi
    I read an interesting old post of yours on the seven depth cues relative to the Oculus Rift system – you wrote that, given that the whole image is at infinity, the accommodation cue is not available.
    Then you added a line in which you wrote:

    “But there’s more. Cue 7 not only doesn’t work, but it contradicts the other 6 depth cues. If a virtual object is up close, 6 cues tell the brain it’s up close, but accommodation still tells the brain it’s at infinity. This is not a big problem because accommodation is weak compared to the others. The end effect of projecting everything at infinity is that near objects will appear blurry to the viewer. It’s exactly like being far-sighted.”

    I do not understand why this happens – do you mean that, as an effect of the contradiction, the brain will make the “close” objects blurry even if they are not blurry in terms of the retinal image (accommodation being kept at infinity, given that the real image is at infinity)?
    I.e., is the blurriness fabricated by the brain to compensate for the contradiction?
    Thanks. Best
    Alessio

    • The problem is accommodation-vergence coupling, a learned reflex that helps our eyes adapt focus quickly. The moment we cross our eyes to look at an object at a given distance, the lenses will immediately accommodate to the distance of that object based on the eyes’ vergence angle. The same reflex applies in VR, but there it doesn’t work, because optical distance is not coupled to object distance, but to the distance to the virtual screen.

      Concretely, if a virtual object is at infinity and the eyes focus on it, the object will appear sharp in an infinity-focused HMD, because the light really is coming from infinitely far away. But if a virtual object is near, the eyes will focus to a near distance, and the object will appear blurry, because the light still comes from infinitely far away.

  13. Hi, thank you so much for the article. It helped a lot. Looking at the skewed frusta diagram, it looks like the two cameras are/should be parallel; how does that affect positive and negative parallax? Is just having the skewed cameras line up at screen depth enough, or should the cameras also angle in to a converging point?

    Thanks again for the article,
    Matt

    • The way the diagram shows, where the two cameras’ view frusta intersect exactly on the screen rectangle, and their respective front and back planes are parallel to the screen, yields an orthostereo setup where virtual objects appear exactly where they should, in front of or behind the screen.

      Another way of thinking about this is that the cameras’ optical axes are rotated to converge to a point on the screen, but the cameras’ imaging planes are rotated outwards by the same angle so that they are parallel to the screen again.

      There are detailed explanations of orthostereoscopy and its pseudo-holographic qualities in “How Head Tracking Makes Holographic Displays” and “How Does VR Create the Illusion of Reality?”
