Fighting black smear

Now that I’ve gotten my Oculus Rift DK2 (mostly) working with Vrui under Linux, I’ve encountered the dreaded artifact often referred to as “black smear.” While pixels on OLED screens have very fast switching times — orders of magnitude faster than LCD pixels — they still can’t switch from on to off and back instantaneously. This leads to a problem that’s hardly visible when viewing a normal screen, but very visible in a head-mounted display due to a phenomenon called “vestibulo-ocular reflex.”

Basically, our eyes have built-in image stabilizers: if we move our head, this motion is detected by the vestibular apparatus in the inner ear (our “sense of equilibrium”), and our eyes automatically move the opposite way to keep our gaze fixed on a fixed point in space (interestingly, this even happens with the eyes closed, or in total darkness).

Figure 1: Black smear. It’s kinda like that.

In an HMD, this has the following effect: imagine a scene with a bright object in front of a dark background, and imagine looking at the bright object. Now rotate your head to the right. The VR software will detect that motion, update its rendering parameters, and draw the bright object further to the left on the HMD’s screen. At the same time, your vestibular system will detect the same motion, and your eyes will automatically track the bright object (if the VR software is properly calibrated to the HMD; otherwise: simulator sickness). However, as the object moves on the screen, pixels at the object’s left edge that used to be background-colored have to switch to object-colored, while subjected to your full visual scrutiny. Due to non-zero switching times, those pixels will not immediately have the full brightness of the object, but will appear darker. Hence, “black smear.” The opposite happens at the object’s right edge, where it will leave a trail on the background (“white smear”), but a much less obvious one due to asymmetric response times (black-to-white is slower than white-to-black). The reason why black smear (and white smear) is not a problem on regular OLED displays is the lack of vestibulo-ocular coupling — if you don’t strap the screen to your face, you won’t see it.

Fortunately, some smart cookie at Oculus had a brilliant idea for a fix: if a pixel has to increase brightness from one frame to the next, artificially “overdrive” its brightness in the next frame. The pixel won’t reach that overdrive brightness due to response time, but it will reach a somewhat lower brightness, which just so happens (well, if the overdrive factor is carefully chosen) to coincide with the brightness the VR software wanted in the first place. This mostly solves the problem, and it’s very easy to implement. First, during the runtime of a VR program, keep a copy of the most-recently rendered frame around as a color image. Then, on each frame, during the post-processing step that corrects for lens distortion and chromatic aberration, calculate the difference in brightness between the new value of a pixel and its previous one, and apply two overdrive factors, one for increased brightness, and one for reduced brightness. In pseudo-code:

// Undistorted pixel color for the current frame:
Color newValue=undistortCurrentFrame(pixelPosition);
// Color that was sent to this pixel in the previous frame:
Color oldValue=previousFrame(pixelPosition);
if(newValue>oldValue) // pixel gets brighter: overdrive upwards
    newValue=newValue+(newValue-oldValue)*upFactor;
else // pixel gets darker: overdrive downwards
    newValue=newValue+(newValue-oldValue)*downFactor;

The larger/smaller test, and the scaling, are applied to each component of the color individually. In the Oculus Rift DK2, upFactor is 0.1, and downFactor is 0.05.

There’s just one problem (well, actually there are two): what if the old pixel value is, say, 1.0, and the desired new value is, say, 0.0? In that case, overdrive correction will calculate a new pixel brightness of 0.0+(0.0-1.0)*0.05 = -0.05. Oops. There is no such thing as negative light, which means the new brightness will be clamped to 0.0, and white smear is back. In the other direction, with the old value 0.0 and the desired new value 1.0, overdrive yields 1.0+(1.0-0.0)*0.1=1.1, which will be clamped to 1.0 because that’s the maximum representable brightness, and black smear is back.

A solution to this little problem could be to avoid pure blacks and pure whites in VR applications, but it’s hard for a developer to know ahead of time how much contrast, if any, has to be sacrificed, and the problem doesn’t just apply to pure whites and blacks, but also to pure reds or greens or blues, and, oh, it’s messy. This should all be taken care of under the hood, by the lens correction shader.

Fortunately, that’s easy, too, if the user is willing to sacrifice some small amount of contrast. Imagine that we add a contrast-reduction filter somewhere along the rendering pipeline that linearly transforms the full brightness range of [0, 1] to some reduced brightness range [min, max]. That is really easy:

newColor=newColor*(max-min)+Color(min);

But what values should be chosen for min and max? We want to avoid any clamping during the overdrive calculation, so what are the worst possible cases?

The first case is where a pixel’s previous value (after overdrive has been applied) is 0.0, and the new desired value is max. In that case, overdrive will calculate a new brightness of max+(max-0.0)*upFactor, and ideally the result of that should be 1.0. Simple algebra yields max+(max-0.0)*upFactor=1.0 <=> max*(1.0+upFactor)=1.0 <=> max=1.0/(1.0+upFactor), or in the Rift DK2’s case, max=0.9091.

In the other direction, where the old post-overdrive value is 1.0 and the new desired value is min, we get min+(min-1.0)*downFactor=0.0 <=> min*(1.0+downFactor)-downFactor=0.0 <=> min*(1.0+downFactor)=downFactor <=> min=downFactor/(1.0+downFactor), or in the Rift DK2’s case, min=0.0476.

The result of this adjustment is the almost total disappearance of black and white smear, at the cost of a small contrast reduction of 1.0-(0.9091-0.0476)=13.85%. As it turns out, that loss is hardly noticeable in any of the applications I’ve tried, but the lack of smear is. Nonetheless, contrast reduction should be a user-configurable parameter. See Figure 2 for a before/after comparison. Of course, Figure 2 won’t show the lack of smear; even if it were an animated GIF, you’d still have to bolt your monitor to your face to see that.

Figure 2: Comparison between original (left) and contrast-reduced image (right). The contrast reduction factors in the right image were automatically calculated based on the overdrive factors for the Oculus Rift DK2’s screen, upFactor=0.1 and downFactor=0.05.

11 thoughts on “Fighting black smear”

  1. Congrats! That’s some excellent work right there 🙂
    I’ll sacrifice some global contrast any day to counter an eyesore artefact like black smear.
    Let’s hope the Oculus SDK implements this fix asap.

    • I’m not sure, but it might already be in there. At least I’ve heard reports that some games do it, I think Elite: Dangerous was thrown around, but it’s unclear whether it’s happening in the game itself or in the SDK. I haven’t looked closely at Oculus’ SDK yet, for lack of a Linux version.

  2. Firstly, great article, you’re a champ. Now the questions: You said the oled will not immediately have the required brightness and I’m guessing because of low persistence, you want it to switch on to the correct value and then switch off anyway. So in the case of full persistence, will the pixel eventually reach the target brightness, in which case, will you have to adjust its brightness scale again?

    Secondly, here’s an idea I had about getting around that. You’re best placed to know if I’m missing something obvious. Can we use an LCD shutter screen in front of the OLED screen to block the OLED and give it another ms or so to reach its target brightness, and then open the shutter for the precise 1-2 ms of low persistence? Will this impact time warp, as we’re adding another ms of latency to motion-to-photons? I just reckon having the option to black out the screen while drawing to the front buffer might be a useful midpoint before we can race the beam effectively.

    • To your first point: low persistence vs full persistence definitely has some effect on pixel response, but I’m not enough of an expert to go into details. I’d guess that an OLED reaches full saturation much quicker than the 13.333ms light-up time of a full-persistence display, so the effect would be negligible to non-existent in that case.

      To your second point: LCD shutters have worse response times than OLED pixels. I’m guessing again, but if you were to open an LCD shutter for only 1-2ms, you’d probably not get more than 10% of the light through. We have high-quality shutters in our Stereographics shutter glasses, and they work maybe as far up as 120Hz, which is ~8ms switch time.

      • Would a mechanical shutter, with a fast spinning polariser filter (or perhaps two per eye, so they can counter-rotate to increase the rate), be a reasonable option?

  3. Pingback: On the road for VR: Silicon Valley Virtual Reality Conference & Expo | Doc-Ok.org

  4. I used a DK2 in February 2016 with some aerobatics ride-along demo, and there was serious black smear. So I guess it was not implemented in the SDK, or perhaps that application was old? At the time I thought it was a hardware issue (and I guess it is, though one that is software-correctable). It was my first taste of VR and I was quite disappointed.
