On the Road for VR: Augmented World Expo 2015, Part I: VR

I attended the Augmented World Expo (AWE) once before, in 2013 when I took along an Augmented Reality Sandbox. This time, AWE partnered with UploadVR to include a significant VR subsection. I’m going to split my coverage, focusing on that VR component here, while covering the AR offering in another post.

eMagin 2k×2k VR HMD

eMagin’s (yet to be named) new head-mounted display was the primary reason I went to AWE in the first place. I had seen it announced here and there, but I was skeptical it would be able to provide the advertised field of view of 80°×80°. Unlike Oculus Rift, HTC/Valve Vive, or other post-renaissance HMDs, eMagin’s is based on OLED microdisplays (unsurprisingly, with microdisplay manufacture being eMagin’s core business). Previous microdisplay-based HMDs, including eMagin’s own Z800 3DVisor, were very limited in the FoV department, usually topping out around 40°. Magnifying a display that measures around 1 cm² to a large solid angle requires much more complex optics than doing the same for a screen that’s several inches across.

Figure 1: eMagin’s unnamed 2k×2k, 80°×80° FoV VR HMD with flip-up optics.

Well, turns out I was wrong and eMagin’s HMD works as advertised. I got to try the HMD, and have an extended chat with project manager Dan Cui, over breakfast. So, what’s it like in detail? The first thing to understand is that the device on display was a pre-prototype and not a development kit or consumer product. It was quite heavy (around 1 lb?), with most of the weight in the front; it didn’t have a built-in head tracker, an overhead strap, or the final screen (more on that below). Dan mentioned a release goal for a development kit (if not a final product) of the fourth quarter of 2016.

With that in mind, the device’s optics were already very impressive, and the overall form factor and design were something I’d wear in public (if it had an AR mode, of course) — it’s very cyber-punkish. Let’s start with the display system. It’s a 2k×2k OLED microdisplay panel (0.63″×0.63″) per eye, with a high pixel fill factor (no screen door effect). In the current prototype, the screens are running at 60Hz (fed via a single DisplayPort 1.2 connection) in full persistence mode. According to Dan, low persistence mode down to 1ms on-time is possible, 85Hz is possible without major changes to the electronics, and 120Hz is possible with the currently used OLED panels.

The biggest problem with the current panels is that they don’t have blue sub-pixels (to my shame, I didn’t notice this during my first test, as it was set in a virtual environment with very warm yellowish-reddish colors). This obviously needs to be changed, but Dan assured me that full RGB panels are already being produced, and will be integrated as soon as possible. I will have to re-evaluate the screen quality at that time, as blue OLEDs are still harder to manufacture than red or green ones, and might change the screens’ subpixel layouts. In other words, perceived screen resolution and/or pixel fill factor and therefore screen door effect might change. But given eMagin’s expertise in producing OLED panels, I’m optimistic.

The rest of the optical system is already top-notch. The custom-ground three-element lens system creates a relatively wide field of view: 80°×80° advertised, and it does in fact look to be only a bit narrower than that of Oculus Rift DK2, but I would have to compare side-by-side for a final judgement (and Dan mentioned FoV might increase a bit with future versions). There is moderate geometric distortion (the demo software didn’t yet have lens distortion correction), but I didn’t notice chromatic aberration (granted, the lack of a blue color component would make it harder to detect). The screen was in focus all across, which was helped by the physical IPD adjustment and per-eye focus adjustment. The current prototype does not have a sensor to send actual IPD values to the driving software (feature request!), and the lenses’ focus can only be adjusted while they are partially flipped up, making it a bit harder (another feature request: focus dials accessible from outside!). Oh, that’s right: the lens/screen assembly can be flipped up, meaning it’s possible to get a quick look at the real world with no hassle.

But the biggest breakthrough is clearly the screen. There is a big difference in resolution compared to Oculus Rift DK2 and even GearVR. Assuming that Rift CV1 and Vive have 1080×1200 screens and around 100°×100° FoV, this HMD has more than twice the angular resolution at around 25 pixels/°, the approximate equivalent of 20/40 vision.
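To put numbers on that, here is a quick back-of-the-envelope in Python, using the quoted specs; lens distortion makes the actual per-degree resolution non-uniform across the field, so treat these as averages:

```python
# Rough angular resolution: horizontal pixels divided by horizontal FoV.
def pixels_per_degree(h_pixels, h_fov_deg):
    return h_pixels / h_fov_deg

emagin = pixels_per_degree(2048, 80)     # ~25.6 pixels/degree
rift_cv1 = pixels_per_degree(1080, 100)  # ~10.8 pixels/degree
print(emagin / rift_cv1)                 # ~2.4, i.e. "more than twice"

# 20/20 acuity resolves about 1 arc minute, commonly equated with
# ~60 pixels/degree; ~25 pixels/degree thus lands around 20/40-20/50.
```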

Figure 2: eMagin’s unnamed 2k×2k, 80°×80° FoV VR HMD, with head for scale. Image from UploadVR.

All told, I’m very excited to get my hands on this HMD once it reaches dev kit stage, meaning, once it has a full RGB screen, ≥85Hz low-persistence display, an integrated head tracker with at least Rift DK1 quality, built-in or add-on positional tracking sensors or LEDs, and some improvements to ergonomics (lower total weight, improved weight distribution, top strap). There is no official word on price at this point, but Dan hinted that initial versions would be on the prosumer side, and that once mass production kicks in, the final product could be competitive with other headsets. Let’s just say, I’d be willing to fork over quite a bit of cash for one early on. I have certain architectural applications in mind.

castAR

Figure 3: castAR glasses and wand.

I also had the opportunity to try castAR for the first time. I’ve been quite curious about it, as there’s a lot of hype and bad information going around. First off, a gripe: “castAR” is a misnomer; it should have been “castVR.” For me, AR is about seamlessly inserting virtual objects into real environments, either via pass-through video or see-through headsets such as Microsoft’s HoloLens. castAR does neither: it shows virtual objects in front of a retro-reflective screen which occludes any real environment behind it. That’s “fish-tank” or “reach-in” VR, and functionally identical to CAVEs or other screen-based/projected VR displays (see Figure 4).

Figure 4: This is not Augmented Reality. (It’s not castAR, either, but that’s beside the point.)

Anyway, that’s semantics. So what about castVR… er, castAR? Turns out it’s quite good. I was skeptical about several details: stereo quality, tracking quality, brightness, field of view, and interaction quality. Before going into those, a brief explanation of how it works, as it’s very different from head-mounted VR à la Oculus Rift. castAR is also head-mounted, but instead of one or two screens, it has two tiny projectors right next to the viewer’s left and right eyes, respectively. The projectors throw their images straight out front, where the light hits a mat of retro-reflective material which throws it right back into the viewer’s eyes. As the projectors are very close to the eyes, castAR does not need to know exactly where the reflective mat is to set up proper projection matrices to show virtual objects. When combined with some form of 6-DOF head tracking, castAR can create the illusion of solid free-standing virtual objects floating anywhere between the viewer’s eyes and the reflective mat (in other words, castAR qualifies as a holographic display).

My main concern was stereo quality, as in the amount of cross-talk between the eyes. The retro-reflective mat will reflect light from the left-eye projector primarily back into the left eye (and likewise for the right eye), but the material is not perfect, and I was expecting noticeable cross-talk. What I did not know is that castAR also uses polarization to separate the left and right images (conveniently, the retro-reflective material is metallic and retains polarization), which leads to almost perfect stereo quality.

The second question was tracking. castAR uses optical tracking, based on two cameras in the headset (not clear if these are two separate cameras for larger field of view, or one stereo camera), and an active IR LED array. It is essentially an inside-out version of Oculus’ optical tracking system as of DK2, or a high-end version of the Wiimote’s camera-based tracker. The tracking system is serviceable, but not great. There was significant jitter (probably on the order of several mm to a cm) and noticeable lag, leading to virtual objects wobbling above the reflective mat instead of being locked in place. While annoying, it’s important to point out that this would not necessarily lead to simulator sickness as in HMD-based VR, as the virtual objects only occupy a relatively small part of the user’s visual field. I asked about tracking, and a castAR representative confirmed that tracking is currently purely optical, without sensor fusion with an inertial measurement unit. Even so, I’ve seen better pure optical tracking, given the short tracking distance and the large IR beacon (about 5″×5″).

The single input device is tracked in the same way, via two forward-facing cameras and the same IR beacon. While the same amount of jitter is less noticeable on input devices, the cameras’ limited field of view was more of a problem: obviously, tracking only works while the wand is pointed in the general direction of the IR beacon. This is an issue when picking up virtual objects and turning them: they stop responding once the wand is rotated away, which happened quite often to me, as I’m used to an interaction style based on full orientation tracking (see this video, for example). One of castAR’s demos was a light saber combined with a Jenga-like tower of bricks continually growing out of the table, and that one worked well because the application naturally forced the user to point the wand at the beacon. For general applications, as with head tracking, sensor fusion with an IMU would probably improve usability a lot.
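For illustration, here is a minimal sketch of what such fusion could look like: a toy complementary filter, not castAR’s actual code. The names, update rates, and gain are all made up.

```python
import numpy as np

ALPHA = 0.02  # how strongly each optical update corrects the gyro integral

def fuse(orientation, position, gyro_rate, dt, optical_pose=None):
    """One filter step. orientation is a small-angle rotation vector; a real
    implementation would use quaternions and likely a Kalman filter."""
    # Dead-reckon orientation from the gyro at IMU rate (e.g. 1000Hz):
    # smooth and low-latency, but it drifts over time.
    orientation = orientation + gyro_rate * dt
    if optical_pose is not None:
        # Optical pose arrives at camera rate (e.g. 60Hz): jittery and
        # laggy, but drift-free. Pull the integrated orientation towards it.
        opt_orientation, opt_position = optical_pose
        orientation = orientation + ALPHA * (opt_orientation - orientation)
        # Without accelerometer integration there is nothing to bridge
        # position between camera frames; at minimum, filter the jitter.
        position = position + 0.5 * (opt_position - position)
    return orientation, position
```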

On the upside, I was impressed by the projectors’ brightness and contrast. I had expected the image to be dim, but by virtue of the retro-reflective mat, almost no light is lost on the way from the projectors to the user’s eyes, and virtual objects appeared bright and solid even in the brightly-lit showroom. Definitely a positive surprise. One minor letdown was the projectors’ limited field of projection. The effective field of view of castAR from the user’s perspective is limited by two factors: the size of the reflective mat (anything not between the mat and the user’s eyes is cut off), and the maximum projection angle of the projectors. I did not consider the latter, coming from a screen-based VR background. I asked, and castAR’s projection field of view is 70°, which means virtual objects do get cut off well inside the viewer’s natural field of view, but in practice it didn’t bother me too much.

Finally, an observation about calibration. castAR’s software was drawing a graphical representation of the wand, but that virtual wand was offset by several inches relative to the real wand. This should not happen, given that both are tracked based on the same IR beacon. I’m wondering whether this is due to the projectors not being exactly co-located with the viewer’s eyes, and therefore still needing to know the plane equation of the reflective mat to set up proper projection parameters.
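For background, this is why the mat’s plane would matter: a viewpoint that is not exactly at the eye needs its own off-axis frustum through the screen rectangle, and that construction depends directly on where the screen is. Here is a minimal sketch of the standard construction (following Kooima’s “Generalized Perspective Projection”; not castAR’s actual code):

```python
import numpy as np

def off_axis_frustum(eye, ll, lr, ul, near):
    """Frustum extents for a viewpoint 'eye' looking through a screen
    rectangle with lower-left, lower-right, upper-left corners ll, lr, ul
    (all in world space). Feed the result to glFrustum or equivalent."""
    vr = (lr - ll) / np.linalg.norm(lr - ll)  # screen's right axis
    vu = (ul - ll) / np.linalg.norm(ul - ll)  # screen's up axis
    vn = np.cross(vr, vu)                     # screen normal, towards viewer
    dist = -np.dot(vn, ll - eye)              # viewpoint-to-plane distance
    left   = np.dot(vr, ll - eye) * near / dist
    right  = np.dot(vr, lr - eye) * near / dist
    bottom = np.dot(vu, ll - eye) * near / dist
    top    = np.dot(vu, ul - eye) * near / dist
    # (The full method also rotates world space into the (vr, vu, vn)
    # basis and translates the viewpoint to the origin.)
    return left, right, bottom, top
```

If the projectors use this kind of setup with a slightly wrong estimate of the mat’s plane, the result would be exactly the kind of constant offset I saw.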

All in all, castAR compares well to other screen-based VR systems. It has very good stereo quality (better than LCD-based 3D TVs) and good-enough brightness and contrast, and it’s easy to make the display area very large by tiling cheap retro-reflective mats. It’s conceivable to cover an entire room in those, for a full-solid-angle CAVE-like display, at which point one is only limited by the projectors’ 70° maximum angle. castAR’s big benefit over other screen-based VR displays, including CAVEs, is that the same display space can be used by multiple users wearing their own headsets, and seeing their own properly-projected images. This is a big deal for collaborative work. The retro-reflective material will probably reduce cross-talk between multiple headsets below noticeable levels (I was not able to test this). Oh, and it’s probably a lot cheaper than other screen-based systems. If they fix their 6-DOF tracking, I’m sold.

FOVE

I got a very brief and superficial demonstration of FOVE, “the world’s first eye tracking virtual reality headset” (not quite true; make that “the first consumer eye tracking VR headset,” but whatever). All I can say based on this two-minute demo is that it’s a working HMD (no positional tracking at this point), and that eye tracking worked well enough inside the center of the screen area, but got quite imprecise towards the edges. The demo included a brief calibration step at the beginning, where I was asked to look at a series of bright dots. After that it was a graphically very busy look&shoot game, making it hard to evaluate the screen’s resolution and quality.

Figure 5: FOVE, “the world’s first eye tracking virtual reality headset.”

I noticed one strange effect, though: view-based aiming broke down entirely when I lowered my head to look at enemies flying below my position. I can only imagine that this was due to a bug in the 3D ray calculations inside the game itself.
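For what it’s worth, view-based aiming boils down to composing the eye tracker’s gaze direction, which is reported in head space, with the head tracker’s full orientation. Here is a hypothetical sketch (not FOVE’s actual code) of the calculation, and of the kind of bug that would produce exactly this symptom:

```python
import numpy as np

def aiming_ray(head_pos, head_rot, gaze_dir_head):
    """World-space aiming ray. head_rot is the full 3x3 head orientation;
    gaze_dir_head is the eye tracker's gaze direction in head space."""
    return head_pos, head_rot @ gaze_dir_head

# The kind of bug I suspect: building head_rot from yaw alone (rotation
# about the vertical axis) keeps the aiming ray level, so it can never
# point down at enemies flying below the player.
```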

SMI’s eye-tracked Oculus Rift DK2 / AltspaceVR

I got a more involved demonstration of the other “world’s first eye tracking virtual reality headset”: SMI’s eye tracking upgrade for the Rift DK2. I had previously tested SMI’s eye tracking upgrade for the Rift DK1. This new one is a lot more refined: no longer is it necessary to cut a rectangular hole into the lenses; the new tracking cameras are behind the lenses, where they’re out of the way. From a functional point of view, however, there doesn’t seem to be much improvement from the previous version to the new one. Even after full calibration, there was still enough mismatch between real and sensed gaze direction that I could not use one of the demo applications without bringing the objects I was supposed to look at into center view first, somewhat defeating the purpose of eye tracking. In one demo I was supposed to look at a certain picture in a virtual gallery, and because I was sitting in a chair and the picture was behind me, I could not turn around and face it directly. I had to look at it close to the edge of the screen, and had to consciously look past the object to select it (there was no direct feedback, so I had to use some trial & error to find the correct spot).

This might be due to me wearing contact lenses. SMI’s tracking takes the shape of the eyeball and cornea into account, and my contact lenses, and the fact that they may slightly shift on my corneas as I look around, could cause these errors. Either way, it’s something that needs to be addressed. I’m assuming that contact lenses will get a boost in popularity once VR gets released commercially.

I wish I could compare SMI’s eye tracking accuracy to FOVE’s, but FOVE’s streamlined demo (how large were those object hitboxes anyway?) doesn’t allow that. We’ll have to wait for FOVE dev kits to reach knowledgeable developers.

I want to briefly mention AltspaceVR’s eye tracking integration. There were two components: gaze-based navigation and interaction, which worked well when bringing target objects or locations into center view, and avatar eye animation, which mapped eye directions and eye blinks onto AltspaceVR’s cute robot avatars. This latter component was quite janky; blinking detection worked only some of the time, and when looking at my own avatar in a mirror, my avatar’s eyes were looking back at me at most half the time. I’m assuming that eye tracking was a last-minute addition that still needs some improvements. But it was nice to see it done at all.

Wearality Sky Lenses

I had a second chance to try Wearality’s 150° FoV lenses, after my first encounter while waiting in the (interminable) check-in line at the SVVR ’15 Expo. Wearality’s biggest claim to fame, obviously, is the advertised 150° field of view. This time I had a chance to give that number a quick reality check. How does one measure the field of view of a VR HMD? It’s normally not easy, but the Wearality Sky’s open design made it possible: while wearing the headset, I was able to peek underneath the lenses and see the real-world background through the headset’s open frame (see Figure 6).

Figure 6: Wearality Sky clip-on head-mounted display for smartphones.

While wearing the headset, and splitting my attention between the virtual environment and the real-world background, I stepped back from Wearality’s display table until the left and right edges of the table lined up with the left and right edges of the image as seen through the lenses. After taking off the headset, I measured my distance from the table to be about three feet. With the table being around six feet wide, that yields an approximate field of view of 90°, not 150°. When I asked the person handing out headsets about this, he replied that the 150° number applies to phones with larger screens than the one being demonstrated. Unfortunately I did not measure the size of the phone that I had used, but it looked bigger than a Samsung Galaxy S5, maybe 5.5″.
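In numbers (assuming my rough estimates hold and my eyes were centered on the table):

```python
import math

table_width = 6.0  # feet
distance = 3.0     # feet
fov = 2.0 * math.degrees(math.atan((table_width / 2.0) / distance))
print(fov)  # 2 * atan(1) = 90 degrees; nowhere near 150
```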

Wearality’s web site doesn’t mention anything about FoV being screen size dependent — though granted, that should be obvious going in — and doesn’t provide a chart of FoV vs. screen size. And the person handing out headsets at the conference happily kept telling people to check out the 150° FoV, even after he had told me that this number only applies to bigger screens. What’s up with that?

I found out in the meantime that Wearality has a demo that can switch between different FoVs by putting differently sized black borders around the rendered image. As I did not know this during AWE, I did not check for it explicitly, but I was looking for evidence of chromatic aberration and lens distortion, and I am fairly sure there were no such black borders. In any case, what would have been the point of handing out headsets with artificially reduced FoVs while bragging about high FoV, without mentioning that the FoV is reduced and can be increased interactively?

4 thoughts on “On the Road for VR: Augmented World Expo 2015, Part I: VR”

  1. Would cheap lenses (plus possibly some hack on the software side to correct the additional distortions) work to increase the FOV of the CastAR projectors, or would they mess up the polarization?

  2. “what would have been the point of handing out headsets with artificially reduced FOVs while bragging about high FOV”

    To compare their FoV capability to their competitors, of course. It’s actually a rather elegant way of demonstrating this difference in a single package. Yes, they could have explained this better, but it is what it is.

  3. I’m glad it turns out you know about CastAR, and even got to go eyes-on. I’d hit a couple of your posts at some point, but I randomly came back and have been reading through the archives over the last few weeks. As a CastAR fanboy I was getting increasingly agitated – “He likes CAVEs, cares about ergonomics, writes about vergence-accommodation conflict, aims at telepresence and collaboration… this couldn’t be more relevant to his interests!”

    I am letting you know, in case you don’t already, that CastAR is back, with the company and the product both named Tilt Five. Long story short: the Technical Illusions founders got a couple rounds of VC funding, gave KS backers’ money back due to the conflict of interest, and got tricked out of power by the money men; the money men hired everybody in the world, burned/laundered all the money, and the company cratered a year or two later. After another year or so, the inventor Jeri Ellsworth and a small squad bought the IP back from the bank it had been mortgaged to for another try.

    They ran another Kickstarter last October and I happily backed again. They’d hoped to deliver alpha units in spring and consumer units in summer of 2020. Unsurprisingly, COVID-19 put paid to that, though I’d have expected a few months’ slip anyway because things happen. The current state is that they delivered alpha units over the course of the summer and have done several beta production runs at their production partners to make what seem to me like normal tweaks – examples are a cosmetic hinge misalignment due to distortion in an injection mold and insufficient glue in some cases on one part in their projection engines. They’re now aiming for consumer delivery in Q1 2021.

    One important thing you haven’t mentioned any of the times you talked about CastAR, and some things that may have changed since you saw the tech:

    T5 will be the first VR or AR HMD outside a lab not to provoke vergence-accommodation conflict. (The Avegant Glyph’s LED-illuminated DLP virtual retina display supposedly has the same property, but the Glyph isn’t VR.) The first factor in this property is the same reason a scanning laser display could theoretically always be in focus, just less so: The internal optics of LED-illuminated LCOS picoprojectors tend to lead to a high f-number and attendant high etendue/collimation, and retroreflection mostly preserves that etendue. The second factor, related to T5’s use of a surface relatively far from the eye, is that each point on the display is projected to a patch of large physical size behind its virtual location. This means that wherever the eye happens to be focused upon arrival after a saccade, the eye’s optics “select” an annular subset of the light that creates a sharp image on the retina and the accommodative reflex is satisfied. CastAR prototypes with off-the-shelf projectors already largely lacked VAC; when the time came to design their own projection engines, T5 were able to optimize for even higher f-number. It’s not quite equivalent to a light field or hologram display in that objects that “should” be out of focus are still in focus, but ergonomically, avoiding the unreality and discomfort of VAC achieves the vast majority of the goal.
    - Projectors are 1280×720 per eye. FOV is pretty much an editorial choice with HMPDs; T5 has chosen to stick with 15ppd resolution, increasing the FOV to 110° diagonal.
    - There’s an IMU now.
    - There is now an ASIC on the headset which reprojects/timewarps the last received framebuffers in 6DOF from the optical/IMU fusion. It does this for each field of the color-sequential LCOS display, so the image is stabilized at 180Hz. The illusion breaks down under large translations but is robust under rotations and small translations; its purpose is to allow use of headsets at low render framerates on modest and/or shared compute hardware. At even 5FPS, if your head moves 30cm to the side in 1/5th of a second, you have a problem in real life that’s more important than your virtual thing looking janky for a moment.
    - The headset tracking is now based on a constellation of dots in the retroreflective material around the edge of the board, with onboard illumination at the tracking camera’s bandpass wavelength of 850nm. This creates a physically larger fiducial pattern, larger and more uniformly illuminated blobs for the algorithm to read, and less vulnerability to other flickering IR light sources in the environment.
    - The wand tracking is now run via LEDs on the wand through the same camera and chip as the headset’s position tracking, so it’s explicitly headset-relative. An IMU in the wand communicates with the headset over Bluetooth Low Energy, along with the stick and buttons of course.
    - There’s now a 4K monochrome 940nm webcam accessible to software, intended for image processing to identify and track QR codes, objects, simple hand pose data… also with onboard illumination, pulsable to reduce motion blur. It largely ignores skin color, due to largely ignoring skin – it bounces off the hoo-man flesh underneath.

    Phew! I think that’s most of the update infodump. In theory it’s USD300 for a set including headset, wand and board, but preorders of that are sold out. The USD360 set, with a board extension that can also tip up into a quarter-pipe-like arrangement for taller virtual space in tabletop use, is still available. The main Kickstarter video is composited because of the need for lighting and camera angles that let actors communicate the experience to non-techies, but otherwise the campaign and updates are very informative, both in words and in videos and GIFs shot through the prototypes. https://www.kickstarter.com/projects/tiltfive/holographic-tabletop-gaming

    One of the best of those videos is a hands-on and technical interview with Norman Chan of Tested, who handles just about every HMD as part of his job. Ellsworth is definitely a friend of the show, including having appeared before for CastAR, but I don’t think he’d sacrifice his credibility by spinning the interview too much, or by falsifying the testimonials he gave to the campaign. https://www.youtube.com/watch?v=Jse-GwkcYgI

    It presents as a display device and needs USB3.1 with an alt mode, presumably DisplayPort. Plugins for Unity and Unreal engines will be available, as well as a C API that can talk to everything. Some of the team do their development on Ubuntu so it definitely works in Linux, but there’s conflicting info about officialness and timelines for Linux support at launch. It’s probably not a big topic right now.

    Following T5 also got me onto the research work Soomro and Urey have done on using PHMDs, of which T5’s the only one, with transparent retroreflective screens. I have tinkering dreams along those lines, but the proof of concept connected to your work doesn’t rely on it. https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-26-2-1161&id=380733 “Integrated display and imaging” uses their custom screen, an array of curved mirrors, and one camera – but it’s been overtaken by economics. These days it would probably be cheaper and easier, with better results, to wire up a dozen obsolete cell phone cameras on their standardized bus; a real camera array like CamRay https://hal.archives-ouvertes.fr/hal-01544645/document rather than a virtual one. One rectangle (or wall) of retro later (with tiny holes scattered through the display area), you have the equivalent of a Facebook Portal but big, light, foldable, 3D, with correct sightlines, low power consumption, and no moving parts. Admittedly, maybe eye contact doesn’t matter as much if everyone’s wearing 3D glasses – but I’m also interested in how bad the ghosting actually is from crosstalk at typical distances. Maybe users could get away with cutting off the polarized lenses and wearing just the headband.

  4. I came back to refer to this and was reminded that I’d misinformed myself on one point when I wrote it. Tilt Five doesn’t rely on USB alt modes. They (wisely) rolled their own blitting protocol. It needs 5Gbps on USB3, so USB 3.0 SuperSpeed or any later generation. Anything but the first couple of years of tentative USB3.
