Apple Patents Holographic Projector (no, not quite)

About once a day I check out this blog’s access statistics, and specifically the search terms that brought viewers to it (that’s how I found out that I’m the authority on the Oculus Rift being garbage). It’s often surprising, and often leads to new (new to me, at least) discoveries. Following one such search term, today I learned that Apple was awarded a patent for interactive holographic display technology. Well, OK, strike that. Today I learned that, apparently, reading an article is not a necessary condition for reblogging it — Apple wasn’t awarded a patent, but a patent application that Apple filed 18 months ago was published recently, according to standard procedure.

But that aside, what’s in the patent? The main figure in the application (see Figure 1) should already clue you in, if you’ve read my pair of posts about the thankfully failed Holovision Kickstarter project. It’s a volumetric display of some unspecified sort (maybe a non-linear crystal? Or, if that fails, a rotating 2D display? Or “other 3D display technology?” Sure, why be specific! It’s only a patent! I suggest adding “holomatter” or “mass effect field” to the list, just to be sure.), placed inside a double parabolic mirror to create a real image of the volumetric display floating in air above the display assembly. Or, in other words, Project Vermeer. Now, I’m not a patent lawyer, but how Apple continues to file patents on the patently trivial (rounded corners, anyone?), or on the exact thing Microsoft showed publicly in 2011, about a year before Apple’s application was filed, is beyond me.
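In fairness, the double parabolic mirror part is real and well-understood optics; it’s the same trick as the classic “Mirage” toy. Here is a quick sketch of the geometry, under the assumption that the patent’s mirror pair works like that toy: two coaxial paraboloids facing each other, each mirror’s focus coinciding with the other’s vertex.

```latex
% A paraboloid of focal length f, z = r^2/(4f), focuses an axis-parallel
% bundle of rays at its focus, and conversely collimates rays emitted
% from its focus. Stack two such mirrors so that each one's focus lies
% at the other's vertex; an object at the bottom vertex then sits at
% the top mirror's focus:
\[
  \text{object at } F_{\mathrm{top}}
  \;\xrightarrow{\;\text{top mirror}\;}\;
  \text{collimated bundle}
  \;\xrightarrow{\;\text{bottom mirror}\;}\;
  \text{real image at } F_{\mathrm{bottom}}
\]
% F_bottom is exactly the hole cut into the top mirror, so a real,
% free-floating image of the object appears at the opening.
```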

Figure 1: Main image from Apple’s patent application, showing the unspecified 3D image source (24) located inside the double-parabolic mirror, and the real 3D image of same (32) floating above the mirror. There is also some unspecified optical sensor (16) that may or may not let the user interact with the real 3D image in some unspecified way.


2D Desktop Embedding via VNC

There have been several discussions on the Oculus subreddit recently about how to integrate 2D desktops or 2D applications with 3D VR environments; for example, how to check your Facebook status while playing a game in the Oculus Rift without having to take off the headset.

This is just one aspect of the larger issue of integrating 2D and 3D applications, and it reminded me that it was about time to revive the old VR VNC client that Ed Puckett, an external contractor, had developed for the CAVE a long time ago. There have been several important changes in Vrui since the VNC client was written, especially in how Vrui handles text input, which means that a completely rewritten client could use the new Vrui APIs instead of having to implement everything ad-hoc.
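The core of any VNC client is surprisingly small, which is part of why a rewrite is feasible. Here is a minimal sketch using the stock libvncclient API; the frame buffer callback and the texture upload it hints at are my own placeholders, and the actual Vrui client naturally does things through Vrui’s APIs instead:

```cpp
// Minimal VNC client loop using libvncclient (ships with libvncserver).
// Goal: keep a local copy of the remote desktop's pixels that a 3D
// application can upload into a texture mapped onto a quad in the scene.
#include <rfb/rfbclient.h>

// Placeholder hook: mark the changed rectangle dirty so the render
// thread re-uploads that region of client->frameBuffer (e.g. via
// glTexSubImage2D) into the desktop texture.
static void frameBufferUpdated(rfbClient* client, int x, int y, int w, int h) {
    /* ... notify the renderer ... */
}

int main(int argc, char* argv[]) {
    rfbClient* client = rfbGetClient(8, 3, 4); // 8 bits/sample, RGB, 4 bytes/pixel
    client->GotFrameBufferUpdate = frameBufferUpdated;
    if (!rfbInitClient(client, &argc, argv)) // connects to the host given on the command line
        return 1; // rfbInitClient frees the client on failure
    for (;;) {
        int result = WaitForMessage(client, 100000); // wait up to 100ms for server data
        if (result < 0)
            break;
        if (result > 0 && !HandleRFBServerMessage(client))
            break;
    }
    rfbClientCleanup(client);
    return 0;
}
```

Input travels the other way, through SendPointerEvent() and SendKeyEvent(), and the latter is exactly the spot where Vrui’s new text input handling comes in.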

I recorded a video showing the new VNC client in action, embedded into LiDAR Viewer and displayed in a desktop VR environment using an Oculus Rift HMD, mouse and keyboard, and a Razer Hydra 6-DOF input device.


A Follow-up on Eye Tracking

Now this is why I run a blog. In my video and post on the Oculus Rift’s internals, I talked about distortions in 3D perception when the programmed-in camera positions for the left and right stereo views don’t match the current left and right pupil positions, and how a “perfect” HMD would therefore need a built-in eye tracker. That’s still correct, but it turns out that I could have done a much better job approximating proper 3D rendering when there is no eye tracking.

This improvement was pointed out by a commenter on the previous post. TiagoTiago asked if it wouldn’t be better if the virtual camera were located at the centers of the viewer’s eyeballs instead of at the pupils, because then light rays entering the eye straight on would be represented correctly, independently of eye vergence angle. Spoiler alert: he was right. But I was skeptical at first, because, after all, that’s just plain wrong. All light rays entering the eye converge at the pupil, and therefore that’s the only correct position for the virtual camera.

Well, that’s true, but if the current pupil position is unknown due to lack of eye tracking, then the only correct thing turns into just another approximation, and who’s to say which approximation is better. My hunch was that the distortion effects from having the camera in the center of the eyeballs would be worse, but given that projection in an HMD involving a lens is counter-intuitive, I still had to test it. Fortunately, adding an interactive foveating mechanism to my lens simulation application was simple.

Turns out that I was wrong, and that in the presence of a collimating lens, i.e., a lens that is positioned such that the HMD display screen is in the lens’ focal plane, distortion from placing the camera in the center of the eyeball is significantly less pronounced than in my approach. Just don’t ask me to explain it for now — it’s due to the “special properties of the collimated light.” 🙂
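For the curious, here is the rough argument as I currently understand it, to be taken with a grain of salt:

```latex
% Screen in the focal plane of a lens of focal length f: a pixel at
% offset x from the optical axis exits the lens as a parallel bundle
% with the fixed direction
\[
  \tan\theta = \frac{x}{f},
\]
% independent of eye position. Each pixel thus has one fixed direction.
% The eyeball's center of rotation stays put while the pupil swings
% around it as the eye verges, so a camera at that center reproduces
% the per-pixel ray directions for every gaze direction, whereas a
% camera at an assumed pupil position is only correct for one
% particular gaze.
```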

A Closer Look at the Oculus Rift

I have to make a confession: I’ve been playing with the Oculus Rift HMD for almost a year now, and have been supporting it in Vrui for a long time as well, but I haven’t really spent much time using it in earnest. I’m keenly aware of the importance of calibrating head-mounted displays, of course, and noticed right away that the scale of virtual objects seen through the Rift was way off for me, but I never got around to doing anything about it. Until now, that is.


Elon Musk discovers AR/VR

Serial entrepreneur Elon Musk posted this double whammy of cryptic messages to his Twitter account on August 23rd:

@elonmusk: We figured out how to design rocket parts just w hand movements through the air (seriously). Now need a high frame rate holograph generator.

@elonmusk: Will post video next week of designing a rocket part with hand gestures & then immediately printing it in titanium

As there are no further details, and the video is now slightly delayed (per Twitter as of September 2nd: @elonmusk: Video was done last week, but needs more work. Aiming to publish link in 3 to 4 days.), it’s time to speculate! I was hoping to have seen the video by now, but oh well. A deadline is a deadline.

First of all: what’s he talking about? My best guess is a free-hand, direct-manipulation, 6-DOF user interface for a 3D computer-aided design (CAD) program. In other words, something roughly like the molecular modeling interface I’ve shown before (just take away the hand-held devices and substitute NURBS surfaces and rocket parts for atoms and molecules, but leave the interaction method and everything else the same).
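Whatever the specifics, the core of such a direct-manipulation interface is usually plain 6-DOF “grab and drag”: when the hand closes on an object, remember the object’s transformation relative to the hand; while grabbed, re-apply that offset every frame so the object follows the hand rigidly. A minimal sketch using GLM (the class and names are mine, purely for illustration, not from any actual CAD package):

```cpp
// 6-DOF grab-and-drag: the object rigidly follows the tracked hand.
#include <glm/glm.hpp>

class Draggable {
public:
    glm::mat4 objectTransform{1.0f}; // the dragged part's model transform

    // Called when the tracked hand closes on the object:
    void startGrab(const glm::mat4& handTransform) {
        // Object pose expressed in the hand's coordinate frame:
        grabOffset = glm::inverse(handTransform) * objectTransform;
        grabbed = true;
    }

    // Called once per frame with the current 6-DOF hand pose:
    void drag(const glm::mat4& handTransform) {
        if (grabbed)
            objectTransform = handTransform * grabOffset;
    }

    void endGrab() { grabbed = false; }

private:
    glm::mat4 grabOffset{1.0f};
    bool grabbed = false;
};
```

The nice property of this formulation is that any rotation of the hand pivots the object around the hand itself, which is what users intuitively expect from grabbing something.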


Holovision revisioned

Boy, do I hate being wrong. And I was very wrong about the Holovision — A Life Size Hologram Kickstarter project (don’t bother; the page is down permanently) I talked about in a previous post. Why was I wrong? Because I gave those people way too much credit. I questioned their claims of life-size holograms, I questioned their PR material (because it helped me address a lingering point), but I didn’t question their most basic claim: 3D. I guess I’m too gullible.


The Holovision Kickstarter “scam”

Update: Please tear your eyes away from the blue lady and also read this follow-up post. It turns out things are worse than I thought. Now back to your regularly scheduled entertainment.

I somehow missed this when it was hot a few weeks ago, but I just found out about an interesting Kickstarter project: HOLOVISION — A Life Size Hologram. Don’t bother clicking the link; the project page has been taken down following a DMCA complaint and might never be up again.

Why do I think it’s worth talking about? Because, while there is an actual design for something called Holovision, and that design is theoretically feasible, and possibly even practical, the product the public was led to expect from the Kickstarter campaign is decidedly not. The concept imagery associated with the Kickstarter project presents this feasible technology in a way that (intentionally?) taps into people’s misconceptions about holograms (and I’m talking about the “real” kind of holograms, those involving lasers and mirrors and beam splitters). In other words, it might not be a scam per se, and it might even be unintentional, but it is definitely creating a false impression that might lead to very disappointed backers.

Figure 1: This image is a blatant lie.


Vrui on (in?) Oculus Rift

I wrote about my first impressions of the Oculus Rift developer kit back in April, and since then I’ve been working (on and off) on getting it fully and natively supported in Vrui (see Figure 1 for proof that it works). Given that Vrui’s somewhat insane flexibility is a major point of pride for me, what was it that I actually had to create to support the Rift? Turns out, not all that much: a driver for the Rift’s built-in inertial tracking unit and a post-processing filter to correct for the Rift’s lens distortion were all it took (more on that later). So why did it take me this long? For one, I was mostly working on other things and only spent a few hours here and there, but more importantly, the Rift is not just a new head-mounted display (HMD), but a major shift in how HMDs are (or will be) used.
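Of the two pieces, the distortion filter is the conceptually simple one. The Rift’s lenses introduce a pincushion distortion, so the renderer pre-compensates by barrel-distorting each eye’s image in a post-processing pass, using the well-known radial polynomial model. Here is a sketch of that model; the coefficients are the ballpark values published for the DK1, and this is not Vrui’s actual code, which does the warp per-pixel in a fragment shader:

```cpp
// Radial (barrel) distortion used to pre-correct HMD lens distortion.
// Points are in viewport coordinates normalized so that the lens center
// is at the origin; in a real renderer this runs in a fragment shader.
struct Vec2 { float x, y; };

Vec2 distort(const Vec2& p, const float k[4]) {
    float rSq = p.x * p.x + p.y * p.y; // squared distance from lens center
    float scale = k[0] + rSq * (k[1] + rSq * (k[2] + rSq * k[3]));
    return { p.x * scale, p.y * scale };
}

// Ballpark DK1 coefficients, as reported in the Oculus SDK documentation:
const float kDK1[4] = { 1.0f, 0.22f, 0.24f, 0.0f };
```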

Figure 1: The trademark “double-barrel” Oculus Rift screenshot, this time generated by a Vrui application.


On the road for VR: Augmented World Expo 2013

I’ve just returned from the 2013 Augmented World Expo, where we showcased our Augmented Reality Sandbox. This marked the first time we took the sandbox on the road; we had only shown it publicly twice before, during UC Davis’ annual open house in 2012 and 2013. The first obstacle popped up right from the get-go: the sandbox didn’t fit through the building doors! We had to remove the front door’s center column to get the sandbox out and into the van. And we needed a forklift to get it out of the van at the expo, but fortunately there were pros around.

Figure 1: Me, digging into the sandbox, with a few onlookers. Photo provided by Marshall Millett.


The Kinect 2.0

Details about the next version of Microsoft’s Kinect, to be bundled with the upcoming Xbox One, are slowly emerging. After an initial leak of preliminary specifications on February 20th, 2013, finally some official data are to be had. This article about the upcoming next Kinect-for-Windows mentions “Microsoft’s proprietary Time-of-Flight technology,” which is an entirely different way of sensing depth than the current Kinect’s structured light approach. That’s kind of a big deal.
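To see why, compare how the two approaches recover depth; the formulas below are the textbook versions, not anything Kinect-specific:

```latex
% Structured light (current Kinect): project a known dot pattern and
% triangulate from the disparity d of each dot, with baseline b between
% projector and camera and focal length f (in consistent units):
\[
  Z = \frac{b\,f}{d}
\]
% Time of flight (Kinect 2): measure the round-trip travel time of
% emitted light per pixel, in practice via the phase shift
% \Delta\varphi of light modulated at frequency f_m:
\[
  Z = \frac{c\,\Delta t}{2} = \frac{c}{2}\cdot\frac{\Delta\varphi}{2\pi f_m}
\]
```

One practical consequence: triangulation’s depth error grows roughly quadratically with distance, while time-of-flight error is largely distance-independent, which is part of why the switch matters.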

Figure 1: The Xbox One. The box on top, the one with the lens, is probably the Kinect 2.0. They should have gone with a red, glowing lens.
