Apple Patents Holographic Projector (no, not quite)

About once a day I check out this blog’s access statistics, and specifically the search terms that brought viewers to it (that’s how I found out that I’m the authority on the Oculus Rift being garbage). It’s often surprising, and often leads to new (new to me, at least) discoveries. Following one such search term, today I learned that Apple was awarded a patent for interactive holographic display technology. Well, OK, strike that. Today I learned that, apparently, reading an article is not a necessary condition for reblogging it — Apple wasn’t awarded a patent, but a patent application that Apple filed 18 months ago was published recently, according to standard procedure.

But that aside, what’s in the patent? The main figure in the application (see Figure 1) should already clue you in, if you read my pair of posts about the thankfully failed Holovision Kickstarter project. It’s a volumetric display of some unspecified sort (maybe a non-linear crystal? Or, if that fails, a rotating 2D display? Or “other 3D display technology?” Sure, why be specific! It’s only a patent! I suggest adding “holomatter” or “mass effect field” to the list, just to be sure.), placed inside a double parabolic mirror to create a real image of the volumetric display floating in air above the display assembly. Or, in other words, Project Vermeer. Now, I’m not a patent lawyer, but how Apple continues to file patents on the patently trivial (rounded corners, anyone?), or on the exact thing that Microsoft showed in 2011, about a year before Apple’s patent was filed, is beyond me.

Figure 1: Main image from Apple’s patent application, showing the unspecified 3D image source (24) located inside the double-parabolic mirror, and the real 3D image of same (32) floating above the mirror. There is also some unspecified optical sensor (16) that may or may not let the user interact with the real 3D image in some unspecified way.

Someone at Oculus is Reading my Blog

I am getting the feeling that Big Brother is watching me. When I released the initial version of the Vrui VR toolkit with native Oculus Rift support, it had magnetic yaw drift correction, which the official Oculus SDK didn’t have at that point (Vrui doesn’t use the Oculus SDK at all to talk to the Rift; it has its own tracking driver that talks to the Rift’s inertial measurement unit directly via USB, and does its own sensor fusion, and also does its own projection setup and lens distortion correction). A week or so later, Oculus released an updated SDK with magnetic drift correction.
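For the curious, here is a minimal sketch of the general technique behind magnetometer-based yaw drift correction: a complementary filter that integrates the gyroscope’s yaw rate and slowly pulls the result toward the absolute heading derived from the magnetic field. All names, the blend gain, and the tilt-compensation sign conventions are assumptions for illustration, not Vrui’s (or Oculus’) actual code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Orientation { float yaw, pitch, roll; }; // all angles in radians

// Tilt-compensate the magnetometer reading using the current pitch/roll estimate,
// then derive an absolute heading (yaw) from the horizontal field components.
float magneticYaw(const Vec3& mag, const Orientation& o)
{
    float mx = mag.x * std::cos(o.pitch) + mag.z * std::sin(o.pitch);
    float my = mag.x * std::sin(o.roll) * std::sin(o.pitch)
             + mag.y * std::cos(o.roll)
             - mag.z * std::sin(o.roll) * std::cos(o.pitch);
    return std::atan2(-my, mx);
}

// One fusion step: integrate the gyro's yaw rate (fast but drifting), then nudge
// the result toward the magnetic heading (noisy but absolute) to cancel drift.
void updateYaw(Orientation& o, float gyroYawRate, const Vec3& mag, float dt)
{
    const float pi = 3.14159265f;
    o.yaw += gyroYawRate * dt;
    float error = magneticYaw(mag, o) - o.yaw;
    while (error > pi) error -= 2.0f * pi;   // wrap the error into [-pi, pi]
    while (error < -pi) error += 2.0f * pi;
    o.yaw += 0.02f * error;                  // small gain: corrects drift slowly
}
```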

A little more than a month ago, I wrote a pair of articles investigating and explaining the internals of the Rift’s display, and how small deviations in calibration have a large effect on the perceived size of the virtual world, and the degree of “solidity” (for lack of a better word) of the virtual objects therein. In those posts, I pointed out that a single lens distortion correction formula doesn’t suffice, because lens distortion parameters depend on the position of the viewers’ eyes relative to the lenses, particularly the eye/lens distance, otherwise known as “eye relief.” And guess what: I just got an email via the Oculus developer mailing list announcing the (preview) release of SDK version 0.3.1, which lists eye relief-dependent lens correction as one of its major features.
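To make the eye relief point concrete, here is a minimal sketch of what eye-relief-dependent distortion correction could look like: calibrate the radial distortion polynomial at two known eye-relief settings and interpolate the coefficients for the actual user. All numbers and names are invented for illustration; this is not the Oculus SDK’s (or Vrui’s) actual model.

```cpp
#include <algorithm>

struct DistortionCoeffs { float k0, k1, k2; };

// Coefficient sets calibrated at two eye-relief settings (values are made up).
const DistortionCoeffs closeRelief = {1.0f, 0.22f, 0.24f}; // e.g. at  8 mm
const DistortionCoeffs farRelief   = {1.0f, 0.18f, 0.16f}; // e.g. at 18 mm

// Interpolate a coefficient set for the user's actual eye relief in millimeters.
DistortionCoeffs coeffsForEyeRelief(float reliefMm)
{
    float t = std::clamp((reliefMm - 8.0f) / (18.0f - 8.0f), 0.0f, 1.0f);
    return { closeRelief.k0 + t * (farRelief.k0 - closeRelief.k0),
             closeRelief.k1 + t * (farRelief.k1 - closeRelief.k1),
             closeRelief.k2 + t * (farRelief.k2 - closeRelief.k2) };
}

// The usual even radial polynomial: scale factor for a point at squared
// distance r2 from the lens center in the distortion-correction render pass.
float distortionScale(const DistortionCoeffs& c, float r2)
{
    return c.k0 + c.k1 * r2 + c.k2 * r2 * r2;
}
```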

Maybe I should keep writing articles on the virtues of 3D pupil tracking, and the obvious benefits of adding an inertially/optically tracked 6-DOF input device to the consumer-level Rift’s basic package, and those things will happen as well. 🙂

New Adventures in Blogging

Today I learned what happens when an article that does not have a picture in it gets reblogged (is that a word now?), and apparently an embedded YouTube video does not count as a picture: the blogging software scoops up whatever the first picture below the article headline happens to be, and if that picture is a commenter’s mugshot, so be it. Proof: see Figure 1.

Figure 1: How an article without a picture in the article is represented when reblogged by an artificial “intelligence.”

Note to self: always add a picture. Now I’m curious to see what will happen if this article gets reblogged. After all, it has a picture in it…

How to Measure Your IPD

Update: There have been complaints that the post below is an overly complicated and confusing explanation of the IPD measurement process. Maybe that’s so. Therefore, here’s the TL;DR version of how the process works. If you want to know why it works, read on below.

  1. Stand in front of a mirror and hold a ruler up to your nose, such that the measuring edge runs directly underneath both your pupils.
  2. Close your right eye and look directly at your left eye. Move the ruler such that the “0” mark appears directly underneath the center of your left pupil. Try to keep the ruler still for the next step.
  3. Close your left eye and look directly at your right eye. The mark directly underneath the center of your right pupil is your inter-pupillary distance.

Here follows the long version:

I’ve recently talked about the importance of calibrating 3D displays, especially head-mounted displays, which have very tight tolerances. An important part of calibration is entering each user’s personal inter-pupillary distance. Even when using the eyeball center as projection focus point (as I describe in the second post linked above), the distance between the eyeballs’ centers is the same as the infinity-converged inter-pupillary distance.
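For concreteness, here is a minimal sketch of how a measured IPD typically enters projection setup: the two per-eye projection centers are placed half an IPD to either side of a head-fixed midpoint. The names and the coordinate convention (x pointing right, units in meters) are assumptions for illustration, not Vrui’s actual API.

```cpp
struct Point3 { float x, y, z; };
struct EyePositions { Point3 left, right; };

// Place the two projection centers half an IPD to the left and right of the
// head-fixed midpoint between the eyes.
EyePositions eyePositionsFromIpd(const Point3& midEyePoint, float ipd)
{
    EyePositions eyes;
    eyes.left  = { midEyePoint.x - 0.5f * ipd, midEyePoint.y, midEyePoint.z };
    eyes.right = { midEyePoint.x + 0.5f * ipd, midEyePoint.y, midEyePoint.z };
    return eyes;
}

// Example: a 63 mm IPD (0.063 m) in a head-relative coordinate frame.
// EyePositions e = eyePositionsFromIpd({0.0f, 0.0f, 0.0f}, 0.063f);
```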

So how do you actually go about determining your IPD? You could go to an optometrist, of course, but it turns out it’s very easy to do it accurately at home. As it so happened, I did go to an optometrist recently (for my annual check-up), and I asked him to measure my IPD as well while he was at it. I was expecting him to pull out some high-end gizmo, but instead he pulled out a ruler. So that got me thinking.

Figure 1: How to precisely measure infinity-converged inter-pupillary distance using only a mirror and a ruler. Focus on the left eye in step one and mark point A; focus on the right eye in step two and mark point B; the distance between points A and B is precisely the infinity-converged inter-pupillary distance (and also the eyeball center distance).

What is Presence?

Disclaimer: Presence research is not my area of expertise. I’m basically speaking as an interested layperson, and just writing down some vaguely related observations that have re-occurred to me recently.

So, presence. What is presence, and why should we care? Libraries full of papers have been written about it, and there’s even a long-running journal of that title. I guess one could say that presence is the sensation of bodily being in a place or environment where one knows one is not. And why is it important in the discussion of virtual reality? Because it is often trotted out as the distinguishing feature between the medium of VR (yes, VR is the medium, not the content) and other media, such as film or interactive 3D graphics; in other words, it is often a feature that’s used to sell the idea of VR (not that there’s anything wrong with that).

But how does one actually measure presence, and know that one has achieved it? Some researchers do it by putting users into fMRI machines, but that’s not really something you can do at home. So here are a few things I’ve observed over sixteen years of working in VR, and showing 3D display environments and 3D software to probably more than 1,000 people, both experts and members of the general public:

Fighting Motion Sickness due to Explicit Viewpoint Rotation

Here is an interesting innovation: the developers at Cloudhead Games, who are working on The Gallery: Six Elements, a game/experience created for HMDs from the ground up, encountered motion sickness problems due to explicit viewpoint rotation when using analog sticks on game controllers, and came up with a creative approach to mitigate it: instead of rotating the view smoothly, as conventional wisdom would suggest, they rotate the view discretely, in relatively large increments (around 30°). And apparently, it works. What do you know. In their explanation, they refer to the way dancers keep themselves from getting dizzy during pirouettes by fixing their heads in one direction while their bodies spin, and then rapidly whipping their heads around back to the original direction. But watch them explain and demonstrate it themselves. Funny thing is, I knew that thing about dancers, but never thought to apply it to viewpoint rotation in VR.

Figure 1: A still from the video showing the initial implementation of “VR Comfort Mode” in Vrui.

This is very timely, because I have recently been involved in an ongoing discussion about input devices for VR, and how they should be handled by software, and how there should not be a hardware standard but a middleware standard, and yadda yadda yadda. So I have been talking up Vrui’s input model quite a bit, and now is the time to put up or shut up, and show how it can handle some new idea like this.
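As a taste of what such a handler boils down to, here is a minimal sketch of discrete “snap” rotation driven by an analog stick, in the spirit of Cloudhead’s approach: each push of the stick past a dead zone triggers exactly one fixed-size yaw step instead of a continuous rotation. The 30° step, the dead zone, and all names are illustrative assumptions, not code from Vrui or The Gallery.

```cpp
#include <cmath>

class SnapTurnController
{
public:
    // Call once per frame with the stick's horizontal axis in [-1, 1].
    // Returns the yaw increment (in radians) to apply to the viewpoint this frame.
    float update(float stickX)
    {
        const float deadZone = 0.75f;                     // push the stick far over
        const float stepRadians = 30.0f * 3.14159265f / 180.0f;
        float turn = 0.0f;
        if (std::fabs(stickX) > deadZone)
        {
            if (!wasPushed)                               // trigger once per push
                turn = (stickX > 0.0f) ? stepRadians : -stepRadians;
            wasPushed = true;
        }
        else
            wasPushed = false;                            // stick returned; re-arm
        return turn;
    }

private:
    bool wasPushed = false;
};
```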

I Can’t Ever Get Over This Mars Thing, Can I?

I have talked about KeckCAVES’ involvement in the Curiosity Mars Rover missions several times before, but I just found a set of cool pictures that I have not shared yet. I recently saw a reddit thread about a VR application to walk on the moon; one commenter asked about doing the same for Mars, and one thing led to another.

Can an application like that be done for Mars? Do we have enough data, and are the data publicly available? The answers are “yes, already done,” “kinda,” and “yes, but,” respectively.

As of my last check, there are two main sources of topography data for Mars. The older source is an orbital laser range survey done by the Mars Orbiter Laser Altimeter (MOLA). This is essentially a planetary LiDAR scan, and can be visualized using LiDAR Viewer. The two pictures I mentioned above are these (Figures 1 and 2):

Figure 1: Global visualization of Mars topography using the MOLA data set, rendered using LiDAR Viewer. Vertical scale is 5:1.

Figure 2: Close-up of global Mars topography data set (centered on the canals), showing individual laser returns as grey dots. The scan lines corresponding to individual orbital periods can clearly be identified. Vertical scale is 5:1.
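For a rough idea of what goes into a rendering like that, here is a sketch of converting longitude/latitude/elevation samples into 3D points on a sphere, with the elevation exaggerated 5:1 as in the figures. The mean radius, the names, and the simple-sphere assumption (the actual MOLA products are referenced to an areoid) are simplifications for illustration.

```cpp
#include <cmath>

struct Point3 { double x, y, z; };

// Convert one topography sample (degrees, degrees, meters) into a Cartesian point,
// applying a vertical exaggeration factor to the elevation above the mean sphere.
Point3 marsPointFromTopo(double lonDeg, double latDeg, double elevationM,
                         double verticalExaggeration = 5.0)
{
    const double pi = 3.14159265358979323846;
    const double marsMeanRadiusM = 3389500.0;   // mean Mars radius in meters
    double r = marsMeanRadiusM + verticalExaggeration * elevationM;
    double lon = lonDeg * pi / 180.0;
    double lat = latDeg * pi / 180.0;
    return { r * std::cos(lat) * std::cos(lon),
             r * std::cos(lat) * std::sin(lon),
             r * std::sin(lat) };
}
```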

More on Desktop Embedding via VNC

I started regretting uploading my “Embedding 2D Desktops into VR” video, and the post describing it, pretty much right after I did it, because there was such an obvious thing to do, and I didn’t think of it.

Figure 1: Screenshot from video showing VR ProtoShop run simultaneously in a 3D environment created by an Oculus Rift and a Razer Hydra, and in a 2D environment using mouse and keyboard, brought into the 3D environment via the VNC remote desktop protocol.
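For a rough idea of the client side of this, here is a sketch using libvncclient’s callback API (to the best of my knowledge of it): it connects to a VNC server and reports which screen rectangles changed; a real embedding would copy those rectangles from the framebuffer into the texture shown on the in-world desktop. This is a hedged illustration, not the actual code behind the video.

```cpp
#include <cstdio>
#include <rfb/rfbclient.h>

// Called whenever a rectangle of the remote desktop changes. client->frameBuffer
// holds the full image (client->width x client->height); a VR embedding would
// upload the changed rectangle into the in-world desktop texture here.
static void gotUpdate(rfbClient* client, int x, int y, int w, int h)
{
    std::fprintf(stderr, "update: %dx%d at (%d, %d) of %dx%d desktop\n",
                 w, h, x, y, client->width, client->height);
}

int main(int argc, char** argv)
{
    rfbClient* client = rfbGetClient(8, 3, 4);       // 32-bit RGBA framebuffer
    client->GotFrameBufferUpdate = gotUpdate;
    if (!rfbInitClient(client, &argc, argv))         // e.g. ./viewer host:0
        return 1;
    for (;;)
    {
        int result = WaitForMessage(client, 100000); // wait up to 100 ms
        if (result < 0)
            break;
        if (result > 0 && !HandleRFBServerMessage(client))
            break;
    }
    rfbClientCleanup(client);
    return 0;
}
```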
