I was reminded today of a recent thread on the Oculus subreddit, where a redditor relayed his odd experience remotely viewing his father driving a simulated racecar:
“I decided to spectate a race he was in. I then discovered I could watch him race from his passenger seat. in VR. in real time. I can’t even begin to explain the emotions i was feeling sitting in his car, in game, watching him race. I was in the car with him. … I looked over to ‘him’ and could see all his steering movements, exactly what he was doing. I pictured his intense face as he was pushing for 1st.”
I don’t know if this effect has a name, or even needs one, but it parallels something we’ve observed through our work with Immersive 3D Telepresence:
In our work, we generally use highly effective “holographic” 3D avatars provided by real-time 3D video capture and transmission, but we have other means to represent remote users in shared virtual environments as well. Users participating from standard desktop systems are usually represented via properly positioned and oriented 2D video billboards captured from regular webcams, and our lowest-level representation is a set of simple glyphs: a ball with an arrow sticking out representing the user’s head position and viewing direction, and a cone representing the position and pointing direction of each of the user’s input devices. In other words, our most basic avatar boils down to a floating head and a floating hand.
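To make the glyph representation concrete, here is a minimal sketch in Python of what such an avatar boils down to as data. The class and field names are my own invention for illustration, not our actual code; in practice the poses would come from the tracking system each frame and be rendered as a ball, an arrow, and cones.

```python
from dataclasses import dataclass

@dataclass
class GlyphAvatar:
    """Minimal remote-user representation: a head glyph (ball plus
    arrow for viewing direction) and one cone per tracked input
    device. All names here are hypothetical, for illustration only."""
    head_pos: tuple   # (x, y, z) head-tracker position, in meters
    view_dir: tuple   # unit vector of the user's viewing direction
    wand_pos: tuple   # (x, y, z) position of the tracked wand
    wand_dir: tuple   # unit vector of the wand's pointing direction

    def head_arrow_tip(self, length=0.3):
        """Endpoint of the arrow sticking out of the head ball,
        i.e. a point `length` meters along the viewing direction."""
        x, y, z = self.head_pos
        dx, dy, dz = self.view_dir
        return (x + length * dx, y + length * dy, z + length * dz)

# Example: a user standing at the origin, eyes 1.7 m up, looking
# along -z, with the wand held out in front of them.
avatar = GlyphAvatar(head_pos=(0.0, 1.7, 0.0),
                     view_dir=(0.0, 0.0, -1.0),
                     wand_pos=(0.3, 1.2, 0.0),
                     wand_dir=(0.0, 0.0, -1.0))
```

That is the entire state a remote site needs to transmit for this lowest-level representation: two positions and two directions per user, a few dozen bytes per frame.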
But even so, our users have reported “seeing” the other person in those minimalist avatars, sometimes even being able to tell which person on the remote side is currently wearing the head-tracked glasses and the tracked wand, by “body language” alone. We have even seen some social protocols kick in, such as natural conflict resolution when two users reach for the same virtual object at the same time.

The sense of presence of a remote person is definitely stronger with more involved representations (strongest with real-time 3D video), but apparently something remains even with representations that barely deserve the name “avatar.” We have never investigated this in any formal manner, but I find it remarkable nonetheless. I want to emphasize that the sense of a remote person being present is much stronger, in fact almost unavoidable, with more realistic avatars, and especially with those based on 3D video. Comparatively, what we’ve seen with simple representations is a mere shadow.
The “avatar” in the above anecdote is even more minimalistic, being a complete absence of avatar, and yet another user, probably driven by special personal circumstances, still got a strong sense of a person being there, purely through the remote user’s effects on the environment, such as the movements of a car’s steering wheel. Whatever the reason, this bodes well for VR in the context of collaborative or social applications.
And here is today’s completely unrelated picture:
Well, OK, the picture is kinda related. So I lied.
Hi, it seems as though there are mesh stitching seams in the middle of your face. Does that mean you are using Kinects to your left & right rather than front & back? Is there a specific reason for this? Thanks, Eran.
Yes, a practical reason. I’m sitting directly in front of a 72″ 3D TV, which is my window into the shared virtual world. There is no place to put a camera. The cameras are at the TV’s top-left and top-right corners, looking directly at my typical working position. It’s a bit unfortunate that this always results in a seam running down the middle of my face, but them’s the breaks. Here is a picture of the setup: Note that the camera clusters you see in the corners are our pre-Kinect 3D cameras; the two Kinects I was using for this video are in the exact same spots.