So, apparently this is a business model: you trawl YouTube for videos with a decent number of views (not too many, mind you), uploaded by someone who is not a bona fide YouTube star or well-known personality, nor part of some content or ad exchange network, and file copyright claims on those videos. But you’re nice about it. You don’t threaten to take down those videos right away; you just give a friendly heads-up that some of the content in those videos is owned by you, and that you are therefore entitled to monetize those videos on behalf of (and instead of) the uploader. No big deal, it’s only fair, right?
Well, granted, it can be. YouTube is obviously a cesspool of blatant and gleeful copyright infringement. I especially like those uploaders who are rather clueless about copyright, and think it’s perfectly fine to rip off and upload someone else’s work, be it a music video or TV show episode or whole movie, as long as they put a disclaimer like “uploaded under fair use” or “no copyright infringement intended” into the video description. I very especially like the second “excuse,” because the cognitive dissonance is so delicious. “I just threw a well-aimed brick through your window, but I totally didn’t intend to do that!” Sure you didn’t. I have a few semi-popular videos on YouTube myself, and it grinds my gears if someone else re-uploads them, in lousy quality and with ads plastered all over. There was one case early on where a re-uploaded video got significantly more views and discussion than my original, and I had to go over there and answer questions. Wasn’t cool.
Anyway, back to topic. While there needs to be some mechanism for copyright holders to assert their rights, the current mechanism seems to be skewed towards appeasing “big content providers,” and seems open to abuse by, well, scum. For the former, exhibit A: “Sony Filed a Copyright Claim Against the Stock Video I Licensed to Them.” There’s really nothing I can add to that except that this is an instance where someone’s livelihood was seriously messed with.
As for (likely) abuse, last night I noticed a Content ID copyright claim on one of my aforementioned semi-popular videos.
The relevant part of the book, starting on page 29, is available online via Google Books. It’s cringeworthy because I’m talking about basically the same things I’m talking about these days, but I had a much harder time of it, as this was before the current VR renaissance. I probably failed entirely to get my main points across to an audience that had never experienced VR, and had never considered it anything but an old and busted thing from the ’90s.
I finally managed to get the Oculus Rift DK2 fully supported in my Vrui VR toolkit, and while there are still some serious issues, such as getting the lens distortion formulas and internal HMD geometry exactly right, I’ve already noticed something really neat.
I have a bunch of graphically simple applications that run at ridiculous frame rates (some get several thousand fps on an Nvidia GeForce GTX 770), and with some new rendering configuration options in Vrui 4.0 I can disable vsync and render directly into the display window’s front buffer. In other words, I can let these applications “race the beam.”
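For illustration, here is a minimal sketch of what that setup boils down to at the OpenGL/GLX level. This is not Vrui’s actual configuration interface (in Vrui 4.0 these are rendering options, not calls you make yourself); it just shows the raw mechanism of turning off vsync and drawing into the front buffer:

```cpp
// Hedged sketch, not Vrui's API: the raw GL/GLX calls behind
// "disable vsync and render into the front buffer."
#include <GL/gl.h>
#include <GL/glx.h>
#include <GL/glxext.h>

void setUpBeamRacing(Display* dpy, GLXDrawable drawable)
{
    // Disable vertical sync so finished frames are not held back until the
    // next vertical retrace (requires the GLX_EXT_swap_control extension;
    // other drivers expose glXSwapIntervalMESA or glXSwapIntervalSGI instead).
    PFNGLXSWAPINTERVALEXTPROC glXSwapIntervalEXT =
        (PFNGLXSWAPINTERVALEXTPROC)glXGetProcAddress((const GLubyte*)"glXSwapIntervalEXT");
    if(glXSwapIntervalEXT != 0)
        glXSwapIntervalEXT(dpy, drawable, 0);

    // Draw directly into the front buffer, i.e., the memory the video
    // controller is scanning out right now.
    glDrawBuffer(GL_FRONT);
}

void renderFrame(void)
{
    // ... draw the scene using the most recent tracking data ...

    // Push the finished frame towards scan-out immediately instead of
    // waiting for a buffer swap at the next vertical retrace.
    glFlush();
}
```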
There are two main results of disabling vsync and rendering into the front buffer: For one, the CPU and graphics card get really hot (so this is not something you want to do naively). But second, let’s assume that some application can render at 1,000 fps. This means that every millisecond, a new complete video frame is rendered into video scan-out memory, where it gets picked up by the video controller and sent across the video link immediately. In other words, almost every line of the Rift’s display gets a “fresh” image, based on the most up-to-date tracking data, which is flashed to the user’s retina without further delay. Put differently, total motion-to-photon latency for the entire screen is now down to around 1 ms. And the result of that is by far the most solid VR I’ve ever seen.
Since Microsoft’s Build 2015 conference, and increasingly since Microsoft’s showing at E3, everybody (including me) has been talking about HoloLens, and its limited field of view (FoV) has been a contentious topic. The main points being argued (fought) about are:
What exactly is the HoloLens’ FoV?
Why is it as big (or small) as it is, and will it improve for the released product?
How does the size of the FoV affect the HoloLens’ usability and effectiveness?
Were Microsoft’s released videos and live footage of stage demos misleading?
How can one visualize the HoloLens’ FoV in order to give people who have not tried it an idea what it’s like?
Measuring Field of View
Initially, there was little agreement among those who experienced HoloLens regarding its field of view. That’s probably due to two reasons: one, it’s actually quite difficult to measure the FoV of a head-mounted display; and two, nobody was allowed to bring any tools or devices into the demonstration rooms. In principle, to measure see-through FoV, one has to hold some object, say a ruler, at a known distance from one’s eyes, and then mark down where the apparent left and right edges of the display area fall on the object. Knowing the distance X between the left/right markers and the distance Y between the eyes and the object, FoV is calculated via simple trigonometry: FoV = 2×tan⁻¹(X / (2×Y)) (see Figure 1).
Figure 1: Calculating field of view by measuring the horizontal extent of the apparent screen area at a known distance from the eyes. (In this diagram, FoV is 2×tan⁻¹(6″ / (2×6″)) = 53.13°.)
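To make the formula concrete, here is a tiny numeric check of the Figure 1 example (the 6″ values come from the diagram, not from an actual HoloLens measurement):

```cpp
// Numeric check of FoV = 2*atan(X/(2*Y)) using the Figure 1 example values.
#include <cmath>
#include <cstdio>

int main(void)
{
    const double pi = 3.14159265358979323846;
    double x = 6.0; // distance between the apparent left/right edge markers (inches)
    double y = 6.0; // distance from the eyes to the measuring object (inches)

    double fovDegrees = 2.0*std::atan(x/(2.0*y))*(180.0/pi);
    std::printf("FoV = %.2f degrees\n", fovDegrees); // prints 53.13 for x = y = 6
    return 0;
}
```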
Last Friday I made a trek down to the San Francisco peninsula, to visit and chat with a few other VR folks: Cyberith, SVVR, and AltspaceVR. In the process, I also had the chance to try a couple of VR devices I hadn’t seen before.
Virtual locomotion, and its nasty side effect, simulator sickness, are a pretty persistent problem and a timely topic, with the arrival of consumer VR just around the corner. Many enthusiasts want to use VR to explore large virtual worlds, as in taking a stroll through the frozen tundra of Skyrim or the irradiated wasteland of Fallout, but as it turns out, that’s one of the hardest things to do right in VR.
Figure 1: Cyberith Virtualizer, driven by an experienced user (Tuncay Cakmak). Yes, you can jump and run, with some practice.
I attended the Augmented World Expo (AWE) once before, in 2013 when I took along an Augmented Reality Sandbox. This time, AWE partnered with UploadVR to include a significant VR subsection. I’m going to split my coverage, focusing on that VR component here, while covering the AR offering in another post.
eMagin 2k×2k VR HMD
eMagin’s (yet to be named) new head-mounted display was the primary reason I went to AWE in the first place. I had seen it announced here and there, but I was skeptical it would be able to provide the advertised field of view of 80°×80°. Unlike Oculus Rift, HTC/Valve Vive, or other post-renaissance HMDs, eMagin’s is based on OLED microdisplays (unsurprisingly, with microdisplay manufacture being eMagin’s core business). Previous microdisplay-based HMDs, including eMagin’s own Z800 3DVisor, were very limited in the FoV department, usually topping out around 40°. Magnifying a display that measures around 1 cm² to a large solid angle requires much more complex optics than doing the same for a screen that’s several inches across.
Figure 1: eMagin’s unnamed 2k×2k, 80°×80° FoV VR HMD with flip-up optics.
Yesterday, I attended the second annual Silicon Valley Virtual Reality Conference & Expo in San Jose’s convention center. This year’s event was more than three times bigger than last year’s, with around 1,400 attendees and a large number of exhibitors.
Unfortunately, I did not have as much time as I would have liked to visit and try all the exhibits. There was a printing problem at the registration desk in the morning, and as a result the keynote and first panel were pushed back by 45 minutes, overlapping the expo time; additionally, I had to spend some time preparing for and participating in my own panel on “VR Input” from 3pm to 4pm.
The panel was great: we had Richard Marks from Sony (Playstation Move, Project Morpheus), Danny Woodall from Sixense (STEM), Yasser Malaika from Valve (HTC Vive, Lighthouse), Tristan Dai from Noitom (Perception Neuron), and Jason Jerald as moderator. There was lively discussion of questions posed by Jason and the audience. Here’s a recording of the entire panel:
One correction: when I said I had been following Tactical Haptics’ progress for 2.5 years, I meant to say 1.5 years, since the first SVVR meet-up I attended. Brainfart.
I have briefly mentioned HoloLens, Microsoft’s upcoming see-through Augmented Reality headset, in a previous post, but today I got the chance to try it for myself at Microsoft’s “Build 2015” developers’ conference. Before we get into the nitty-gritty, a disclosure: Microsoft invited me to attend Build 2015, meaning they waived my registration fee, and they gave me, like all other attendees, a free HP Spectre x360 notebook (from which I’m typing right now because my vintage 2008 MacBook Pro finally kicked the bucket). On the downside, I had to take Amtrak and BART to downtown San Francisco twice, because I wasn’t able to get a one-on-one demo slot on the first day, and got today’s 10am slot after some finagling and calling in of favors. I guess that makes us even. 😛
I wasn’t able to talk about this before, but now I guess the cat’s out of the bag. About two years ago, we helped a team of archaeologists and filmmakers to visualize a very large high-resolution aerial LiDAR scan of a chunk of dense Honduran rain forest in the CAVE. Early analyses of the scan had found evidence of ruins hidden under the foliage, and using LiDAR Viewer in the CAVE, we were able to get a closer look. The team recently mounted an expedition, and found untouched remains of not one, but two lost cities in the jungle. Read more about it at National Geographic and The Guardian. I want to say something cool and Indiana Jones-like right now, but I won’t.
Figure 1: A “were-jaguar” effigy, likely representing a combination of a human and spirit animal, is part of a still-buried ceremonial seat, or metate, one of many artifacts discovered in a cache in ruins deep in the Honduran jungle. Photograph by Dave Yoder, National Geographic. Full-resolution image at National Geographic.