A cluster of earthquakes always gets the news media interested in geology, at least for a short time, and Monday’s magnitude 4.4 in Southern California, following last week’s series of north coast quakes of up to magnitude 6.9, was no different. Our local media’s go-to guy for earthquakes and other natural hazards is Dr. Gerald Bawden of the USGS in Sacramento. Gerald also happens to be one of the main users of the KeckCAVES visualization facility and KeckCAVES software, and so he took an interview with our local Fox affiliate in the CAVE, “to get out of the wind,” as he put it.
Here’s the video. Caution: ads after the jump.
You’ve been warned. Just in case you’re wondering, the ads in the video are from, and benefit, FOX 40, not me.
As usual, the video doesn’t reflect how the CAVE is actually used by Gerald to study natural hazards — that wouldn’t easily fit into a 30-second video snippet (what’s shown in the video is KeckCAVES’ “training course”). So let me try to rectify that a bit.
Gerald’s primary tool is LiDAR (an acronym for Light Detection And Ranging, or a portmanteau of “light radar,” whatever floats your boat), or, more specifically, terrestrial LiDAR. In a nutshell, a LiDAR scanner shoots out a laser ray, which then hits some surface in the environment, and is partially reflected back to the LiDAR scanner. Based on the time it took the light to get there and back again, or based on phase differences between outgoing and incoming light, the LiDAR scanner can calculate the precise distance from the scanner to the surface point that was hit. If the outgoing direction of the laser ray is known, direction+distance define a single point in 3D space. If one laser ray captures one 3D surface point, then scanning the laser ray left/right and up/down can capture many such points, creating a dense 3D model of a physical environment. Figures 1 and 2 show an example LiDAR scan of a house dangling over a fresh landslide scarp (a data set Gerald created several years ago).
Figure 2 is a different viewpoint of the same house, to show the three-dimensional and point-based nature of LiDAR scans. Figure 1 shows approximately 500,000 3D points, and Figure 2 shows approximately 50,000.
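To make that range-and-direction geometry concrete, here is a minimal hypothetical sketch (in C++, not code from any KeckCAVES tool; the function and variable names are made up for illustration) of how one time-of-flight return plus the two scan angles turns into a single 3D point:

```cpp
#include <cmath>
#include <iostream>

struct Point3 { double x, y, z; };

// Speed of light in m/s; real scanners also correct for atmospheric effects.
const double c = 299792458.0;

// azimuth/elevation (radians) give the outgoing ray direction; roundTripTime (s)
// is how long the laser pulse took to reach the surface and come back.
Point3 lidarReturnToPoint(double azimuth, double elevation, double roundTripTime)
{
    double range = 0.5 * c * roundTripTime; // one-way distance to the surface hit
    return { range * std::cos(elevation) * std::cos(azimuth),
             range * std::cos(elevation) * std::sin(azimuth),
             range * std::sin(elevation) };
}

int main()
{
    // A return after ~66.7 ns corresponds to a surface roughly 10 m away.
    Point3 p = lidarReturnToPoint(0.3, 0.1, 6.67e-8);
    std::cout << p.x << ' ' << p.y << ' ' << p.z << '\n';
}
```

Scanning the ray left/right and up/down and repeating this calculation millions of times per second is all it takes to build up the point clouds shown in the figures.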
Gerald primarily uses LiDAR to study changes in natural landscapes, such as gradual slip along the San Andreas fault, or the sudden change in response to an earthquake. Doing so requires working closely with very complex and large 3D data sets (individual LiDAR scans can contain anywhere from several million to tens of billions of points), which in turn requires special software, and benefits from using holographic displays. For the former, Gerald uses KeckCAVES’ LiDAR Viewer; for the latter, he uses either the KeckCAVES facility, or a smaller stereoscopic (but not holographic) display in his office.
More precisely, Gerald’s workflow involves several separate processing, visualization, and analysis tools, but LiDAR Viewer plays an important role because it is the one that lets him interact with an entire LiDAR scan at once while still showing every detail. That lets him look for data quality issues such as scan alignment problems, remove noise, identify features, and take measurements for further analysis. As a result, Gerald spends a significant amount of time in the CAVE (and we can always tell he was there by the presence of diet cola cans in the wastebasket).
As an aside, Gerald recently upgraded his LiDAR hardware. His old scanner (which he used to create the data set shown in Figures 1 and 2) could only scan inside a pyramid of about 40° opening angle, had to be rotated manually to collect a full 360° scan, and could capture around 2,000 points per second. His new scanner is smaller and lighter, rotates automatically to collect a full 360° scan, has a built-in calibrated color camera to create true-color data sets, and can capture just shy of 1,000,000 (one million!) points per second. That’s progress for you. To test drive the new scanner, Gerald recently completed a very high-resolution scan of the KeckCAVES facility itself (to put the CAVE into the CAVE, so to speak). I’ll have to make a video featuring that one soon. Update: I kinda just did, after a fashion: Embedding 2D Desktops into VR.
Too bad they didn’t also use a tracker and shutter for the camera.
I know… I wasn’t there for this one, but even if I had been, and had insisted on filming the CAVE properly, they probably would still have decided to use the “bad” footage in the end. It looks “more dynamic” that way, they say. We’ve had much better luck with documentary filmmakers; news crews are almost impossible to control.
“…Gerald recently completed a very-high resolution scan of the KeckCAVES facility itself (to put the CAVE into the CAVE, so to speak). I’ll have to make a video featuring that one soon.”
How about uploading the data to an upload area here on the blog?
I’d love to make a virtual studio visit 🙂
Data set size is around 20 GB. I’m afraid if I upload it here, it will clog my pipes.
Isn’t there a way to stream just the data needed to display it on a user’s machine, and cache stuff it has already received?
Here, check this presentation: https://www.youtube.com/watch?v=a6NoWIdgKAw (It doesn’t give out all the secrets, but it’s enough for some inspiration, I think)
Hm, wait, you already seem to have something in that direction: https://www.youtube.com/watch?v=TXK26JomCyU …
The octree-based data format I use for LiDAR data is well suited to streaming; I just haven’t gotten around to implementing the client/server protocol and the server yet (there hasn’t been demand for it thus far). LiDAR Viewer already uses an out-of-core rendering technique that basically treats a local hard disk like a remote server and does RAM caching. It’s just a matter of replacing local disk accesses with network operations.
I have a quadtree-based high-resolution topography and imagery viewer that does data streaming the same way, and it works quite well.
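If it helps to picture it, here’s a hypothetical sketch of that out-of-core layer (not the actual LiDAR Viewer code; the class and method names are made up): octree node data sits in a backing store and is pulled into a RAM cache on demand, and streaming would essentially amount to replacing the file read with a network request.

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <unordered_map>
#include <vector>

struct NodePoints
{
    std::vector<float> xyz; // packed x,y,z triples for one octree node
};

class OctreeCache
{
public:
    explicit OctreeCache(const std::string& fileName)
        : file(fileName, std::ios::binary) {}

    // Return the points of the node stored at the given file offset, reading
    // from the backing store only on a cache miss.
    const NodePoints& getNode(std::uint64_t offset, std::uint32_t numPoints)
    {
        auto it = cache.find(offset);
        if (it != cache.end())
            return it->second; // cache hit: no I/O at all
        NodePoints node;       // cache miss: load from the backing store
        node.xyz.resize(std::size_t(numPoints) * 3);
        file.seekg(std::streamoff(offset));
        file.read(reinterpret_cast<char*>(node.xyz.data()),
                  std::streamsize(node.xyz.size() * sizeof(float)));
        return cache.emplace(offset, std::move(node)).first->second;
    }

private:
    std::ifstream file;                                   // local disk standing in for a remote server
    std::unordered_map<std::uint64_t, NodePoints> cache;  // in-RAM node cache
};
```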
I can imagine… the dilemma of LiDAR data.
I suspect you’ve heard about the Oculus->FB deal.
Would you lighten up or cement the pessimism aired in the gaming community on the subject? I guess this falls in the topic-for-another-post category.
Considering the fate of other startups bought by Facebook, and Facebook’s questionable ethics when it comes to their users, I am worried.
In an ideal world, Facebook would get a karma upgrade from supporting a company like Oculus, but considering Facebook’s reputation and the size difference, from my perspective it looks like FB’s miasma will infect Oculus 🙁
Selling a company this early is usually very bad for the company itself. Often it’s bought just for parts: they gut it for its employees and patents and leave it to rot, or management changes destroy the company culture and get in the way of developing the product and treating customers how they used to. Though in some cases that is secretly better than the alternative, which could have been the company being so unprofitable that it simply gets shut down (the owners bail out with some profit, or at least a smaller loss, and give their former employees a chance to find new jobs more easily). I can’t predict the future, nor do I have insider information about Oculus; but either way, I am worried. In the past, stuff like this usually didn’t turn out well in the end…
Oh, here is their official video about it: https://www.youtube.com/watch?v=Irf-HJ4fBls (not sure why I didn’t find it the first time I tried)
Oops, seems I didn’t click reply… weird, I could swear I had…
Oh yeah, I know Euclideon well. They’re using the same technology as I am, but they have much nicer data to play with. They don’t do dynamic lighting or splat rendering, though, and of course their stuff doesn’t work in VR.
What do you mean by nicer data? Access to high-resolution LiDAR data?
They use very high-resolution scans for their demos and videos, but the biggest difference is that they apparently spend a lot of time cleaning up their data to make it look as good as possible. We don’t clean up data beyond what’s necessary to get the science done, so it doesn’t end up that pretty.
Hehe… I think all LiDAR data is pretty, cleaned or not, but in my opinion looking at any kind of 3D data without a stereo display is a waste. That goes even more for point clouds than for meshes, because of the transparencies. I get increasingly confused every time I’m forced to navigate point clouds on a 2D display.