I’ve recently realized that I should urgently write about LiDAR Viewer, a Vrui-based interactive visualization application for massive-scale LiDAR (Light Detection and Ranging, essentially 3D laser scanning writ large) data.
I’ve also realized, after going to the ILMF ’13 meeting, that I need to make a new video about LiDAR Viewer, demonstrating the rendering capabilities of the current and upcoming versions. This occurred to me when the movie I showed during my talk had a copyright notice from 2006(!) on it.
So here I am killing two birds with one stone: Meet the LiDAR Viewer!
We recently started a flood risk visualization project using high-resolution LiDAR surveys of the southern San Francisco bay, and while downloading a large number of LiDAR tiles through the (awful!) USGS CLICK interface — compare and contrast it with the similarly aimed, yet orders of magnitude better, Cal-Atlas Imagery Download Tool — I accidentally found a large data set covering downtown San Francisco (or, rather, all of San Francisco, and then some). Tall buildings and city streets are a lot more interesting than marshes and sandbanks, so I decided to assemble the best 3D model of downtown San Francisco I could, based on available public domain LiDAR data and aerial photography, and show off LiDAR Viewer with it. The San Francisco LiDAR survey, flown in 2010, was co-sponsored by the USGS and San Francisco State University, towards the goals of the American Recovery and Reinvestment Act (see the ARRA Golden Gate LiDAR Project Webpage for more details). The aerial photography is of unknown provenance and vintage, but was available for download as part of the “HiRes Urban Imagery” collection at Cal-Atlas. Based on the captured states of ongoing construction in San Francisco, I estimate the imagery was collected between 2007 and 2009. If someone out there knows details, please tell me.
In total, I downloaded about 22 GB of LiDAR data in LAS format, and about 9 GB of aerial color imagery in GeoTIFF format. The former through CLICK (ugh!), the latter through Cal-Atlas (aah!). That provided me with 832 million 3D points, and 3.2 billion color pixels to map onto them. Using LiDAR Viewer’s pre-processor, it took 19 minutes to convert the LAS files into a single LiDAR octree, and another 7.5 minutes to map the color images onto the points. Finally, it took 40 minutes to calculate the per-point normal vector information required for real-time illumination and splat rendering in LiDAR Viewer. For comparison, it took three days to download the LAS and GeoTIFF files (I did this on my computer at home, on AT&T DSL, not in my lab).
Talking in detail about LiDAR Viewer’s underlying technology here would be redundant — just go to the web site — but in a nutshell, LiDAR Viewer supports seamless exploration of data sets that are extremely large both spatially (hundreds of square kilometers) and in data size (terabytes), using out-of-core, multiresolution, view-dependent rendering techniques. Basically, it does for 3D point cloud data what Google Maps does for 2D imagery, and Google Earth does for global topography (or, for the VR-inclined, replace Google Maps with Image Viewer and Google Earth with Crusta).
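To make that concrete, here is a minimal sketch of the kind of view-dependent octree refinement such a renderer performs. This is my own illustration with made-up names and structures, not LiDAR Viewer’s actual code:

```cpp
// Hypothetical sketch of out-of-core, view-dependent octree rendering;
// all names are illustrative, not LiDAR Viewer's actual API.
#include <cmath>
#include <vector>

struct OctreeNode
{
	float center[3];           // Center of the node's cubic domain
	float size;                // Edge length of the node's domain
	std::vector<float> points; // Subsampled representative points (x,y,z,...)
	OctreeNode* children[8];   // All null if this node is a leaf
	bool resident;             // Are this node's points currently in memory?
};

void drawPoints(const OctreeNode* node) { /* issue one GL draw call */ }
void requestLoad(OctreeNode* node) { /* queue an asynchronous disk read */ }

// Draw a node's subsampled points if they are dense enough for the current
// view; otherwise recurse into the children, falling back to the coarse
// parent while finer data is still being streamed in from disk.
void renderSubtree(OctreeNode* node,const float eye[3],float fovScale)
{
	float dx=node->center[0]-eye[0];
	float dy=node->center[1]-eye[1];
	float dz=node->center[2]-eye[2];
	float dist=std::sqrt(dx*dx+dy*dy+dz*dz);
	float projSize=node->size*fovScale/dist; // Approximate on-screen size
	
	if(projSize<1.0f||node->children[0]==0)
		drawPoints(node); // Coarse version is good enough, or node is a leaf
	else
	{
		bool allResident=true;
		for(int i=0;i<8;++i)
			if(!node->children[i]->resident)
			{
				requestLoad(node->children[i]);
				allResident=false;
			}
		if(allResident)
			for(int i=0;i<8;++i)
				renderSubtree(node->children[i],eye,fovScale);
		else
			drawPoints(node); // Keep showing the parent until the children arrive
	}
}
```

The point of the fallback in my sketch is that rendering never blocks on disk I/O: while finer nodes are streaming in, the coarser parent keeps being drawn, which is what makes the exploration feel seamless.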
Like all Vrui-based software, LiDAR Viewer is primarily aimed at holographic display environments with natural 3D interaction, i.e., CAVEs, fully-tracked head-mounted displays, or head-tracked 3D TVs. But I decided to use the desktop configuration of LiDAR Viewer for this video: first, because I was doing this from home; second, to drive home the point that Vrui software on the desktop is used in pretty much exactly the same way as native desktop software, and that Vrui’s desktop mode is fully usable, and not a debugging-only afterthought like in other VR toolkits that shall remain unnamed; third, because I really wanted to show off the awesome San Francisco scan, and it would have suffered if filmed with a video camera off a large-scale projection screen; and fourth, because I wanted to make the first 1 1/2 minutes of the video look as if the data were completely 2D, just to mess with people. 😉
So what are the new features in LiDAR Viewer? I’m still working towards version 3.0, which will address the two fundamental flaws of version 2.x: limited precision and pixel rendering.
Limited precision stems from LiDAR Viewer’s representation of point coordinates as 32-bit IEEE floating-point numbers. While floats can represent values between about -10^38 and +10^38, they only carry around 7 decimal digits of precision. That sounds like a lot, and isn’t a problem for terrestrial LiDAR scans, whose spatial extents are usually in the hundreds of meters, and which are often not geo-referenced. But airborne LiDAR data are typically in UTM coordinates, where huge offset values are added to point coordinates. If a data set has 1mm natural accuracy and an extent of ±100 meters, but 4 million meters are added to all y coordinates, the accuracy suddenly drops to around 1m. That is clearly not acceptable. LiDAR Viewer 2.x employs a workaround that transparently removes these offset values, but with very large surveys covering hundreds of square kilometers, loss of accuracy still creeps in towards the edges of the data. While the use of IEEE floats looks inexcusable in hindsight, LiDAR Viewer’s development started in 2004, when this problem wasn’t even on the horizon.
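To see the effect concretely, here is a tiny self-contained demonstration (my own example, not LiDAR Viewer code) of what a UTM-sized offset does to a 32-bit float coordinate:

```cpp
// Demonstrates the precision loss described above: adding a UTM-sized
// offset to a 32-bit float coordinate destroys millimeter-level detail.
#include <cstdio>

int main(void)
{
	float local=100.001f;           // Coordinate with 1mm detail, ~100m extent
	float utm=local+4000000.0f;     // Add a typical UTM-scale offset
	float recovered=utm-4000000.0f; // Try to get the original value back
	
	std::printf("local     = %.6f\n",local);     // prints 100.000999
	std::printf("recovered = %.6f\n",recovered); // prints 100.000000 -- the millimeter is gone
	return 0;
}
```

Near 4,000,000, consecutive 32-bit floats are 0.25m apart, so any finer detail in the offset coordinates is irrevocably rounded away.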
LiDAR Viewer 3.0 will solve the problem for good by using a multiresolution coordinate representation, which virtually allocates 80 bits for each coordinate, while only actually storing 16 bits per coordinate per point. 80 bits of precision would allow mapping the entire solar system at 7.5 picometer resolution, which is of course ridiculous. I chose 80 bits because it happens to be the combination of a 64-bit implicit prefix and a 16-bit explicit suffix; I’m not expecting that range to ever be used, but then, who knows. As a side effect, moving to the multiresolution representation will immediately cut file sizes in half.
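Here is how I picture such a prefix/suffix scheme laid out; this is a sketch based on the description above, with made-up names, and the actual LiDAR Viewer 3.0 format may well differ in the details:

```cpp
// Hypothetical layout of the 80-bit prefix/suffix coordinate scheme;
// names and details are illustrative, not the actual file format.
#include <cstdint>

// Per-octree-node data: a 64-bit prefix per axis, shared by all points
// inside the node, stored once in the node's header.
struct NodeHeader
{
	std::uint64_t prefix[3]; // Upper 64 bits of each coordinate
};

// Per-point data: only the lower 16 bits of each coordinate are stored
// explicitly, which halves coordinate storage compared to 32-bit floats.
struct PointRecord
{
	std::uint16_t suffix[3]; // Lower 16 bits of each coordinate
};

// Reconstruct one coordinate relative to the node's origin. 'unit' is the
// size of one quantization step, chosen when the octree is built. Because
// a suffix spans at most 65536 units, single precision is always enough:
inline float nodeLocal(std::uint16_t suffix,float unit)
{
	return float(suffix)*unit;
}

// The absolute coordinate, if ever needed, is (prefix*65536+suffix)*unit;
// a renderer would instead apply the prefix as one per-node translation.
```

Presumably this is also what makes the precision problem go away in the renderer: each octree node spans only a small coordinate range, so all per-point arithmetic happens in node-local coordinates where 32-bit floats are more than sufficient.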
The second issue is pixel rendering, which becomes a problem when zooming into LiDAR data beyond the scale supported by the data’s density. In other words, when looking at a point cloud from far away, the individual points will fuse and form a continuous surface, but when looking at it close up, the apparently solid surface will dissolve into isolated pixels (see Figures 2 and 3). On the one hand, one probably shouldn’t look at data at zoom factors not supported by the data, but on the other hand, everyone always does. LiDAR Viewer 3.0 will use a splat renderer that draws individual points as surface-aligned shaded disks instead of pixels. Because the disks’ radii are defined in model space, they will fuse into a closed surface no matter the zoom factor. Figures 2-5 show an example comparing LiDAR Viewer 2.x’s point rendering with a preview of LiDAR Viewer 3.0’s splat rendering.
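To illustrate the geometry, here is a small sketch of how a single point could be expanded into a normal-aligned disk. This is my own illustration; a real splat renderer would do this expansion on the GPU, but the underlying construction is the same idea:

```cpp
// Minimal sketch of point splatting: each point becomes a small disk (here
// an n-gon fan) lying in the plane perpendicular to the point's normal
// vector. Names and structure are illustrative.
#include <cmath>
#include <vector>

struct Vec3 { float x,y,z; };

static Vec3 cross(const Vec3& a,const Vec3& b)
{
	return Vec3{a.y*b.z-a.z*b.y,a.z*b.x-a.x*b.z,a.x*b.y-a.y*b.x};
}

static Vec3 normalize(const Vec3& v)
{
	float l=std::sqrt(v.x*v.x+v.y*v.y+v.z*v.z);
	return Vec3{v.x/l,v.y/l,v.z/l};
}

// Emit the rim vertices of a disk of model-space radius r, centered at
// point p, perpendicular to unit normal n. Because r is in model space,
// adjacent splats keep overlapping no matter how far the viewer zooms in.
std::vector<Vec3> makeSplat(const Vec3& p,const Vec3& n,float r,int sides=8)
{
	// Build an orthonormal basis (u,v) spanning the disk's plane:
	Vec3 ref=std::fabs(n.x)<0.9f?Vec3{1,0,0}:Vec3{0,1,0};
	Vec3 u=normalize(cross(n,ref));
	Vec3 v=cross(n,u);
	
	std::vector<Vec3> rim;
	for(int i=0;i<sides;++i)
	{
		float a=2.0f*3.14159265f*float(i)/float(sides);
		float cu=r*std::cos(a),cv=r*std::sin(a);
		rim.push_back(Vec3{p.x+u.x*cu+v.x*cv,p.y+u.y*cu+v.y*cv,p.z+u.z*cu+v.z*cv});
	}
	return rim;
}
```

Note how the disk orientation comes straight from the per-point normal vectors, which is exactly why the pre-processing step described above has to calculate them before splat rendering (or illumination) can work.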
The current splat renderer is only an experimental implementation and a work in progress, but it is good enough that I felt comfortable showing it off in the video, sort of as a “coming attractions” teaser. At least the scientist users of LiDAR Viewer are very excited about it.