On the road for VR (sort of…): ILMF ’13, Denver, CO

I just returned from the 2013 International LiDAR Mapping Forum (ILMF ’13), where I gave a talk about LiDAR Viewer (which I haven’t previously written about here, but I really should). ILMF is primarily an event for industry exhibitors and LiDAR users from government agencies or private companies to meet. I only saw one other person from the academic LiDAR community there, and my talk stuck out like a sore thumb, too (see Figure 1).

Figure 1: Snapshot from towards the end of my talk at ILMF ’13, kindly provided by Marshall Millett. My talk was a bit off-topic for the rest of the conference and scheduled for 8:30 in the morning, which hopefully explains the sparse audience.

But in the final analysis, it was still worth going, mostly for being able to see what the commercial “competition” is up to. When we started LiDAR Viewer development and presented it for the first time at the 2005 AGU fall meeting, nobody else was really talking about LiDAR at all. In later years, nobody else was talking seriously about working with the raw 3D point clouds instead of derived digital elevation models. At ILMF, everybody was heavily advertising point cloud viewers. It was reassuring to see that almost all commercial viewers are lagging years behind LiDAR Viewer’s capabilities. Most are completely in-core, meaning they can only show small subsets of large survey data sets at a time; some have hot-glued-on roaming features, such as one viewer (VrLiDAR) that automatically replaces the current subset with an adjacent one when the 3D cursor leaves the current subset’s boundaries (which is not seamless at all). None showed (but one claimed) real-time illumination (in LiDAR Viewer since December 2008), and none do surface-aligned splatting (in LiDAR Viewer 3.0 preview since last week).
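To illustrate the difference between those two ways of drawing points, here is a minimal sketch of how the corner vertices of a screen-aligned “fat pixel” and a surface-aligned splat might be constructed for a single LiDAR point. This is my illustration, not LiDAR Viewer’s or any vendor’s actual code; all names are made up, and it assumes a per-point normal vector has already been estimated from neighboring points.

// Hypothetical sketch contrasting a screen-aligned "fat pixel" with a
// surface-aligned splat for one LiDAR point. Not actual viewer code.

#include <array>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / l, v.y / l, v.z / l};
}

// "Fat pixel": a small square facing the camera. It fills screen space, but
// carries no surface orientation, so it cannot be lit properly, and the
// result reads as a fuzzy cloud rather than a solid surface.
std::array<Vec3, 4> fatPixel(Vec3 p, Vec3 camRight, Vec3 camUp, float r) {
    return {{p + (camRight + camUp) * r, p + (camRight - camUp) * r,
             p - (camRight + camUp) * r, p - (camRight - camUp) * r}};
}

// Surface-aligned splat: a small quad lying in the tangent plane defined by
// the point's (pre-estimated) normal vector.
std::array<Vec3, 4> surfaceSplat(Vec3 p, Vec3 normal, float r) {
    Vec3 n = normalize(normal);
    // Pick any vector not parallel to the normal to span the tangent plane:
    Vec3 ref = std::fabs(n.z) < 0.9f ? Vec3{0.0f, 0.0f, 1.0f} : Vec3{1.0f, 0.0f, 0.0f};
    Vec3 u = normalize(cross(n, ref));
    Vec3 v = cross(n, u);
    return {{p + (u + v) * r, p + (u - v) * r, p - (u + v) * r, p - (u - v) * r}};
}

The important part is that the splat lies in the point’s local tangent plane: overlapping splats from neighboring points close up into a solid surface that can be illuminated, which a pile of camera-facing squares can’t do.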

Just because I can, here is a very old (vintage 2006) video showing LiDAR Viewer version 1.0 in a CAVE:

There were only two commercial implementations that had true out-of-core abilities, and one of them was only present in vaporware form. The first was the very impressive Geoverse viewer by Euclideon. It uses an octree for out-of-core point cloud visualization and level-of-detail rendering, and looks and feels more or less exactly like LiDAR Viewer. On the upside, it has a more efficient background node loader, meaning that the multi-resolution rendering is less obvious than in LiDAR Viewer. On the downside, it doesn’t do real-time illumination, and only uses “fat pixels” instead of surface-aligned splats to attempt to fill in surfaces. And Euclideon was able to show off some very cool data sets, such as a 6 billion point terrestrial scan of the interior and exterior of a crumbling WWII bunker. Now that would have looked amazing with illumination and splatting. As it was, the fat pixel renderer made the interior look quite ethereal instead of concrete (pun intended).
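For the curious, here is a rough sketch of the out-of-core, level-of-detail approach described above, the way I would summarize it; this is an illustration with made-up names, not Geoverse’s or LiDAR Viewer’s actual code. Each octree node stores a subsampled copy of the points in its region, traversal only refines where the current view needs more detail, and children that are not in memory yet are requested from a background loader while the coarse points stand in for them.

// Hypothetical sketch of out-of-core, level-of-detail octree rendering for
// huge point clouds. Names and structure are made up for illustration.

#include <array>
#include <vector>

struct Point { float x, y, z; };

struct OctreeNode {
    float center[3];
    float size;                            // edge length of the node's cube
    std::vector<Point> points;             // subsampled points for this LOD
    std::array<OctreeNode*, 8> children{}; // some stay null (empty octants)
    bool isLeaf = false;                   // no children exist on disk
    bool childrenLoaded = false;           // children fetched into memory
};

// Stubs standing in for the real renderer and loader so the sketch compiles:
void drawPoints(const std::vector<Point>& pts) { (void)pts; /* submit to GPU */ }
float projectedPixelSize(const OctreeNode& node) { return node.size * 100.0f; /* placeholder */ }
void requestChildrenAsync(OctreeNode& node) { (void)node; /* hand off to loader thread */ }

// Render the (visible part of the) octree at just enough detail:
void renderNode(OctreeNode& node, float maxPixelsPerNode) {
    // A leaf, or a node that is small on screen, is drawn from its own
    // subsampled point set:
    if (node.isLeaf || projectedPixelSize(node) <= maxPixelsPerNode) {
        drawPoints(node.points);
        return;
    }

    // More detail is wanted, but the children may still be on disk: draw the
    // coarse points now, ask the background loader to fetch the children, and
    // refine on a later frame. A more efficient loader makes this refinement
    // step less noticeable, which is the difference mentioned above.
    if (!node.childrenLoaded) {
        requestChildrenAsync(node);
        drawPoints(node.points);
        return;
    }

    for (OctreeNode* child : node.children)
        if (child)
            renderNode(*child, maxPixelsPerNode);
}

(A real implementation would also cull nodes against the view frustum before recursing, and evict far-away nodes to bound memory use; both are left out here.)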

The other, vaporware, viewer was Terrasolid’s TerraStereo, which doesn’t even appear on their web site yet. The company rep explained that he couldn’t demonstrate it because the computer they had taken along for the booth couldn’t handle it. Of course. TerraStereo’s marketing materials claim pretty much exactly the same feature list as LiDAR Viewer’s, minus surface-aligned splat rendering (which in all honesty is somewhat vaporware-y right now as well). The pictures in the brochure could just as well be screen shots from LiDAR Viewer. Parallel evolution and all.

But from an immersive visualization point of view, we still appear to be the only game in town. Three products, VrLiDAR, TerraStereo, and Geoverse, do stereoscopic rendering, but none of them supports anything besides desktop stereo, or any 3D input devices, let alone cluster rendering or true holographic displays. I talked about immersive visualization in detail in my presentation, but didn’t get much feedback on that. There was a good amount of interest in LiDAR Viewer’s real-time illumination and the splatting renderer, though. We’ll see what’ll come of that.

So the bottom line is: point cloud visualization has finally arrived in the mainstream; visualization of truly large data sets not quite yet. Stereo is appearing on the horizon, but immersive visualization and holographic displays are still science fiction to this community. We’ll have to keep working on that.
