Forum Replies Created
Thank you for the bug report. I fixed the Kinect and SARndbox packages. Please pull both of them again:
PullPackage Kinect
PullPackage SARndbox
and it should work without errors.
There’s no need to use an external screen capturing application, or even Vrui’s Screenshot tool (which is actually meant to take “photos” inside 3D VR environments).
Vrui has built-in screenshot capability. Simply press “LeftWin+PrintScreen” and it will save the contents of the focused window exactly as a PNG file in the current directory.
Yes, the version of the C++ standard library on your computer is too old. The code uses a function that was added to that library fairly recently.
You should put it at a height such that its depth camera sees exactly the interior of the sandbox. You can check that via the live camera feed in RawKinectViewer.
Based on the fixed field-of-view of the depth camera, the optimal height is going to be a bit more than the width of the sandbox.
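The height needed to cover the whole box follows from simple trigonometry. Here is an illustrative sketch (the function name and the default FOV values, which match the original Kinect's roughly 57°×43° depth camera, are my assumptions; the exact result also depends on any margin you want around the box):

```python
import math

def required_camera_height(sandbox_width, sandbox_depth,
                           h_fov_deg=57.0, v_fov_deg=43.0):
    """Height above the sand at which a depth camera with the given
    fixed field of view just covers the whole sandbox interior.
    Both the width and depth constraints must be satisfied, so the
    larger of the two required heights wins."""
    h = sandbox_width / (2.0 * math.tan(math.radians(h_fov_deg / 2.0)))
    v = sandbox_depth / (2.0 * math.tan(math.radians(v_fov_deg / 2.0)))
    return max(h, v)

# Example: a 40" x 30" sandbox needs the camera roughly 38" above the sand.
print(round(required_camera_height(40.0, 30.0), 1))
```

In practice you would mount the camera slightly higher than this minimum so the box edges stay comfortably inside the frame.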
Depends on the level. Elementary school? High school? University?
When I have students over, I generally start out by talking about topographic maps, and how they relate to the actual 3D topography. I talk about elevation color mapping, and then contour lines, and how you would not gain/lose any elevation when walking along them on a hike.
When talking about contour lines, I use the opportunity to talk about steepest descent / gradient, and how water generally flows at a right angle to contour lines. I pick a spot on a hill, and let students guess where a drop of water would flow from there.
Then I pivot from there into water flow, and point out how water will generally flow downhill, but not always due to the momentum it picks up when flowing. I put a trench at the bottom of the hill, and then show how water flowing down the hill “jumps” the opposite edge of the trench. That lets me get into levees and flood control and how engineers need to take water momentum into account when designing levees. I like to build a high reservoir, fill it with water, make a dammed outflow channel, and then break the dam to show the water rushing out and flooding everything downstream.
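That “momentum” is explicit in the equations the water simulation solves; the AR Sandbox's water is based on the Saint-Venant shallow water equations. In one spatial dimension (for brevity; the simulation solves the 2D version over the sand surface) they read roughly:

```latex
% h: water depth, u: flow velocity, B: sand surface elevation, g: gravity
\begin{align}
  \frac{\partial h}{\partial t}
    + \frac{\partial (hu)}{\partial x} &= 0 \\
  \frac{\partial (hu)}{\partial t}
    + \frac{\partial}{\partial x}\!\left(hu^2 + \tfrac{1}{2} g h^2\right)
    &= -g h \,\frac{\partial B}{\partial x}
\end{align}
```

The $hu^2$ term is the momentum flux: fast-moving water carries its momentum with it, which is exactly what lets it climb the far edge of a trench instead of politely stopping at the bottom.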
Depending on the students’ level, I also talk about wave propagation. I build a large shallow lake and let the water surface come to rest. At that point the noise from the 3D camera creates small waves on the surface, and I ask the students what those are. I can then talk about how those are due to small movements of the sensed 3D terrain, and mention they are due to measurement noise, but that they correspond to tiny earthquakes in the real world, and that the waves they see are essentially mini-tsunamis. I can then explain how the waves expand and interfere with each other. If I was careful enough to build the lake with one deeper end and a shallow shore on the other side, I can demonstrate refraction, where the waves approaching the shallow shore bend towards it so that they hit it almost at a right angle. From that I can draw the parallel to refraction of light waves.
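The refraction demo has a one-line explanation. In shallow water, surface waves travel at a speed that depends only on the local depth:

```latex
% c: wave speed, g: gravity, h: local water depth
c = \sqrt{g\,h}
```

The part of a wavefront over shallower water lags behind the part over deeper water, so the front pivots toward the shore, just like light bending toward the normal when it enters a slower medium.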
I also like to use the “lava” function to talk about how different fluids behave differently. I make a mountain, let lava flow over it, and show how the lava oozes down the hill and sticks to the hill. I then change back to water on-the-fly and show how it immediately behaves very differently, and use that to talk about viscosity and how it influences flow.
It’s a really loose script. I also like making big lakes and dropping a handful of sand in there to make a big tsunami. Or simulate a landslide on one shore and the resulting wave (which is of local interest due to the history of Lake Tahoe).
As of recently, I’ve been using the bedding plane function to talk about geology, as in tectonic uplift of sedimentary layers and how to measure/predict subsurface structures from surface observations. The students see the red layer intersecting the surface, and I challenge them to imagine that the curvy red line they see is actually a flat surface. When they inevitably have a hard time with that, I ask them to stand in the right spot so that their eyes are inside the 3D extension of the subsurface plane, and then they get it when the red curve turns into a red straight line as if by magic.
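The “curvy red line is really a flat plane” insight can be captured with a little vector math. Here is an illustrative sketch (the function name, coordinate frame, and strike/dip sign conventions are my own choices, not SARndbox's): a point on the sand surface is colored red exactly when it falls between the layer's top plane and a parallel plane one thickness below it.

```python
import math

def in_layer(point, strike_deg, dip_deg, elevation, thickness):
    """Return True if a 3D point (x, y, z) lies inside a planar
    sediment layer of the given thickness.

    Conventions used here: x is east, y is north, z is up; the
    layer's top plane passes through (0, 0, elevation); strike is
    measured clockwise from north, and the plane dips to the right
    of the strike direction by dip_deg."""
    s = math.radians(strike_deg)
    d = math.radians(dip_deg)
    # Upward-pointing unit normal of the dipping top plane.
    n = (math.sin(d) * math.cos(s),
         -math.sin(d) * math.sin(s),
         math.cos(d))
    x, y, z = point
    # Signed distance from the top plane (negative = below it).
    dist = n[0] * x + n[1] * y + n[2] * (z - elevation)
    return -thickness <= dist <= 0.0
```

The intersection of this slab with the bumpy sand surface is a curvy band, but standing so your eye lies in the slab's plane collapses the band back into a straight line, which is the trick described above.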
The current Kinect package only supports older first-generation RealSense cameras. Intel completely changed the way software has to talk to the second-generation cameras (415 and 435).
I’ve tried a 435 camera and found it worked worse for the AR Sandbox than the original Kinect, so I didn’t prioritize writing a driver for it. We are currently looking into driver support for newer 3D cameras.
Let me dig up information about the DEM mode (where the sandbox guides you towards re-creating a 3D terrain model) and the water/lava/etc. shader magic from the old forum that went down.
In the meantime, here’s a list of AR Sandbox functionality that I may elaborate on in later posts:
- Basic topography mode (elevation colors, contour lines, etc.)
- Change elevation color map
- Change contour interval (vertical distance between contour lines)
- Change topography update latency
- Pause terrain updates
- DEM mode (load elevation model; visualize difference between sand and model)
- Save current sand surface as elevation model
- Water mode
- Global rain via key
- Global evaporation via key
- Mouse-driven rain/evaporation
- Rain through hand gesture
- Water simulation speed factor
- Change attenuation (roughly viscosity)
- Change water shader for different visual effects
- Geology visualization
- Draw sub-surface layer with arbitrary strike, dip, elevation, and thickness
Yes, the AR Sandbox’s water simulation typically does not work unless there is a discrete GPU to run the necessary calculations.
The other errors you were seeing (permission problems when accessing Kinect and needing to run SARndbox with sudo, calibration file not existing, etc.) are due to something going awry during software installation. Please follow the steps in the software installation instructions carefully.
When it freezes, does it wake up again when you move the mouse or press a key? What kind of OS and desktop environment are you running? It might be that the OS puts the desktop in some form of low-power mode when it doesn’t detect user input for a while. There is code in the SARndbox application to prevent that, but some desktop systems might ignore that.
Does water work when you add it globally? Press and hold some key, e.g., “1”, and move the mouse to highlight “Manage Water” in the tool selection menu that pops up. Then release the key you pressed. Then press and release a second key, e.g., “2”.
This will bind a water tool to the two keys you chose. Now press and hold the first key to add water everywhere.