Forum Replies Created
okreylos (Keymaster)
The arsandbox.org site is dead. I am hosting the current official version of the world map on my web page, https://web.cs.ucdavis.edu/~okreylos/ResDev/SARndbox/LinkWorldMap.html, with instructions on how to add your AR Sandbox. I can’t update the map right now due to IT issues, but am still collecting new locations.
okreylos (Keymaster)
This is usually a connection problem. First, check all the connections between the Kinect and the PC; the power adapter cable is sometimes temperamental. Then try plugging your Kinect into different USB ports on the PC. If none of that helps, see whether a different power adapter cable fixes it.
The above steps typically fix the problem.
okreylos (Keymaster)
“Unfortunately, we cannot calibrate our Epson EB-585Wi projector as shown in the video from 10:10.”
Can you elaborate on the calibration problem you’re having? Calibration should be independent of the precise projector model.
okreylos (Keymaster)
This is probably related to the fact that you are using a Kinect V2. While the software supports it, I haven’t done much testing with it, and it might require configuration changes.
You could do an experiment: Cut a hand shape out of a piece of cardboard (use your own hand as a template) and glue it to a stick. Then hold the cardboard hand above the sandbox, so that it’s at a right angle to the camera, meaning horizontal if the camera is mounted looking straight down, and see if you can make it rain that way. There might be an issue with depth tolerances in the hand detector, due to the Kinect V2’s higher depth resolution.
okreylos (Keymaster)
It may work, but I have never tested the code on any AMD graphics cards (I don’t have any). The reason I recommend Nvidia graphics cards is that their Linux drivers are superb. AMD’s drivers have always had problems in the past, so I cannot make any promises.
okreylos (Keymaster)
“I try to run the sandbox it tries to call the serial number of the old Kinect.”
That should not happen; the AR Sandbox code only refers to the Kinect camera by index, not by serial number. What is the exact message you are getting?
The issue is more likely that you don’t have a calibration data file for the new camera. You need to run
sudo KinectUtil getCalib 0
which was broken in version 4.1 of the Kinect package. I just released version 4.2 with a fix; if you used PullPackage to get the software, simply re-run
PullPackage Kinect
to install it.
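Putting it together, assuming you installed via PullPackage, the recovery sequence for a replacement camera is:
PullPackage Kinect
sudo KinectUtil getCalib 0
where the 0 is the index of the camera (the first Kinect connected to the PC is index 0), and getCalib downloads the factory calibration data stored inside the camera and saves it as a local calibration file.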
okreylos (Keymaster)
The “ran out of time by …” messages mean that the graphics card was not able to keep up with the requirements of the water simulation. This is not fatal by itself; it just means that there is a momentary slow-down in the visible speed of water propagation in the sandbox. This issue should not cause stability problems, lock-ups, or crashes.
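If those messages show up a lot, and assuming your SARndbox version supports it (check the output of SARndbox -h), you can reduce the per-frame load of the water simulation via the -ws command line option, which takes a relative water speed and a maximum number of simulation steps per frame, e.g.
./SARndbox -uhm -fpv -ws 1.0 20
The values here are only for illustration; lower the maximum step count until your graphics card keeps up.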
The
USB::Device::writecontrol: Device has been disconnected
error is serious: it means that there is a connection problem with the Kinect camera. Maybe the camera itself is bad, the cable or power supply are not up to spec, or the PC’s USB ports have issues. Either way, this is a hardware problem.
First, I would try plugging the Kinect into different USB ports on the host PC. If that does not improve stability, I would try replacing the camera’s power / USB adapter cable; those often seem to have issues. If that doesn’t help either, I would try to find a different Kinect camera.
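As a generic Linux-level check (this is not specific to the AR Sandbox software), you can verify whether the camera is dropping off the USB bus. With the Kinect plugged in, run
lsusb
and confirm that the Kinect’s devices are listed (a Kinect v1 shows up as Microsoft Xbox NUI Motor/Camera/Audio), then watch the kernel log while the sandbox runs:
sudo dmesg --follow
Repeated USB disconnect/reconnect messages there would confirm that the problem is in the hardware, below the application.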
For comparison, the AR Sandbox in UC Davis DataLab has been running continuously (no application restarts) for about three months now, with no issues.
okreylos (Keymaster)
I fixed the underlying issue that caused
KinectUtil getCalib
to fail. I pushed the changes to the PullPackage repository, meaning that to apply the fix, you simply re-run
PullPackage Kinect
and continue the installation procedure as usual.
okreylos (Keymaster)
I fixed this problem for real. The issue was a confusion between the units in which reply sizes are returned by the USB library. The USB library returns sizes in bytes, while the higher-level functions in the Kinect package assumed that sizes would be returned in USB words. Hence, the expected sizes didn’t match the actual sizes.
After fixing the underlying issue in Kinect::Camera, KinectUtil getCalib is working again.
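To illustrate the shape of the bug (the identifiers below are made up for this sketch; this is not the actual Kinect::Camera code): compare a byte count against a word count and the check can never succeed.

#include <cstddef>
#include <iostream>

/* Stub standing in for the low-level USB transfer; the USB library
   reports reply sizes in bytes: */
std::size_t usbControlTransfer(unsigned char* /*buffer*/, std::size_t maxSize)
	{
	return maxSize; // pretend the full reply arrived
	}

int main(void)
	{
	unsigned char buffer[284];
	std::size_t expectedWords = 142; // higher-level code counted 16-bit USB words

	std::size_t replyBytes = usbControlTransfer(buffer, sizeof(buffer));

	/* Buggy check: a byte count never equals the word count, so getCalib bailed out: */
	std::cout << "buggy check passes: " << (replyBytes == expectedWords) << std::endl;

	/* Fixed check: convert words to bytes first (1 USB word = 2 bytes): */
	std::cout << "fixed check passes: " << (replyBytes == expectedWords * 2) << std::endl;
	return 0;
	}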
I pushed the changes to version 4.2 of the Kinect package and made that the default version pulled by
PullPackage Kinect
In short, the fix is to re-run
PullPackage Kinect
which will insert the corrected package into the existing software stack.
okreylos (Keymaster)
I am seeing the issue reports. A core problem seems to be that
PullPackage Vrui
does not install some optional libraries (libtiff, libpng, …), and that later code assumes that those libraries are there. That’s a bug. I am working on the root problem. In the meantime, I updated the
PullPackage Vrui
command so that it hopefully pulls those libraries successfully. Re-run that command and see if it works.
okreylos (Keymaster)
Thank you for the bug report. I fixed the Kinect and SARndbox packages. Please pull both of them again:
PullPackage Kinect
PullPackage SARndbox
and it should work without errors.
okreylos (Keymaster)
There’s no need to use an external screen capturing application, or even Vrui’s Screenshot tool (which is actually meant to take “photos” inside 3D VR environments).
Vrui has built-in screenshot capability. Simply press “LeftWin+PrintScreen” and it will save the exact contents of the focused window as a PNG file in the current directory.
okreylos (Keymaster)
Yes, the version of the C++ standard library on your computer is too old. The code uses a function that was added to that library fairly recently.
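If you want to see what your system has, one generic way (a diagnostic sketch, not part of the installation instructions) is to print the standard library’s version macros:

#include <iostream>

int main(void)
	{
	std::cout << "__cplusplus = " << __cplusplus << std::endl;
#ifdef __GLIBCXX__
	// GNU libstdc++ encodes its release date, e.g. 20230528:
	std::cout << "libstdc++ release date: " << __GLIBCXX__ << std::endl;
#endif
#ifdef _LIBCPP_VERSION
	std::cout << "LLVM libc++ version: " << _LIBCPP_VERSION << std::endl;
#endif
	return 0;
	}

An old release date reported there would confirm that the standard library, rather than the AR Sandbox code, is the problem.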
okreylos (Keymaster)
You should mount it at a height such that its depth camera sees exactly the interior of the sandbox. You can check that via the live camera feed in RawKinectViewer.
Based on the fixed field-of-view of the depth camera, the optimal height is going to be a bit more than the width of the sandbox.
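As a back-of-the-envelope estimate (the exact numbers depend on the camera model), if the depth camera’s horizontal field of view is θ and the sandbox interior is w wide, the height at which the view just spans the width is
h = w / (2 · tan(θ / 2))
With θ around 57°, roughly a first-generation Kinect (treat this value as approximate), that gives h ≈ 0.9 · w; mounting somewhat higher than that leaves margin so the box walls don’t clip the view, which is where the “a bit more than the width” rule of thumb comes from.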
okreylos (Keymaster)
Depends on the level. Elementary school? High school? University?
When I have students over, I generally start out by talking about topographic maps, and how they relate to the actual 3D topography. I talk about elevation color mapping, and then contour lines, and how you would not gain or lose any elevation when walking along one on a hike.
When talking about contour lines, I use the opportunity to talk about steepest descent / gradient, and how water generally flows at a right angle to contour lines. I pick a spot on a hill, and let students guess where a drop of water would flow from there.
Then I pivot from there into water flow, and point out how water generally flows downhill, but not always, due to the momentum it picks up while flowing. I put a trench at the bottom of a hill, and then show how water flowing down the hill “jumps” the opposite edge of the trench. That lets me get into levees and flood control, and how engineers need to take water momentum into account when designing them. I like to build a high reservoir, fill it with water, make a dammed outflow channel, and then break the dam to show the water rushing out and flooding everything downstream.
Depending on the students’ level, I also talk about wave propagation. I build a large shallow lake and let the water surface come to rest. At that point the noise from the 3D camera creates small waves on the surface, and I ask the students what those are. I can then talk about how those are due to small movements of the sensed 3D terrain, and mention that while they are caused by measurement noise, they correspond to tiny earthquakes in the real world, and that the waves the students see are essentially mini-tsunamis. I can then explain how the waves expand and interfere with each other. If I was careful enough to build the lake with one deep end and a shallow shore on the other side, I can demonstrate refraction, where the waves approaching the shallow shore bend towards it so that they hit it almost at a right angle. From that I can draw the parallel to refraction of light waves.
I also like to use the “lava” function to talk about how different fluids behave differently. I make a mountain, let lava flow over it, and show how the lava oozes down the hill and sticks to it. I then change back to water on the fly and show how it immediately behaves very differently, and use that to talk about viscosity and how it influences flow.
It’s a really loose script. I also like making big lakes and dropping a handful of sand in there to make a big tsunami. Or simulate a landslide on one shore and the resulting wave (which is of local interest due to the history of Lake Tahoe).
As of recently, I’ve been using the bedding plane function to talk about geology, as in tectonic uplift of sedimentary layers and how to measure/predict subsurface structures from surface observations. The students see the red layer intersecting the surface, and I challenge them to imagine that the curvy red line they see is actually a flat surface. When they inevitably have a hard time with that, I ask them to stand in the right spot so that their eyes are inside the 3D extension of the subsurface plane, and then they get it when the red curve turns into a red straight line as if by magic.