Update: There is now an AR Sandbox support forum with complete, detailed installation instructions starting from a blank/new PC, and a video walking through those same instructions. You’re welcome to read the rest of this article for context and background information, but please ignore the outdated hardware recommendations and installation instructions below. Instead, use the up-to-date hardware recommendations from the AR Sandbox project page, and follow the instructions linked above.
Earlier this year, I branched out into augmented reality (AR) to build an AR Sandbox:
I am involved in an NSF-funded project on informal science education for lake ecosystems, and while my primary part in that project is creating visualization software to drive 3D displays for larger audiences, creating a hands-on exhibit combining a real sandbox with a 3D camera, a digital projector, and a powerful computer seemed like a good idea at the time. I didn’t invent this from whole cloth; the project got started when I saw a video of such a system done by a group of Czech students on YouTube. I only improved on that design by adding better filters, topographic contour lines, and a physically correct water flow simulation.
The idea is to have these AR sandboxes as more or less unsupervised hands-on exhibits in science museums, and allow visitors to informally learn about geographical, geological, and hydrological principles by playing with sand. The above-mentioned NSF project has three participating sites: the UC Davis Tahoe Environmental Research Center, the Lawrence Hall of Science, and the ECHO Lake Aquarium and Science Center. The plan is to take the current prototype sandbox, turn it into a more robust, museum-worthy exhibit (with help from the exhibit designers at the San Francisco Exploratorium), and install one sandbox each at the three sites.
But since I published the video shown above on YouTube, where it went viral and gathered around 1.5 million views, there has been a lot of interest from other museums, colleges, high schools, and private enthusiasts in building their own versions of the AR sandbox using our software. Fortunately, the software itself is freely available and runs under Linux and Mac OS X, and all the hardware components are available off-the-shelf. One only needs a Kinect 3D camera, a data projector, a recent-model PC with a good graphics card (an Nvidia GeForce GTX 480 or better to run the water simulation, or pretty much anything with water turned off) — and an actual sandbox, of course.
In order to assist do-it-yourself efforts, I’ve recently created a series of videos illustrating the core steps necessary to add the AR component to an already existing sandbox. There are three main steps: two to calibrate the Kinect 3D camera with respect to the sandbox, and one to calibrate the data projector with respect to the Kinect 3D camera (and, by extension, the sandbox). These videos elaborate on steps described in words in the AR Sandbox software’s README file, but sometimes videos are worth more than words. In order, these calibration steps are:
- Step 2 (optional, but recommended): Internally calibrate the Kinect camera and then check the calibration result
- Step 4: Calculate sandbox base plane
- Step 5: Measure 3D extents of sand surface
- Step 7: Calibrate projector with respect to Kinect 3D camera
Step 1 is optional and will get a video as time permits, and steps 3, 6, and 8 are better explained in words.
Important update: when running the SARndbox application, don’t forget to add the -fpv (“fix projector view”) command line argument. Without it, the SARndbox won’t use the projector calibration matrix that you so carefully calibrated in step 7. It’s in the README file, but apparently nobody ever reads that. 😉
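For illustration, a typical launch from inside the SARndbox directory then looks something like this (a sketch only; add whatever other options your particular setup needs, such as a custom color map or contour line spacing):

./bin/SARndbox -uhm -fpv

where -uhm selects an elevation color map and -fpv applies the projector calibration from step 7.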
The only component that’s completely left up to each implementer is the sandbox itself. Since it’s literally just a box of sand with a camera and projector hanging above, and since its exact layout depends a lot on its intended environment, I am not providing any diagrams or blueprints at this point, except a few photos of our prototype system.
Basically, if you already own a fairly recent PC, a Kinect, and a data projector, knock yourself out! It should be possible to jury-rig a working system in a matter of hours (add 30 minutes if you need to install Linux first). It’s fun for the whole family!
Pingback: Build your own Augmented Reality Sandbox | Doc-Ok.org
Pingback: … and they did! | Doc-Ok.org
Hi, please help me. How do I launch the simulated water on the AR Sandbox? Please help. Regards, Saina
There is nothing special to enable water in the sandbox software; it’s on by default. You’ll need to set up a water simulation bounding box (part of the BoxLayout.txt file), as described in the documentation. To test the water simulation — rain can be a bit finicky — create a water tool. Press any key (say ‘1’), and move the mouse while holding that key to the “Manage Water” menu item, then release the key. When the dialog box pops up, press another key, say ‘2’. Then, when you press (and hold) ‘1’, the sandbox will be flooded with water; if you press (and hold) ‘2’, the sandbox will be drained.
The biggest practical issue is graphics driver support. You need a powerful discrete graphics card, something like an Nvidia GeForce 480, and the vendor-supplied binary drivers, to run the shaders necessary for water simulation. There are good tutorials on installing the drivers for a variety of Linux distributions online.
My setup seems not to be using the GPU, with either a GTX 650 Ti or a Quadro K4000.
nvidia-smi shows “No Running Compute Processes found.” Any ideas?
The GPU water simulation code is based on GLSL shaders, not on CUDA or other GPGPU packages. I don’t know anything about nvidia-smi, but that’s probably why nothing shows up. If the water simulation were running on your CPU, you’d get maybe one new frame every 10 seconds or so.
Another way to check is to use the nvidia-settings utility, click on the tab for your GPU (“GPU 0 – (GeForce GTX 680)” in my case), and look at the “GPU Utilization” value. When the water simulation code is running, it should be anywhere between 20% and 100%.
My problem was solved. Box has been up and running for a bit now. Done many demos. Adults and children really enjoy it. GINA UAF ALASKA
Oh, it’s you! I missed this comment originally and just saw it yesterday. I have a picture of your AR Sandbox on my external installations page, with a link to the article in your newspaper. Do you have a more appropriate / permanent web page to which I can link?
Hello, I have this message:
[root@localhost SARndbox-1.5]# ./bin/SARndbox -c 0
0.155481 x 0.15066
Caught exception Sandbox: Not all required extensions are supported by local OpenGL while initializing rendering windows
can you help me?
(this is on CentOS Linux)
Thank you
That’s most probably a graphics driver issue. Check that you have the proper vendor-supplied (Nvidia or AMD/ATI) binary drivers installed.
Pingback: AR Sandbox Support Forum | Doc-Ok.org
Hi,
This is really amazing. I want to build a sandbox in a large format, 4x3m, for a light festival. Are there any limitations on the size from the Kinect’s field of view?
best regards
Edvin
4x3m is much too large to be useful. The fundamental problem is the Kinect’s limited field-of-view and resolution: since its field-of-view is roughly 1:1, the camera has to be mounted about as high above the sand as the box’s long side. To cover a 4x3m sandbox, the Kinect would have to be mounted 4m above the sandbox, and 4m is way outside its usable distance range. A 2m x 1.5m sandbox would already be very low-resolution. 1m x 0.75m is about the ideal size.
At the moment the only feasible approach for large-area sandboxes would be multiple Kinects, but that would be a major installation and maintenance problem, and the software doesn’t support it, either.
Hi, I’m working on building this and I would appreciate some measurements: the height (and depth) of the sandbox, and the height of the metal pole from which the projector/Kinect are hung.
We found the ideal AR Sandbox size is 40″x30″, which places the Kinect (and the projector) approximately 40″ above the intended average sand surface.
How deep do you have the box, and how deep in the box does your sand average? How much sand do you end up using (like, how many bags do you buy)? Thanks!
Those details are in the installation instructions.
THANK YOU THANK YOU THANK YOU!!!
I saw that a couple of months ago, and for some reason couldn’t find it again for the life of me!
Ok sorry to bother again but…I don’t see anything about how deep the box itself should be. With 4″ of sand, I’d imagine you’d want *at least* 6″ for the box, but some of the videos I’ve seen appear to be a bit deeper. Can you give any insight for that? Thanks!
There is no clear rule. I think our sandbox enclosure is 8″ deep, but I would have to go and measure it.
Hi! I’ve compiled all the programs and have the system almost working. For now I’m testing against a wall and having some issues. Can you answer some questions?
1- When I run the sandbox (with no object in front of the wall), I don’t get the green textured image, but a running simulation instead, with the water and the whole environment. What do you think would be causing this?
2- About the BoxLayout file: can you explain the coordinate system and how modifications to it affect the sandbox simulation?
3- How far can an object be from the wall? When I put my hand close to the wall, it is recognized in the simulation, but if I put something thicker (about 8cm) in front of it, it starts the rain simulation.
I’ll put some images of the calibration and the running simulation here.
http://imageshack.com/a/img537/2954/EyOQA6.png
http://imageshack.com/a/img538/2831/wztoGI.png (Here in Step 5 my reconstruction always ends up in front of or behind the axes; I didn’t manage to place it between the two coordinate axes like in the video.)
http://imageshack.com/a/img912/8937/tcojHD.png
http://imageshack.com/a/img633/8228/JV5CRG.png
Congratulations on your great work! I can’t wait to get it running here.
1: Without setting up a BoxLayout.txt file, the Sandbox software does not know what the zero-level plane relative to the camera is. The result will be some more or less random topography model.
2: The first line sets up the geometric plane equation for the zero-level plane relative to the camera, in camera units (can be mm or cm, depending on camera model, in the current software version). The first three numbers define the plane’s orientation as a normal vector. For example, if the camera points straight down, then the vector should be (0, 0, 1) meaning that the terrain’s “up” direction points along the camera’s Z axis. If the camera points down at a 45 degree angle, the vector should be (0, 0.7071, 0.7071), etc. The fourth number is the distance from the zero-level plane to the camera coordinate system’s origin, which is the lens center of the (virtual) depth camera, in camera units. For example, if the camera points straight down, i.e., the normal vector is (0, 0, 1), and the camera is 120cm above the desired zero level, then the fourth number would be -120.0 (negative because the camera looks along the negative Z axis).
During calibration, you determine the plane equation as described, resulting in a distance value in the appropriate unit, i.e., centimeters or millimeters. You can then adjust the distance value to move the zero-level plane higher or lower, as convenient for your specific setup.
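As a made-up example (use your own measured values, and keep the exact layout of the BoxLayout.txt file shipped with the software), the base plane line for a camera pointing straight down from about 120cm above the desired zero level would read:

(0.0, 0.0, 1.0), -120.0

The remaining lines of the file give the corner points of the water simulation box in the same camera-relative coordinates.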
3: The sandbox has a range of valid topographic elevation values, by default the range of the elevation color map. The software ignores surfaces outside that range. Similarly, there is a valid range for “rain objects.” Both ranges are relative to the zero-level plane defined in BoxLayout.txt, using the camera’s native measurement unit. The BoxLayout.txt file and height color map shipped with the SARndbox software are in centimeters. If your camera uses millimeters (check the distance value in your plane equation), you will have to multiply all elevation values in the color map file by ten.
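As a concrete (made-up) example: a color map entry such as

0.5 0 105 71

meaning an elevation of 0.5cm mapped to a green tone, would become

5.0 0 105 71

for a millimeter-based camera. Only the elevation column is scaled; the three color components stay as they are.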
Looking at your pictures, your camera works in millimeters. Do the adjustments I mention above, and also run “SARndbox -h” to see a full list of command line options. You need to pass some non-default values to make the software work well, such as a different contour line interval.
Thank you very much! You helped me a lot! I will change the settings here and tell you the results I get.
Here’s a video of the simulation running (I made this video before this post).
Hi! I think I got this to work now. I made the changes you suggested:
1- In BoxLayout, I multiplied all the box coordinates (only the first line of the file) by 10.
2- The default value of the contour line spacing argument was 0.75. I changed it to 7.5 when running the program, using -cls 7.5.
3- In HeightColorMap I multiplied only the first column by 10. When I multiplied all the values by ten, the colors became strange. I’ll put the pictures here.
* HeightColorMap with all values multiplied by ten:
http://imagizer.imageshack.us/a/img633/1051/eKMfo7.png
* With only first column multiplied:
http://imageshack.com/a/img661/3143/IZPIzX.png
Thanks again! And congratulations on this great work!
My full line of execution is:
primusrun ./SARndbox -cls 7.5 -uhm HeightColorMap.cpt
The 2nd to 4th columns in a color map file are RGB color components, so no, don’t multiply those by 10. 🙂
Glad that it’s working now.
I’m not using a projector and skipped steps 1 and 2.
When I run the program, the message “3.04204 x 3.0091” appears.
That’s the simulation cell size, and confirms that your camera works in millimeters. You will have to change some default settings.
Hey, I want to learn how to build my own sandbox; I mean, I want to make my own application. Could you help me, please?
Go to the AR Sandbox project page, and visit the AR Sandbox community forum.
Hello! I’ve tried to access the Sandbox project page multiple times and I keep getting a “this webpage is not available” message.
I am going to this site: http://idav.ucdavis.edu/~okreylos/ResDev/SARndbox
Any clue as to why I cannot access the page?
Looks like our web servers are down. They should be back up in a little while.
Wonderful. My other question is whether you think this is a feasible task for someone who has never done anything like this before. I don’t have experience with computer systems or programming. Is there enough direction in your videos and discussions that I could manage to build one of these? Or would I need someone who has experience with programming?
It’s hard for me to say, but it should be fine. There’s no programming knowledge required to set up an AR Sandbox, but you’ll have to be willing to work inside a terminal window, and ideally to install Ubuntu Linux.
We have detailed installation instructions on the AR Sandbox Forum; you can check those out and see if they look doable.
No, it is probably not doable for someone with no experience in Linux. The instructions will say things like “Open a text editor and type blah blah blah.” If you don’t know how to open a text editor in Linux, you won’t be able to do this. In Linux, you cannot just double click the text editor. You have to type something like
“cd Sandbox
sudo nano master.txt
ctrl-O
ctrl-X” just to open the text editor and save the file. If that text didn’t make any sense, then you’re going to have a very tough time.
“In Linux, you cannot just double click the text editor.”
That has not been entirely accurate since about 1998 or so. In Linux Mint, for example, there is an icon labeled “Text Editor” in the main menu, which you can double-click to open a text editor. You can then open a file using a file selection dialog.
Yes you’re right. It is very confusing to me. I’m typing in stuff and I don’t understand what it means. Then people online are saying type this and that for various functions which are not indicated in the manual.
Has anybody tried running this on a Raspberry Pi (with water turned off, of course)?
Seems to work: Use of Raspberry Pi for the computer.
Just wondering about costs, approx is fine 🙂 …I want to create a budget for myself and set a goal to save towards. I am blown away by this project and would really love to make one!!
Short-throw projector around USD 600, first-generation Kinect around USD 100. If you already have a computer you can use, the rest is a bunch of plywood, two-by-fours, duct tape, baling wire, and a few buckets of sand. If you want to run the water simulation in a responsive way, you’ll have to invest around USD 300 into a good gaming graphics card, such as an Nvidia GeForce GTX 770.
Your generosity is infinite. Thanks for sharing! Do you install? For a system on a mountain ridge eco reserve in Costa Rica?
We don’t have staff to do on-site installations, but it’s not that hard to do it yourself.
Got it. We’ll give it a try. Wondering how I can tailor the application to complement the rainforest environment and our eco approach. Any ideas?
That’s a good question. We haven’t thought about it in that context, and it’s probably not directly applicable as it is, but one potential addition we’ve discussed was a simplified simulation of vegetation growth in response to water availability. It’s a pretty big piece of work, though.
That would be useful for a site like this. It is for an eco reserve retreat center, sitting on 250 acres of primary rainforest. Here is our website: http://www.samasati.com . Perhaps a simulation that can help us to better manage our water resources would be helpful. We have been catching water on site for almost 20 years now.
Hi, first up: thanks for your wonderful project!
I’m in the middle of setting up my own sandbox for a school project, but unfortunately the server is still down.
Any news on when it’s going online again?
It’s going through a file system check right now.
The site that was hosting your code seems to have taken it offline. The URL is returning a 404 now. Is there a mirror somewhere else? Tks!
I think someone uploaded it to github somewhere, but I don’t have a link right now. Web site should be back up soon.
Hi, I just saw this project and I am super excited to try this on my own 🙂
But the problem is that the link http://idav.ucdavis.edu/~okreylos/ResDev/SARndbox
is no longer available. Can you please help me get this software, and give any advice?
Thanks!
Pingback: From-scratch AR Sandbox Software Installation | Knowledge
Everything was going fine until I ran into this problem
I think I got the same problem. Typing the following, I get connected but then get a 404 error:
wget http://idav.ucdavis.edu/~okreylos/ResDev/SARndbox/SARndbox-1.5-001.tar.gz
The web server was down two weeks ago, but I just clicked on your link and it downloads fine.
Hi. I managed to build a sandbox following your description – thank you for the tips! I tried to register for the sandbox forum but did not receive an email with my password. Help please (via e-mail). Greetings from Poland.
Hi, my engineering class and I have recently built an AR Sandbox and we are really enjoying it! I was just wondering if it is possible to change the color of the contour lines to make them more visible?
You can set the color by editing line 96 of the SurfaceAddContourLines.fs shader file, which is located in the share/SARndbox-<version>/Shaders/ subdirectory of the SARndbox source directory:
baseColor=vec4(0.0,0.0,0.0,1.0);
The first three numbers are the red, green, and blue color components, respectively, normalized to [0.0, 1.0]. After editing the file, you need to restart the SARndbox application to see the change.
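For example, to make the contour lines white instead of black, that line would become:

baseColor=vec4(1.0,1.0,1.0,1.0);

The fourth component is the alpha (opacity) value; leave it at 1.0.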
Hi, first off thank you for the support you have been giving people, it has been a really fun project to work on!
So my sandbox is built and calibrated; the red X was perfect in high/low positions across the sandbox when running the calibration. However, I think the plane equation I have is wrong, or the file is in the wrong location. Looking from where the Kinect is, the bottom of the sandbox has the water level much, much higher than at the top. Please see the photos below.
http://imgur.com/EAsu7iX,aOjwAxO,InR4Itd,PBIDwQy#0
The first pic is boxlayout.txt; the next 3 are the sandbox (sorry for the rotated images), but you can see what I mean about the water being super high at the bottom and really low towards the top. Based on your post above, I adjusted the first three numbers and the fourth number in the plane equation hoping to correct it, i.e. the (0, 1, 1) method described above, but out of the 50 different adjustments I made, nothing changed at all when relaunching the sandbox software. I am wondering if I have boxlayout.txt in the wrong location, hence the changes not being reflected in the sandbox? Its current path is
~/src/kinect-2.8-001/kinect/boxlayout.txt
Everything is installed under the src folder.
Once I get the plane level with the sandbox, how do you raise and lower the water level (sea level)? That is, what file is that in, and on what line do you adjust sea level?
And lastly, I am making this to simulate lava flows, so I want to use a color scheme of grays and change the water to orange/yellow for lava. I saw your post above about changing contour line colors, which I tested and it works. Can you elaborate on what file you have to edit, and which lines, to adjust the colors for terrain elevations? I would really appreciate it.
Thank you so much for all the work you have done on this.
Josh
OK, I re-read the above posts and searched for boxlayout.txt and found it in another location. I copied the good data into that file and now the plane is no longer tilted. Yay!
I have now been in the heightcolormap file for about an hour messing with elevations and colors, and have made my volcano file and an island/mountain file. Yay!
So I guess my only question is: where is the setting to change the color of the water for the lava, i.e. what file and line? I’m pretty sure I read through this and did not find that info. Once again, thank you for your time.
Hi
I am new here, and before I go ahead and possibly make one of these, I want to know:
1) Is it possible to do some programming on the project? I am thinking of an idea and want to know what language I can use to build on the system, or what language it is written in.
2) Are there any plans to create one for the Xbox One?
Thanks
Greg
1) The software is written in C++.
2) I have a basic driver for Xbox One Kinects, but it is not yet integrated into the software stack.
Okay, thanks for the reply. I know Java and Python; guess I will have to start learning C++ on my days off from work and while building the sandbox. Will be back with more questions then, no doubt.
Thanks
Greg
Hi forum,
I find this very interesting and would like to experiment with it to design some of my tutorials.
However, to be more specific and to avoid any hiccups, I would be grateful for some recommended specifications for the items below.
1. 3D Camera (model, approx cost)
2. Projector (model, approx cost)
3. Sand (is it a special type of sand? If yes, where can I buy it, and at what cost?)
Thanking you
Sandeep
Find all your answers on the project’s instructions page.
I’m working to put together specs for a PC to run the software. Would there be any issues with utilizing a Quadro card as opposed to a GeForce card?
No, the Quadro will work fine, but you most probably won’t get any benefits from it, either. Bottom line, you’ll be paying ten times as much for the same result. Unless you already own a Quadro, of course.
What aspects of the video card is the software utilizing? Is it the amount of VRAM on the card, the bus width (128-bit, 192-bit, 256-bit, etc.), core speed, memory speed? I read a comparative blog you wrote about the differences between Quadro and GeForce cards. I work for the City, and the department I support wants to set up an exhibit. They’ve given me a budget to work with that I’m trying to keep within. The hard part is they want it to run from a laptop, so that everything is self-contained within the exhibit without an external monitor needing to be set up each time calibration is needed. Obviously this makes it slightly more expensive, since laptops are more costly than desktop setups. The vendors we utilize unfortunately can’t quote me pricing for consumer-grade laptops, only business grade, and the closest to the specs mentioned for the sandbox would be an engineering-grade laptop with a Quadro or FirePro card in it.
So I’m just trying to understand a little bit about the specs and the utilization of the hardware by it, so I can properly assess which hardware to purchase.
Thanks,
It’s hard to break it down by specs; only a benchmark can really tell how each individual system performs. As a rough guide, the most important criteria are number of CUDA cores and core clock. The code has to do a certain number of calculations per second, and the number it can do is roughly clock speed times number of cores (assuming that the cores are identical between systems, which they happen to be between Quadro and GeForce). VRAM is a binary threshold: you need a certain amount to represent the simulation state in memory. If you have less VRAM, it won’t work at all; if you have more VRAM than you need, it won’t improve performance at all. The AR Sandbox uses less than 2GB of VRAM with standard settings (probably much less than 2GB, but I don’t have exact numbers at hand).
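As a back-of-the-envelope example (the clock numbers are purely illustrative, just to show the arithmetic): a card with 1280 cores running at 1.0 GHz gives you roughly 1280 “core-GHz,” while one with 576 cores at 0.7 GHz gives roughly 400, so you would expect the first card to run the water simulation about three times faster, everything else being equal.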
Would you be able to buy from Dell, specifically a gaming laptop from their Alienware line (this one here seems pretty high up in specs, for $1500), or from MSI or Acer? You could get one with a high-end latest-generation mobile GeForce. I think the most expensive ones run up to $2000, probably still less than a business laptop.
I understand that you work under certain constraints, but in general I advise against using a laptop to run the sandbox. Not only do you pay much more for the same performance, but you might also run into reliability issues such as overheating. There are a couple of users on the AR Sandbox Forum who use laptops. You could ask there for their experiences.
Thanks for the info. I think this definitely helps me better understand what I need.
Dell is actually our primary computer vendor. They can’t quote me a business price, as their Alienware computers are consumer grade, and they wanted to quote me pricing for the Precision line instead. The issue I was running into, especially now knowing more about the software, is how the cores compare between the two cards. Comparably, the GeForce 970M has 1280 CUDA cores and 3GB VRAM, whereas, say, the Quadro K2100M has only 576 parallel CUDA cores. Is the parallel nature of the Quadro cards the equivalent of, say, (576 × 2), or are you literally looking at less than half the number of CUDA cores on that card? Because to get close to the same number of CUDA cores on a Quadro, you are looking more along the lines of the K4100M, and roughly a $2200 laptop instead of $1500-1700.
Do you know how that works between “Parallel cores” on the Quadro cards and the regular cores on the GeForce cards?
They can’t quote me a business price, as their Alienware computers are consumer grade, and they wanted to quote me pricing for the Precision line instead.
Jeez.
are you literally looking at less than half the number of CUDA cores on that card
That’s right; it’s just a phrasing difference. All GPU cores are parallel. A Quadro K2100M should offer less than half the performance of a GeForce 970M, as I think it also has a lower core clock. In general, Quadros and GeForces use the same GPU architecture, just aimed at different markets. The lowest-level Quadros will be comparable to the lowest-level GeForces, and the same is true at the highest level. A GeForce 970M is pretty high up in the mobile range, and you’d have to go to a Quadro K4100M or even K5100M to get the same computing power.
Hello, I want to build this augmented reality sandbox for a project at school, but the problem is that I don’t even know where to start. For example, I don’t know whether the Kinect needs to be connected to the data projector or to the laptop, and I don’t know what cable to use. I have an Xbox Kinect 3D (model 1414), a data projector, and a MacBook (I don’t know if it’s going to work with this computer). I really need your help; can you contact me by email, please?
You need to connect your Kinect to a USB port on your computer. The Kinect does not have a standard USB plug, but there should have been a USB adapter with an AC power plug in the box.
The AR Sandbox software should work on Mac OS X, but I haven’t tested it in a long time.
Hello Oliver, it’s me. I really need your help. Do we have to connect the data projector to the computer?
Yes, you do. Ideally, you connect the projector via an HDMI cable. There are more details on the instructions page.
What’s the connector between the Xbox Kinect and the computer called? Because I don’t have one. Thanks!
It doesn’t have a specific name. The connector (from the Kinect’s non-standard plug and a plug-in DC adapter to a standard USB 2 plug) used to be packaged in the box when you bought an Xbox 360 Kinect, but that no longer seems to be the case.
I have no experience with third-party adapters, which you can buy from Amazon.com for example. However, there are several threads about that issue on the AR Sandbox support forum. It seems not all adapters work well, and at least one post on the forum has a link to a shopping page for one that works.
Hello! It’s 2023 now, but would you mind telling me the adapter you used and how the whole project worked on a Mac? I really appreciate your help!
Can we use a Mac laptop, or is a PC necessary? Also, we have an Optoma HD720X DMD projection display (the model HD70); can we use it? You can search for the model on the internet.
The software should work on a Mac laptop, but I haven’t tried it in a while, and performance might not be good. Your projector will be fine.
Hello Oliver! I just have another question regarding the graphics card. Is it really necessary? Because it is quite expensive.
If you want to run the water simulation, a dedicated graphics card is definitely necessary. If you only want to run topographic shading and contour lines, you might be able to get away with using the integrated GPUs on modern Intel CPUs.
To that end, might Intel Iris (integrated graphics on CPU but allegedly improved over older hardware) be sufficient? Have you heard of anyone trying this with something like a Raspberry Pi, Intel NUC, or Compute Stick? Thanks!
There has been a lot of discussion about this on the AR Sandbox support forums. Intel NUC seems to work, but Raspberry Pi and Pi2 don’t. The latter only support OpenGL ES in hardware, therefore the AR Sandbox runs in software-emulated mode on an underpowered CPU.
What’s the cheapest model of a graphics card that I can get here in Prague ?
Or could we use one that’s already integrated in the PC?
You can, of course, try that one first and see if it works well enough.
So I suppose the computer needs to be a fairly recent, good model, because if it’s not, the graphics won’t be good enough?
Because what I don’t understand is why people buy a graphics card that costs $300 when they can simply use the one in their laptops.
An integrated graphics card, typically a part of the CPU itself as in Intel’s HD graphics adapters, might be able to run the topography color and contour line components of the AR Sandbox, but it will not be able to run the water simulation. For that, you need a dedicated high-performance graphics card.
Hello, it’s me again! I wanted to ask you what the calibration is for, because I didn’t quite understand it. And also, what are the roles of the Kinect and the projector in the AR sandbox?
The Kinect captures the sand surface as a three-dimensional object, so that the computer can use it as a basis to create topographic colors, contour lines, and as ground for the water simulation. The projector’s job is to take the images created by the computer, and paint them onto the real sand surface.
Calibration is the process that lines up what the Kinect sees and what the projector paints, so that things show up in exactly the right places. For example, if you build a hill in the sandbox, the top of the hill should have a different color than the base, and there should be rings of contour lines running around the hill. If you make a small hole (like poking your finger into the sand), the sand inside the hole should have a different color.
So can we use only the topographic software? Because we will not be able to get a high-end graphics card. And if I install Linux on Mac OS X, will it work?
It will not work without a high-end graphics card. With a mid-range gaming graphics card, you can probably do it without the virtual water, but with the water, you need about a $1,600 computer to run this. There’s no way around that; it requires massive graphics processing to accomplish.
The “dry” sandbox can run off the integrated graphics processor that’s part of Intel Core CPUs. You only need a discrete graphics card to run the water simulation, but then you want a powerful one. We recommend any brand of Nvidia GeForce GTX 970, which sell for around USD 300 right now (April 2016).
We recently assembled a new computer with the recommended specs (Intel Core i5 4690K @ 3.5 GHz, 8GB RAM, 60GB SSD, Nvidia GeForce GTX 970) for the Washington DC exhibits, for a total cost of USD 766.69 including tax and shipping from newegg.com.
Hi, thank you for sharing your knowledge.
I need an AR sandbox urgently in November. Can you tell me where I can find a company or a person who can build it for me in the UAE?
My budget is 2500 to 3000 USD
(9000 to 12000 AED).
Hi Suaad, I am working on an AR sandbox in Jeddah, Saudi Arabia.
My budget is almost the same as yours.
I think we may help each other on this.
Hi! I’m wondering if you know whether anyone has attempted (or whether it would be possible) to make a much smaller sandbox than the ones I have seen, as if it were a mini zen garden on an individual’s desk, using a small projector? Do you know if that would be possible, a project half the size or smaller of the ones I am typically seeing? Thank you,
– Cassie
The primary limiting factors are camera size and scanning range and projector size. There are small projectors, but they are typically LED-based and have much lower brightness (<300 lumens vs the current projector's 3200 lumens). This might still work in a darkened room.
The Kinect camera doesn't come in a smaller form factor, and in addition it has a minimum scanning distance of approximately 0.5m. If you push it towards the sand surface as far as possible, you'd end up with a sandbox size of 0.5m x 0.375m. The smallest 3D camera I've used is Intel's RealSense R200 camera (pictured here), but it has a minimum scanning distance of about 0.7m.
Thanks, that makes sense. Since I posted that I’ve seen more videos and read more documentation where I can see the limitations the Kinect runs into when people try to do finer-detail structures (such as pushing a 3D printed topography model underneath it).
There is a wide-angle lens adapter available for the Kinect, that allows players using it as designed to stand closer to it (for smaller playspaces). However, it can affect resolutions and sensitivity, and I have a feeling there aren’t many who’ve tried using it in this context…so you’d be blazing a new path!
There’s an issue with that. The AR Sandbox relies on 1:1 mapping between the real sand surface and the augmented reality projection, which in turn relies on undistorted, to-scale 3D reconstruction inside the Kinect camera. This reconstruction is based on known optical properties of the IR pattern emitter and the IR camera (which are calculated per-device during factory calibration). Adding third-party lenses to one and/or the other will change those properties. Pattern-based skeletal tracking, as used by Xbox Kinect games, might still sort-of kind-of work, but 3D geometry reconstruction will most probably not. Custom intrinsic calibration, as demonstrated in this video, might be able to adapt to custom lenses, but if those lenses introduce non-linear distortion (and the kind of cheap lenses used in available adapters most probably would), then all bets are off.
In short, I strongly advise against messing with the Kinect’s optics when using it with an AR Sandbox.
When it comes to throw ratios, I understand that 4:3 is ideal, but I am seeing throw ratios on projectors such as .51 or .49… Do I want to make sure it is close to 1.0 for the throw ratio (assuming I am building to the standard size rather than the smaller AR Sandbox I mused about)?
Thanks for the website and the inspiration; this has been an exciting day exploring all of this 🙂
Those are two different things. One is aspect ratio, the ratio between the width and height of the projected image. A 1024×768 projector has an aspect ratio of 4:3, a 1920×1080 projector has an aspect ratio of 16:9. Throw ratio, on the other hand, is the ratio between distance from the projector to the screen and width of the projected image. It determines how far away a projector has to be to create an image of a desired size.
Ideally, the throw ratio of an AR Sandbox projector should match the field-of-view of the Kinect camera, which is close to 1:1. Projectors with throw ratios close to 1 are usually referred to as “short-throw.” Standard projectors, for home theater or business applications, generally have throw ratios upwards of 1.7. The .51 or .49 numbers you are seeing are probably inverse throw ratio, i.e., image width divided by projection distance.
With a 1:1 short throw ratio projector, creating an image 40″ wide (and 30″ tall with a 4:3 aspect ratio) requires a projection distance of 40″.
Hi, I was wondering what the initial cost is to make the completed sandbox. Thanks.
That depends on a lot of circumstances, but if you build everything yourself, it’s between USD 2000 and USD 3000, assuming you can’t use components you already have.
Pingback: Build An Augmented Reality Sandbox With Real-Time Topography | Lifehacker Australia
Hello,
I’ve been trying to register to the Lake Visualization forum but it isn’t sending out registration emails to me, so here goes:
I have tried extremely hard to get the projector recommended in your writeups. Unfortunately, I have now had a third projector seller (eBay, Amazon, etc.) in a row cancel the purchase, claiming they do not actually have it… I am trying to figure out what the best substitute would be, but I’m having a hard time obtaining the recommended BenQ short-throw projector. I even contacted BenQ and explained the issue, and they said they do not have any refurbs to sell me. What do you think my best option for an alternative projector would be, and how would that change the design of the AR box?
Thank you,
– Cassie
Hello,
How do we access AR/VR mode? We are hoping to experiment with the dual-mode feature to view the terrain and water flow on a secondary display.
The AR/VR feature is in the yet-unreleased version 2.0 of the SARndbox package. With the current version you can already open multiple windows, but they will all show the same thing.
Hi sir, thank you very much for sharing your knowledge with us. I just want to know whether there is any option for converting from water mode to lava mode. (Can you please tell me the procedure?)
Hello,
Thanks a lot for everything you do!
First, excuse my poor English…
I would like to know whether it is possible to make a sandbox with half the dimensions (in all dimensions)?
Thanks a lot for your answer.
Regards.
Tibo
You can make the box smaller, but the Kinect has a minimum scanning distance of about 50cm. Meaning, if you scale all dimensions of the setup to 1/2, the sand surface will be right up to that limit, and the sandbox won’t work. You need to keep the Kinect high enough above the sand, which will lead to overscan, but it’s not a fundamental problem.
Thank you for the answer.
My English is poor, but if I understand correctly, I can, but I must make some adjustments (the Kinect must be more than 50cm above the sand)… Right?
Another question, please: can I use something other than sand? The aim is to have something more transportable…
Thanks a lot!
Yes, the Kinect needs to be 50cm above the highest level you want to scan, including the space where you hold your hand to make it rain.
You can use any kind of material, but it should be light-colored to best reflect the projected colors.
Thanks for everything…
I will try to make my own sandbox now!
Hello,
Firstly thank you, I have had so much fun building and running the sandbox.
We have the sandbox up and running fine, but I am still unsure whether some settings can be changed and, if so, how to do it. Are there instructions anywhere for changing such things?
Can you change rainfall intensity?
Can you set how long it rains for or does this depend on how long the button is held?
Can the flow rate of the water be changed?
How do you turn the water into lava? I presume this is just a colour change?
How do you change map colours?
Are there any other features that can be changed that I have not thought about? It would be really interesting to know all the different things you can do and show with the sandbox.
You can change rainfall strength and many other parameters via SARndbox’s command line. Run SARndbox -h from a terminal to see all recognized options.
Rainfall duration is always how long the assigned button is pressed, or how long your hand is held above the sandbox.
There is a dialog window inside SARndbox where you can change fluid viscosity and an overall simulation time scale factor.
You can create and load custom height color maps via the command line. The color map format is very straightforward: a text file with one map entry per line: elevation relative to base plane, followed by red, green, and blue color components between 0 and 255 each.
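As a sketch (the elevations below are in centimeters and purely illustrative; scale them to your camera’s unit and your box’s actual elevation range), a minimal custom color map file could look like this:

-10.0   0   0 128
0.0   0 128 255
5.0   0 160  60
15.0 200 180 120
25.0 255 255 255

This would render deep areas blue, the zero level cyan, low terrain green, higher terrain tan, and the peaks white, with colors blended between the listed elevations.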
There is a thread about how to change the water to lava somewhere on the AR Sandbox support forum.
Can you please help me with how I would install the sandbox software, how to run it, and which software I need? I want to present it at a technology exhibition. Please help me.
Please see the AR Sandbox project page for general instructions and a shopping list, and the AR Sandbox support forum for detailed software installation instructions.
Hey, firstly, thank you Oliver for the software and the whole project. I’m in the process of building my second sandbox, but I’m having trouble downloading the software; is there a planned outage?
Also, have you tried the software with the new Pascal Nvidia cards?
Thanks for the help
Pete
Our servers seem to be down at the moment.
Pascal cards should work just fine.
Hi
I need help; I don’t have any idea how to make the sandbox.
Is it easy to make one? Or is there any link showing me how to make one step by step…
I need to use it with autistic children.
Thank you
You should check out the installation instructions page on the main AR Sandbox site, and also the AR Sandbox support forum.
Hello, it is a great program! Many thanks. Have you perhaps, since last year, added Xbox One drivers to the program?
Best regards, keep working 😉
Hi,
Is the program open source?
Can I edit it to change the parameters etc.?
If yes, how can I, and where does it state its availability (open source etc.)?
thank you
Above, I meant to say “how can I edit it”…
The AR Sandbox software is released under the open-source GNU General Public License.
Pingback: Faire un bac à sable en réalité augmenté | PVT
Hello, I’m a technician at the University of Padova, Italy. At the Department of Geosciences we managed to set up an augmented reality sandbox using a Lenovo ThinkStation computer with an Intel i5 processor, 8 GB RAM, an Nvidia GTX 610 graphics card, and Linux Mint 17.2 with SARndbox 2.3 installed. I followed all the steps shown in your tutorial to complete the calibration, and the software works well at creating level surfaces, but the water functionality doesn’t work: the software completely ignores the shadow if I put my hand between the projector and the box. I can only define a “water tool”, with key 1 flooding the whole box interior and key 2 drying the box, but there’s no automatic water adding. How can I solve this problem? Thanks, regards.
Are you making the “rain gesture,” where you hold your hand flat, palm down, with all five fingers spread out?
Hello,
I’m a geoscience student currently working at the German GeoForschungsZentrum GFZ in Potsdam. We’ve been using the sandbox for about a year now; it’s wonderful and works fantastically.
I am now allowed to use it for teaching purposes (courses on basic mapping and orientation) and plan on using the sandbox especially for explaining contour lines to children. The problem is that I so far haven’t figured out how to disable the water flow simulation, as it’s not necessary and would mostly be distracting.
Are there any shortcuts to turn it off completely?
I hope it’s OK that I’m asking here, and I’m sorry if this question has been answered before; I haven’t actually found anything on that (simple?) matter in the forums.
Thanks in advance
and greetings from Germany
Add -ws 0.0 0 to SARndbox’s command line to disable water simulation.
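For example (a sketch; combine this with whatever other options you normally use), a contour-lines-only session could be launched as:

./bin/SARndbox -uhm -fpv -ws 0.0 0

The two numbers after -ws are, roughly, the water simulation speed and the maximum number of simulation steps per frame; setting both to zero switches the simulation off.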
Pingback: Build an Augmented Reality Sandbox with Real-Time Topography - Lifehacker Guru
Pingback: Münchner GI-Runde mit Echtzeitplanung | Peter Zeile
Hi,
We built an Augmented Reality Sandbox to the specifications on your website. We installed all of the software and calibrated the sandbox and the Kinect as described above. However, when we run the actual sandbox, the topographic map works, but it is projected in black and white; the contour lines are not displayed in color. Any idea where we may have made a mistake or what could be going on?
Thanks for your help!
Is there a way to temporarily disable the contour lines, but keep the color map? Thanks!
Thank you for the information
Hello! First of all, is there any way to make the sandbox work with the Intel RealSense D415?
Also, what are the biggest and smallest scales we can make the sandbox in?
Or is it fixed and can’t be made into different sizes?
Someone would have to write a driver. The Kinect package already supports Intel RealSense cameras using the first-generation RealSense SDK, but D415 and D435 only work with the second-generation SDK.
Hey,
I built an AR Sandbox, and first of all thanks for your detailed instructions; everything worked out without any problems!
I now want to tweak the sandbox a bit and would like to read the official AR Sandbox Forum (https://arsandbox.ucdavis.edu/forums/forum/ar-sandbox-forum/) for that, but the website is not responding. I have tried over the last 4 weeks but nothing has changed; the servers aren’t responding, not even with a VPN. Is there a problem with my location (Germany), or is it a problem with the website?
I am at Step 7, calibrating the projector. I assigned keys 1 & 2 to record and move the tie points, except the tie points don’t move from position 1. Any insights?
Also, I have a triple USB pedal. The pedals are recognized as A, B, and C; how can I change this in Linux Mint to 1, 2, and 3?
Does the calibrator recognize the disk target, indicated by showing a green disk overlaying the previously yellow blob? The calibration won’t move forward otherwise.
Regarding your USB pedal: There is probably a way to change the readout, but the easiest way is to simply bind whatever functions you want to bind to the pedal to the readouts it generates.
Hello, I’m just starting my project and I’m selecting the laptop (I know a PC is better, but for some reasons I need to be able to carry it), and I’m wondering whether I can run the software on a Dell Alienware (13th-gen i7, Nvidia GeForce 4070, 1TB storage and 32 GB memory) if I install Linux on it, as a dual-boot system? Also, the UC Davis website specifies that the laptop shouldn’t have NVIDIA Optimus Technology incorporated, but isn’t that something that can be switched on/off manually?
Really appreciate your help!