Build your own Augmented Reality Sandbox

Update: There is now an AR Sandbox support forum with detailed complete installation instructions starting from a blank/new PC, and a video showing a walk-through of same instructions. You’re welcome to read the rest of this article for context and background information, but please ignore the outdated hardware recommendations and installation instructions below. Instead, use the up-to-date hardware recommendations from the AR Sandbox project page, and follow the instructions linked above.

Earlier this year, I branched out into augmented reality (AR) to build an AR Sandbox:

Photo of AR Sandbox, with a central “volcano” and several surrounding lakes. The topographic color map and contour lines are updated in real time as the real sand surface is manipulated, and virtual water flows over the real sand surface realistically.

I am involved in an NSF-funded project on informal science education for lake ecosystems, and while my primary part in that project is creating visualization software to drive 3D displays for larger audiences, creating a hands-on exhibit combining a real sandbox with a 3D camera, a digital projector, and a powerful computer seemed like a good idea at the time. I didn’t invent this from whole cloth; the project got started when I saw a video of such a system done by a group of Czech students on YouTube. I only improved on that design by adding better filters, topographic contour lines, and a physically correct water flow simulation.

The idea is to have these AR sandboxes as more or less unsupervised hands-on exhibits in science museums, and allow visitors to informally learn about geographical, geological, and hydrological principles by playing with sand. The above-mentioned NSF project has three participating sites: the UC Davis Tahoe Environmental Research Center, the Lawrence Hall of Science, and the ECHO Lake Aquarium and Science Center. The plan is to take the current prototype sandbox, turn it into a more robust, museum-worthy exhibit (with help from the exhibit designers at the San Francisco Exploratorium), and install one sandbox each at the three sites.

But since I published the video shown above on YouTube, where it went viral and gathered around 1.5 million views, there has been a lot of interest from other museums, colleges, high schools, and private enthusiasts to build their own versions of the AR sandbox using our software. Fortunately, the software itself is freely available and runs under Linux and Mac OS X, and all the hardware components are available off-the-shelf. One only needs a Kinect 3D camera, a data projector, a recent-model PC with a good graphics card (Nvidia GeForce 480 et al. to run the water simulation, or pretty much anything with water turned off) — and an actual sandbox, of course.

In order to assist do-it-yourself efforts, I’ve recently created a series of videos illustrating the core steps necessary to add the AR component to an already existing sandbox. There are three main steps: two to calibrate the Kinect 3D camera with respect to the sandbox, and one to calibrate the data projector with respect to the Kinect 3D camera (and, by extension, the sandbox). These videos elaborate on steps described in words in the AR Sandbox software’s README file, but sometimes videos are worth more than words. In order, these calibration steps are:

Step 1 is optional and will get a video as time permits, and steps 3, 6, and 8 are better explained in words.

Important update: when running the SARndbox application, don’t forget to add the -fpv (“fix projector view”) command line argument. Without it, the SARndbox won’t use the projector calibration matrix that you so carefully calibrated in step 7. It’s in the README file, but apparently nobody ever reads that. 😉

The only component that’s completely left up to each implementer is the sandbox itself. Since it’s literally just a box of sand with a camera and projector hanging above, and since its exact layout depends a lot on its intended environment, I am not providing any diagrams or blueprints at this point, except a few photos of our prototype system.

Basically, if you already own a fairly recent PC, a Kinect, and a data projector, knock yourself out! It should be possible to jury-rig a working system in a matter of hours (add 30 minutes if you need to install Linux first). It’s fun for the whole family!

131 thoughts on “Build your own Augmented Reality Sandbox”

  1. Pingback: Build your own Augmented Reality Sandbox | Doc-Ok.org

  2. Pingback: … and they did! | Doc-Ok.org

    • There is nothing special to enable water in the sandbox software; it’s on by default. You’ll need to set up a water simulation bounding box (part of the BoxLayout.txt file), as described in the documentation. To test the water simulation — rain can be a bit finicky — create a water tool: press and hold any key (say ‘1’), move the mouse to the “Manage Water” menu item, and release the key. When the dialog box pops up, press another key, say ‘2’. Then, when you press (and hold) ‘1’, the sandbox will be flooded with water; if you press (and hold) ‘2’, the sandbox will be drained.

      The biggest practical issue is graphics driver support. You need a powerful discrete graphics card, something like an Nvidia GeForce 480, and the vendor-supplied binary drivers, to run the shaders necessary for water simulation. There are good tutorials on installing the drivers for a variety of Linux distributions online.

    • The GPU water simulation code is based on GLSL shaders, not on CUDA or other GPGPU packages. I don’t know anything about nvidia-smi, but that’s probably why nothing shows up. If the water simulation were running on your CPU, you’d get maybe one new frame every 10 seconds or so.

      Another way to check is to use the nvidia-settings utility, click on the tab for your GPU (“GPU 0 – (GeForce GTX 680)” in my case), and look at the “GPU Utilization” value. When the water simulation code is running, it should be anywhere between 20% and 100%.

        • Oh, it’s you! I missed this comment originally and just saw it yesterday. I have a picture of your AR Sandbox on my external installations page, with a link to the article in your newspaper. Do you have a more appropriate / permanent web page to which I can link?

  3. Hello, I have this message:

    [root@localhost SARndbox-1.5]# ./bin/SARndbox -c 0
    0.155481 x 0.15066
    Caught exception Sandbox: Not all required extensions are supported by local OpenGL while initializing rendering windows

    can you help me?
    (it is with CentOS-Linux)

    Thank you

  4. Pingback: AR Sandbox Support Forum | Doc-Ok.org

  5. Hi,
    This is really amazing. I want to build a sandbox in a large format, 4x3m, for a light festival. Are there any limitations to the size of the Kinect’s field of view?
    best regards
    Edvin

    • 4x3m is much too large to be useful. The fundamental problem is the Kinect’s limited field of view and resolution. To cover a 4x3m sandbox, the Kinect would have to be mounted 4m above the sand, and 4m is way outside its usable distance range. A 2m×1.5m sandbox would already be very low-resolution; 1m×0.75m is about the ideal size.

      At the moment the only feasible approach for large-area sandboxes would be multiple Kinects, but that would be a major installation and maintenance problem, and the software doesn’t support it, either.
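The size limits described in this reply come down to simple viewing geometry, which can be sketched in a few lines of Python. The field-of-view and range figures below are rough assumptions (the Kinect v1’s horizontal depth field of view is commonly quoted around 57°, and its reliable scanning range tops out somewhere around 3.5m), not official specs:

```python
import math

def required_mount_height(sandbox_width_m, hfov_deg=57.0):
    """Distance the camera must be above the sand to cover a given width.

    hfov_deg is the Kinect v1's approximate horizontal depth field of
    view; treat it as an assumption, not a spec-sheet value.
    """
    return sandbox_width_m / (2.0 * math.tan(math.radians(hfov_deg / 2.0)))

# Approximate maximum reliable Kinect v1 scanning distance (assumption):
MAX_RANGE_M = 3.5

for width in (1.0, 2.0, 4.0):
    h = required_mount_height(width)
    verdict = "ok" if h <= MAX_RANGE_M else "out of range"
    print(f"{width:.1f} m wide sandbox -> camera at {h:.2f} m ({verdict})")
```

With these numbers, a 4m-wide sandbox needs the camera mounted close to 4m up, past the usable range, while a 1m-wide box needs less than 1m, matching the reply above.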

  6. Hi! I’ve compiled all the programs and have the system almost working. I’m testing on a wall for now and having some issues. Can you answer a few questions?

    1- When I run the sandbox (there’s no object on the wall), I don’t get the green textured image but a running simulation instead, with water and the full environment. What do you think is causing this?

    2- About the BoxLayout file, can you explain the coordinate system and how modifications to it are reflected in the sandbox simulation?

    3- How far can an object be from the wall? When I put my hand close to the wall it is recognized in the simulation, but if I put something thicker (about 8cm) it starts the rain simulation.

    I’ll put some images here from the calibration and the simulation running.

    http://imageshack.com/a/img537/2954/EyOQA6.png

    http://imageshack.com/a/img538/2831/wztoGI.png (Here in Step 5 my reconstruction always ends up over or behind the axes; I didn’t manage to place it between the two coordinate axes like in the video.)

    http://imageshack.com/a/img912/8937/tcojHD.png
    http://imageshack.com/a/img633/8228/JV5CRG.png

    Congratulations on your great work! I can’t wait to start it here.

    • 1: Without setting up a BoxLayout.txt file, the Sandbox software does not know what the zero-level plane relative to the camera is. The result will be some more or less random topography model.

      2: The first line sets up the geometric plane equation for the zero-level plane relative to the camera, in camera units (can be mm or cm, depending on camera model, in the current software version). The first three numbers define the plane’s orientation as a normal vector. For example, if the camera points straight down, then the vector should be (0, 0, 1) meaning that the terrain’s “up” direction points along the camera’s Z axis. If the camera points down at a 45 degree angle, the vector should be (0, 0.7071, 0.7071), etc. The fourth number is the distance from the zero-level plane to the camera coordinate system’s origin, which is the lens center of the (virtual) depth camera, in camera units. For example, if the camera points straight down, i.e., the normal vector is (0, 0, 1), and the camera is 120cm above the desired zero level, then the fourth number would be -120.0 (negative because the camera looks along the negative Z axis).
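The plane-equation arithmetic in this reply can be sketched in a few lines of Python. This is a hypothetical helper, not part of the SARndbox distribution; it just reproduces the examples in the text (a straight-down camera 120cm up, or a 45-degree tilt):

```python
import math

def boxlayout_plane(tilt_deg, height):
    """First line of BoxLayout.txt: normal vector and plane offset.

    tilt_deg: camera tilt from straight-down (0 = camera points straight down).
    height: perpendicular distance from the camera origin to the desired
    zero-level plane, in the camera's native unit (cm or mm).
    The offset is negative because the camera looks along its -Z axis.
    """
    t = math.radians(tilt_deg)
    normal = (0.0, math.sin(t), math.cos(t))
    return normal, -height

# Camera pointing straight down, 120 cm above the zero level:
print(boxlayout_plane(0.0, 120.0))    # ((0.0, 0.0, 1.0), -120.0)

# Camera tilted 45 degrees:
print(boxlayout_plane(45.0, 120.0))   # normal ~ (0, 0.7071, 0.7071)
```

The straight-down case gives exactly the (0, 0, 1) normal and -120.0 offset from the example above.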

      During calibration, you determine the plane equation as described, resulting in a distance value in the appropriate unit, i.e., centimeters or millimeters. You can then adjust the distance value to move the zero-level plane higher or lower, as convenient for your specific setup.

      3: The sandbox has a range of valid topographic elevation values, by default the range of the elevation color map. The software ignores surfaces outside that range. Similarly, there is a valid range for “rain objects.” Both ranges are relative to the zero-level plane defined in BoxLayout.txt, using the camera’s native measurement unit. The BoxLayout.txt file and height color map shipped with the SARndbox software are in centimeters. If your camera uses millimeters (check the distance value in your plane equation), you will have to multiply all elevation values in the color map file by ten.
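The centimeters-to-millimeters conversion mentioned above (multiplying all elevation values by ten) can be automated with a small script. This assumes the color map file uses a simple “elevation r g b” line format with ‘#’ comment lines, which matches how commenters below describe editing it; adjust if your file differs:

```python
def scale_color_map(in_path, out_path, factor=10.0):
    """Multiply the elevation (first) column of a height color map file
    by `factor`, e.g. 10 to convert centimeters to millimeters.
    Color columns and comment lines are passed through unchanged."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                fout.write(line)           # blank or comment line
                continue
            elevation = float(parts[0]) * factor
            fout.write(" ".join([f"{elevation:g}"] + parts[1:]) + "\n")

# Usage (hypothetical file names):
# scale_color_map("HeightColorMap.cpt", "HeightColorMapMM.cpt")
```

Only the first column is scaled; the color components stay in their 0-255 range, which is exactly the “multiply only the first column” fix a commenter arrives at further down.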

      Looking at your pictures, your camera works in millimeters. Do the adjustments I mention above, and also run “SARndbox -h” to see a full list of command line options. You need to pass some non-default values to make the software work well, such as a different contour line interval.

      • Thanks very much! You helped me a lot! I will change the settings here and tell you the results I get.

        Here’s a video of the simulation running (I made this video before this post).

      • Hi! I think I got this to work now. I made the changes you told me:

        1- In BoxLayout, I multiplied all the box coordinates (only the first line of the file) by 10.
        2- The default value of the argument was 0.75. I changed it to 7.5 when running the program, using -cls 7.5
        3- In HeightColorMap I multiplied only the first column by 10. When I multiplied all the values by ten the colors became strange. I’ll put the pictures here.

        * HeightColorMap with all values multiplied by ten:
        http://imagizer.imageshack.us/a/img633/1051/eKMfo7.png

        * With only first column multiplied:
        http://imageshack.com/a/img661/3143/IZPIzX.png

        Thanks very much again! And congratulations on this great work!

        • 2- The default value of the *contour line spacing* argument was 0.75. I changed it to 7.5 when running the program, using -cls 7.5

          My full line of execution is:

          primusrun ./SARndbox -cls 7.5 -uhm HeightColorMap.cpt

      • Wonderful. My other question is whether you think this is a feasible task for someone who has never done anything like this before. I don’t have experience with computer systems or programming. Is there enough direction in your videos and discussions that I could manage to build one of these? Or would I need someone with programming experience?

        • Hard to say for me, but it should be fine. There’s no programming knowledge required to set up an AR Sandbox, but you’ll have to be willing to work inside a terminal window, and ideally to install Ubuntu Linux.

          We have detailed installation instructions on the AR Sandbox Forum; you can check those out and see if they look doable.

        • No, it is probably not doable for someone with no experience in Linux. The instructions will say things like “Open a text editor and type blah blah blah.” If you don’t know how to open a text editor in Linux, you won’t be able to do this. In Linux, you cannot just double click the text editor. You have to type something like
          “cd Sandbox
          sudo nano master.txt
          ctrl-O
          ctrl-X” just to open the text editor and save the file. If that text didn’t make any sense, then you’re going to have a very tough time.

  7. Just wondering about costs, approx is fine 🙂 …I want to create a budget for myself and set a goal to save towards. I am blown away by this project and would really love to make one!!

    • Short-throw projector around USD 600, first-generation Kinect around USD 100. If you already have a computer you can use, the rest is a bunch of plywood, two-by-fours, duct tape, baling wire, and a few buckets of sand. If you want to run the water simulation in a responsive way, you’ll have to invest around USD 300 into a good gaming graphics card, such as an Nvidia GeForce GTX 770.

      • Got it. We’ll give it a try. Wondering how I can tailor the application to complement the rainforest environment and our eco approach. Any ideas?

        • That’s a good question. We haven’t thought about it in that context, and it’s probably not directly applicable as it is, but one potential addition we’ve discussed was a simplified simulation of vegetation growth in response to water availability. It’s a pretty big piece of work, though.

          • That would be useful for a site like this. It is for an eco reserve retreat center sitting on 250 acres of primary rainforest. Here is our website: http://www.samasati.com . Perhaps a simulation that can help us better manage our water resources would be helpful. We have been catching water on site for almost 20 years now.

  8. Hi, first up: thanks for your wonderful project!
    I’m in the middle of setting up my own sandbox for a school project, but unfortunately the server is still down.
    Any news on when it’s going online again?

  9. The site that was hosting your code seems to have taken it offline. The URL is returning a 404 now. Is there a mirror somewhere else? Tks!

  10. Pingback: From-scratch AR Sandbox Software Installation | Knowledge

  11. Hi. I managed to build a sandbox from your description – thank you for the tips! I tried to register for the sandbox forum but did not receive an email with my password – help, please (via e-mail). Greetings from Poland.

  12. Hi, my engineering class and I have recently built an AR Sandbox and we are really enjoying it! I was just wondering if it is possible to change the color of the contour lines to make them more visible?

    • You can set the color by editing line 96 of the SurfaceAddContourLines.fs shader file, which is located in the share/SARndbox-<version>/Shaders/ subdirectory of the SARndbox source directory:

      baseColor=vec4(0.0,0.0,0.0,1.0);

      The first three numbers are the red, green, and blue color components, respectively, normalized to [0.0, 1.0]. After editing the file, you need to restart the SARndbox application to see the change.

  13. Hi, first off thank you for the support you have been giving people, it has been a really fun project to work on!

    So my sandbox is built and calibrated; the red X was perfect in high/low positions across the sandbox when running the calibration utility. However, I think the plane equation I have is wrong, or the file is in the wrong location. Looking from where the Kinect is, the bottom of the sandbox has the water level much, much higher than at the top. Please see the photos below.

    http://imgur.com/EAsu7iX,aOjwAxO,InR4Itd,PBIDwQy#0

    The first pic is boxlayout.txt; the next 3 are the sandbox (sorry for the rotated images), but you can see what I mean about the water being super high at the bottom and really low towards the top. From your post above I adjusted the first 3 numbers and the 4th in the plane equation hoping to correct it, i.e. the (0,1,1) method described above, but of the 50 different adjustments I made, nothing changed at all when relaunching the sandbox software. I am wondering if I have boxlayout.txt in the wrong location, hence the changes not reflecting in the sandbox? Its current path is

    ~/src/kinect-2.8-001/kinect/boxlayout.txt

    everything is installed under the src folder.

    Once I get the plane level with the sandbox, how do you raise and lower the water level (sea level)? Like, what file is that, and what line do you adjust for sea level?

    And lastly, I am making this to simulate lava flows, so I want to use a color scheme of grays and change the water to orange/yellow for lava. I saw your post above about changing contour line colors, which I tested, and it works. Can you elaborate on what file and line you have to edit to adjust colors for terrain elevations? I would really appreciate it.

    Thank you so much for all the work you have done on this.

    Josh

    • OK, I re-read the above posts and searched for boxlayout.txt and found it in another location. I copied the good data into that file and now the plane is no longer tilted. Yay!

      Now I have been in the heightcolormap file for about an hour messing with elevations and colors, and have made my volcano file and an island/mountain file. Yay!

      So I guess my only question is where the setting is to change the color of the water for the lava: what file and line? I’m pretty sure I read through this and did not find that info. Once again, thank you for your time.

  14. Hi
    I am new here and before I go ahead and possibly make one of these, I want to know:
    1). Is it possible to do some programming on the project? I am thinking of an idea and want to know what language I can use to build on the system, or what language it is written in.
    2). Are there any plans to create one for the Xbox One?
    Thanks
    Greg

      • Okay, thanks for the reply. I know Java and Python; guess I will have to start learning C++ on my off days from work and while building the sandbox. Will be back with more questions no doubt.
        Thanks
        Greg

  15. Hi forum,

    I find this very interesting and would like to experiment with it to design some of my tutorials.

    However, to be more specific and to avoid any hiccups, I would be grateful for some recommendations on the specifications below.

    1. 3D Camera (model, approx cost)
    2. Projector (model, approx cost)
    3. Sand (is it a special type of sand?, if yes where can I buy it? cost?)

    Thanking you

    Sandeep

  16. I’m working to put together specs for a PC to run the software. Would there be any issues with utilizing a Quadro card as opposed to a GeForce card?

    • No, the Quadro will work fine, but you most probably won’t get any benefits from it, either. Bottom line, you’ll be paying ten times as much for the same result. Unless you already own a Quadro, of course.

        • What aspects of the video card is the software utilizing? Is it the amount of VRAM on the card, the bus width (128-bit, 192-bit, 256-bit, etc.), core speed, memory speed? I read a comparative blog post you wrote about the differences between the Quadro and GeForce cards. I work for the City, and the department I support wants to set up an exhibit. They’ve given me a budget that I’m trying to keep within. The hard part is that they want it to run from a laptop, so that everything is self-contained within the exhibit without needing to set up an external monitor each time calibration has to be done. Obviously this makes it slightly more expensive, since laptops are more costly than desktop setups. The vendors we use unfortunately can’t quote me pricing for consumer-grade laptops, only business-grade, and the closest to the specs mentioned for the sandbox would be an engineering-grade laptop with a Quadro or FirePro card in it.

        So I’m just trying to understand a little bit about the specs and the utilization of the hardware by it, so I can properly assess which hardware to purchase.

        Thanks,

        • It’s hard to break it down by specs; only a benchmark can really tell how each individual system performs. As a rough guide, the most important criteria are number of CUDA cores and core clock. The code has to do a certain number of calculations per second, and the number it can do is roughly clock speed times number of cores (assuming that the cores are identical between systems, which they happen to be between Quadro and GeForce). VRAM is a binary threshold: you need a certain amount to represent the simulation state in memory. If you have less VRAM, it won’t work at all; if you have more VRAM than you need, it won’t improve performance at all. The AR Sandbox uses less than 2GB of VRAM with standard settings (probably much less than 2GB, but I don’t have exact numbers at hand).

          Would you be able to buy from Dell, specifically a gaming laptop from their Alienware line (this one here seems pretty high up in specs, for $1500), or from MSI or Acer? You could get one with a high-end latest-generation mobile GeForce. I think the most expensive ones run up to $2000, probably still less than a business laptop.

          I understand that you work under certain constraints, but in general I advise against using a laptop to run the sandbox. Not only do you pay much more for the same performance, but you might also run into reliability issues such as overheating. There are a couple of users on the AR Sandbox Forum who use laptops. You could ask there for their experiences.

          • Thanks for the info. I think this definitely helps me better understand what I need.

            Dell is actually our primary computer vendor. They can’t quote me a business price, as their Alienware computers are consumer grade, and they wanted to quote me pricing for the Precision line instead. The issue I was running into, especially now knowing more about the software, is how the cores compare between the two cards. For comparison, the GeForce 970M has 1280 CUDA cores and 3GB VRAM, whereas, let’s say, the Quadro K2100M has only 576 parallel CUDA cores. Is the parallel nature of the Quadro cards the equivalent of, say, (576 * 2), or are you literally looking at less than half the number of CUDA cores on that card? Because to get close to the same number of CUDA cores on a Quadro, you are looking more along the lines of the K4100M, and roughly a $2200 laptop instead of $1500-1700.

            Do you know how that works between “Parallel cores” on the Quadro cards and the regular cores on the GeForce cards?

          • They can’t quote me a business price as their Alienware computers are consumer grade and wanted to quote me pricing for the Precision line.

            Jeez.

            are you literally looking at less than half the amount of CUDA cores on that card

            That’s right; it’s just a phrasing difference. All GPU cores are parallel. A Quadro K2100M should be less than half the performance of a GeForce 970M, as I think it also has a lower core clock. In general, Quadros and GeForces are the same GPU architecture, just aimed at different markets. The lowest-level Quadros will be comparable to the lowest-level GeForces, and the same is true at the highest level. A GeForce 970M is pretty high up in the mobile range, and you’d have to go to a Quadro K4100M or even K5100M to get the same computing power.
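The cores-times-clock rule of thumb from the earlier reply makes this comparison concrete. The core counts and clock speeds below are approximate published figures, and should be double-checked against current spec sheets rather than taken as authoritative:

```python
# Rough relative GPU throughput: cores times clock, per the rule of
# thumb above. Core counts and clocks are approximate published
# figures (assumptions), not official spec-sheet values.
gpus = {
    "GeForce 970M":  {"cores": 1280, "clock_mhz": 924},
    "Quadro K2100M": {"cores": 576,  "clock_mhz": 654},
}

def throughput_proxy(spec):
    # Not real GFLOPS; just a cores-times-clock number for comparison.
    return spec["cores"] * spec["clock_mhz"]

ratio = throughput_proxy(gpus["GeForce 970M"]) / throughput_proxy(gpus["Quadro K2100M"])
print(f"GeForce 970M is roughly {ratio:.1f}x a Quadro K2100M by this estimate")
```

With these numbers the 970M comes out around three times the K2100M, consistent with “less than half the performance” for the Quadro.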

  17. Hello, I want to build this augmented reality sandbox for a project in school, but the problem is that I don’t even know where to start. For example, I don’t know if the Kinect needs to be connected to the data projector or the laptop, and I don’t know what cable to use. I have an Xbox Kinect 3D (model 1414), a data projector, and a MacBook (I don’t know if it’s going to work with this computer). I really need your help; can you contact me by email please?

    • You need to connect your Kinect to a USB port on your computer. The Kinect does not have a standard USB plug, but there should have been a USB adapter with an AC power plug in the box.

      The AR Sandbox software should work on Mac OS X, but I haven’t tested it in a long time.

    • It doesn’t have a specific name. The connector (from the Kinect’s non-standard plug and a plug-in DC adapter to a standard USB 2 plug) used to be packaged in the box when you bought an Xbox 360 Kinect, but that no longer seems to be the case.

      I have no experience with third-party adapters, which you can buy from Amazon.com for example. However, there are several threads about that issue on the AR Sandbox support forum. It seems not all adapters work well, and at least one post on the forum has a link to a shopping page for one that works.

  18. Can we use a Mac laptop, or is a PC necessary? Also, we have an Optoma hd720x DMD projection display, model HD70; can we use it? You can search for the model on the internet.

  19. Hello Oliver! I just have another question regarding the graphics card. Is it really necessary? Because it is quite expensive.

    • If you want to run the water simulation, a dedicated graphics card is definitely necessary. If you only want to run topographic shading and contour lines, you might be able to get away with using the integrated GPUs on modern Intel CPUs.

      • To that end, might Intel Iris (integrated graphics on the CPU, but allegedly improved over older hardware) be sufficient? Have you heard of anyone trying this with something like a Raspberry Pi, Intel NUC, or Compute Stick? Thanks!

      • So I suppose the computer needs to be a recent, good model, because if it’s not, the graphics won’t be good enough?
        Because what I don’t understand is why people buy a graphics card that costs $300 when they can simply use the one in their laptops.

        • An integrated graphics card, typically a part of the CPU itself as in Intel’s HD graphics adapters, might be able to run the topography color and contour line components of the AR Sandbox, but it will not be able to run the water simulation. For that, you need a dedicated high-performance graphics card.

          • Hello it’s me again! I wanted to ask you what the calibration is for because I didn’t quite understand it. And also what is the role of the Kinect and the projector in the Ar sandbox

          • The Kinect captures the sand surface as a three-dimensional object, so that the computer can use it as a basis to create topographic colors, contour lines, and as ground for the water simulation. The projector’s job is to take the images created by the computer, and paint them onto the real sand surface.

            Calibration is the process that lines up what the Kinect sees and what the projector paints, so that things show up in exactly the right places. For example, if you build a hill in the sandbox, the top of the hill should have a different color than the base, and there should be rings of contour lines running around the hill. If you make a small hole (like poking your finger into the sand), the sand inside the hole should have a different color.

  20. So can we use only the topographic software? Because we will not be able to have a high-end graphics card. And if I install Linux on Mac OS X, will it work?

    • It will not work without a high-end graphics card. With a mid-range gaming graphics card, you can probably do it without the virtual water, but with the water you need about a $1,600 computer to run this. There’s no way around that; it requires massive graphics processing to accomplish.

      • The “dry” sandbox can run off the integrated graphics processor that’s part of Intel Core CPUs. You only need a discrete graphics card to run the water simulation, but then you want a powerful one. We recommend any brand of Nvidia GeForce GTX 970, which sell for around USD 300 right now (April 2016).

        We recently assembled a new computer with the recommended specs (Intel Core i5 4690K @ 3.5 GHz, 8GB RAM, 60GB SSD, Nvidia GeForce GTX 970) for the Washington DC exhibits, for a total cost of USD 766.69 including tax and shipping from newegg.com.

  21. Hi, thank you for sharing your knowledge.
    I urgently need an AR sandbox in November. Can you tell me where I can find a company or a person who can build it for me in the UAE?
    My budget is 2500 to 3000 USD
    (9000 AED to 12000 AED)

    • Hi Suaad, I am working on an AR sandbox in Jeddah, Saudi Arabia.
      My budget is almost the same as yours.
      I think we may help each other on this.

  22. Hi! I’m wondering if you know whether anyone has attempted (or whether it would be possible) to make a much smaller sandbox than the ones I have seen (as if it were a mini zen garden on an individual’s desk), using a small projector? Do you know if that would be possible: a project half the size or smaller of the ones I am typically seeing? Thank you,

    – Cassie

    • The primary limiting factors are camera size, scanning range, and projector size. There are small projectors, but they are typically LED-based and have much lower brightness (<300 lumens vs. the current projector’s 3200 lumens). This might still work in a darkened room.

      The Kinect camera doesn’t come in a smaller form factor, and in addition it has a minimum scanning distance of approximately 0.5m. If you push it as close to the sand surface as possible, you’d end up with a sandbox size of 0.5m x 0.375m. The smallest 3D camera I’ve used is Intel’s RealSense R200 camera (pictured here), but it has a minimum scanning distance of about 0.7m.

      • Thanks, that makes sense. Since I posted that, I’ve seen more videos and read more documentation where I can see the limitations the Kinect runs into when people try finer-detail structures (such as pushing a 3D-printed topology model underneath it).

    • There is a wide-angle lens adapter available for the Kinect that allows players using it as designed to stand closer to it (for smaller playspaces). However, it can affect resolution and sensitivity, and I have a feeling there aren’t many who’ve tried using it in this context…so you’d be blazing a new path!

      • There’s an issue with that. The AR Sandbox relies on 1:1 mapping between the real sand surface and the augmented reality projection, which in turn relies on undistorted, to-scale 3D reconstruction inside the Kinect camera. This reconstruction is based on known optical properties of the IR pattern emitter and the IR camera (which are calculated per-device during factory calibration). Adding third-party lenses to one and/or the other will change those properties. Pattern-based skeletal tracking, as used by Xbox Kinect games, might still sort-of kind-of work, but 3D geometry reconstruction will most probably not. Custom intrinsic calibration, as demonstrated in this video, might be able to adapt to custom lenses, but if those lenses introduce non-linear distortion (and the kind of cheap lenses used in available adapters most probably would), then all bets are off.

        In short, I strongly advise against messing with the Kinect’s optics when using it with an AR Sandbox.

        • When it comes to throw ratios, I understand that 4:3 is ideal, but I am seeing throw ratios on projectors such as .51 or .49… do I want to make sure the throw ratio is close to 1.0 (assuming I am building to the standard size rather than the smaller AR Sandbox I mused about)?

          Thanks for the website and the inspiration; this has been an exciting day exploring all of this 🙂

          • Those are two different things. One is aspect ratio, the ratio between the width and height of the projected image. A 1024×768 projector has an aspect ratio of 4:3, a 1920×1080 projector has an aspect ratio of 16:9. Throw ratio, on the other hand, is the ratio between distance from the projector to the screen and width of the projected image. It determines how far away a projector has to be to create an image of a desired size.

            Ideally, the throw ratio of an AR Sandbox projector should match the field-of-view of the Kinect camera, which is close to 1:1. Projectors with throw ratios close to 1 are usually referred to as “short-throw.” Standard projectors, for home theater or business applications, generally have throw ratios upwards of 1.7. The .51 or .49 numbers you are seeing are probably inverse throw ratio, i.e., image width divided by projection distance.

            With a 1:1 short throw ratio projector, creating an image 40″ wide (and 30″ tall with a 4:3 aspect ratio) requires a projection distance of 40″.
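            The two ratios in the reply above can be illustrated with a small worked example (the function name here is mine, just for illustration): throw ratio is distance divided by image width, so multiplying it by the desired width gives the required projection distance, and a number quoted as width over distance is simply its reciprocal.

```python
def projection_distance(throw_ratio, image_width):
    """Distance from projector to screen for a desired image width.

    By definition, throw_ratio = distance / width,
    so distance = throw_ratio * width (any length unit).
    """
    return throw_ratio * image_width

# A 1:1 short-throw projector making a 40" wide image needs to be
# 40" away from the screen:
d = projection_distance(1.0, 40)

# A spec sheet number like 0.49, if it is actually the inverse throw
# ratio (width / distance), corresponds to a conventional throw ratio of:
r = 1 / 0.49  # about 2.04
```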

    • That depends on a lot of circumstances, but if you build everything yourself, it’s between USD 2000 and USD 3000, assuming you can’t use components you already have.

  23. Pingback: Build An Augmented Reality Sandbox With Real-Time Topography | Lifehacker Australia

  24. Hello,

    I’ve been trying to register to the Lake Visualization forum but it isn’t sending out registration emails to me, so here goes:

    I have tried extremely hard to get the projector recommended in your writeups. Unfortunately, I have had three projector sellers (eBay, Amazon, etc.) in a row cancel my purchase, claiming they do not actually have it… I am trying to figure out what the best substitute would be, but I'm having a hard time obtaining the recommended BenQ short-throw. I even contacted BenQ and explained the issue, and they said they do not have any refurbs to sell me. What do you think my best option for an alternative projector would be, and how would that change the design of the AR box?

    Thank you,

    – Cassie

  25. Hello,

    How do we access AR/VR mode? We are hoping to experiment with the dual-mode feature to view the terrain and water flow on a secondary display.

    • The AR/VR feature is in the yet-unreleased version 2.0 of the SARndbox package. With the current version you can already open multiple windows, but they will all show the same thing.

  26. Hi sir, thank you very much for sharing your knowledge with us. I just want to know: is there any option for converting from water mode to lava mode? (Can you please tell me the procedure?)

  27. Hello,

    Thanks a lot for everything you do!
    First, excuse my poor English…
    I would like to know if it is possible to make a sandbox with half the dimensions (halving all dimensions)?
    Thanks a lot for your answer.

    Regards.
    Tibo

    • You can make the box smaller, but the Kinect has a minimum scanning distance of about 50cm. Meaning, if you scale all dimensions of the setup to 1/2, the sand surface will be right up to that limit, and the sandbox won’t work. You need to keep the Kinect high enough above the sand, which will lead to overscan, but it’s not a fundamental problem.

      • Thank you for the answer.
        My English is poor, but if I understand correctly, I can, but I must make some adjustments (the Kinect must be over 50cm)… Right?

        Another question, please: can I use something else instead of sand? The aim is to have something more transportable…

        Thanks a lot!

        • Yes, the Kinect needs to be 50cm above the highest level you want to scan, including the space where you hold your hand to make it rain.

          You can use any kind of material, but it should be light-colored to best reflect the projected colors.

  28. Hello,

    Firstly thank you, I have had so much fun building and running the sandbox.

    We have the sandbox up and running fine, but I am still unsure whether some settings can be changed and, if so, how to do it. Are there instructions anywhere for changing such things?

    Can you change rainfall intensity?
    Can you set how long it rains for or does this depend on how long the button is held?
    Can the flow rate of the water be changed?
    How do you turn the water to lava? I presume this is just a colour change?
    How do you change map colours?

    Are there any other features that can be changed that I have not thought about? It would be really interesting to know all the different things you can do and show with the sandbox.

    • You can change rainfall strength and many other parameters via SARndbox’s command line. Run SARndbox -h from a terminal to see all recognized options.

      Rainfall duration is always how long the assigned button is pressed, or how long your hand is held above the sandbox.

      There is a dialog window inside SARndbox where you can change fluid viscosity and an overall simulation time scale factor.

      You can create and load custom height color maps via the command line. The color map format is very straightforward: a text file with one map entry per line: elevation relative to base plane, followed by red, green, and blue color components between 0 and 255 each.

      There is a thread about how to change the water to lava somewhere on the AR Sandbox support forum.
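      The height color map format described above can be sketched with a short example. The file name, elevation values, and colors below are made-up placeholders, not SARndbox defaults; each line follows the stated format of an elevation relative to the base plane followed by red, green, and blue components between 0 and 255:

```python
# Write a minimal, made-up AR Sandbox height color map: one entry per
# line, "elevation r g b", elevations relative to the base plane and
# color components from 0 to 255.
entries = [
    (-10.0,   0,   0, 128),  # deep water: dark blue
    ( -1.0,   0, 128, 255),  # shallow water: light blue
    (  0.0, 240, 230, 140),  # shoreline: sand
    (  5.0,  34, 139,  34),  # low ground: green
    ( 15.0, 139, 137, 137),  # mountains: gray
    ( 25.0, 255, 255, 255),  # peaks: white
]

with open("MyHeightColorMap.txt", "w") as f:
    for elevation, r, g, b in entries:
        f.write(f"{elevation} {r} {g} {b}\n")
```

The resulting file can then be loaded via SARndbox's command line; run SARndbox -h to see the exact option for custom height color maps.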

  29. Can you please help me with how I would install the sandbox software, how to run it, and which software I need? I want to present it at a technology exhibition. Please help me.

  30. Hey, firstly, thank you, Oliver, for the software and the whole project. I'm in the process of building my second sandbox, and I'm having trouble downloading the software; is there a planned outage?
    Also, have you tried the software with the new Pascal Nvidia cards?
    Thanks for the help
    Pete

  31. Hi
    I need help I don’t have any idea how to make sandbox
    Is it easy to make one !or any link show me how to make one step by step…
    I need to use it with autistic children..
    Thank you

  32. Hello, it is a great program! Many thanks. Have you perhaps added "Xbox One" drivers to the program since last year?
    Best regards, keep working 😉

  33. Hi,
    Is the program open source?
    Can I edit it to change the parameters, etc.?
    If yes, how can I, and where does it state its availability (open source, etc.)?
    thank you

  34. Pingback: Faire un bac à sable en réalité augmenté | PVT

  35. Hello, I’m a technician at the University of Padova, Italy. At the Department of Geosciences we managed to set up an augmented reality sandbox using a Lenovo ThinkStation computer with an Intel i5 processor, 8 GB RAM, an Nvidia GTX 610 graphics card, and Linux Mint 17.2 with SARndbox 2.3 installed. I followed all the steps shown in your tutorial to complete the calibration, and the software works well creating level surfaces, but the water functionality doesn't work: the software completely ignores the shadow when I put my hand between the projector and the box. I can only define a “Water tool” with key 1 flooding the entire box interior and key 2 drying the box, but there's no automatic water adding… How can I solve this problem? Thanks, regards.

  36. Hello,
    I’m a geoscience student currently working at the German GeoForschungsZentrum GFZ in Potsdam. We’ve been using the sandbox for about a year now; it’s wonderful and works fantastically.
    I am now allowed to use it for teaching purposes (courses on basic mapping and orientation) and plan on using the sandbox especially for explaining contour lines to children. The problem is that I so far haven’t figured out how to disable the water flow simulation, as it’s not necessary and would mostly be distracting.
    Are there any shortcuts to turn it off completely?

    I hope it’s OK that I’m asking here, and I’m sorry if this question has been answered before; I haven’t actually found anything on that (simple?) matter in the forums.

    Thanks in advance
    and greetings from Germany

  37. Pingback: Build an Augmented Reality Sandbox with Real-Time Topography - Lifehacker Guru

  38. Pingback: Münchner GI-Runde mit Echtzeitplanung | Peter Zeile

Please leave a reply!