Installing and running first Vrui applications

In my detailed how-to guide on installing and configuring Vrui for Oculus Rift and Razer Hydra, I did not talk about installing any actual applications (because I hadn’t released Vrui-3.0-compatible packages yet). Those are out now, so here we go.


If you happen to own a Kinect for Xbox (Kinect for Windows won’t work), you might want to install the Kinect 3D Video package early on. It can capture 3D (holographic, not stereoscopic) video from one or more Kinects, and either play it back as freely-manipulable virtual holograms or, after calibration, produce in-system overlays of the real world (or both). If you already have Vrui up and running, installation is trivial.

Assuming that you installed Vrui as laid out in my guide, open a terminal and enter the following:

> cd src
> wget -O - | tar xfz -
> cd Kinect-2.7
> make && make install
> cd

This will download and unpack the Kinect tarball (.tar.gz archive), enter the source directory, and build and install the package. It will automatically end up in the ~/Vrui-3.0 directory, because it’s considered an add-on to Vrui itself. If you installed Vrui in a different location than ~/Vrui-3.0, replace the make command with:

> make VRUI_MAKEDIR=<path to Vrui>/share/make
> make VRUI_MAKEDIR=<path to Vrui>/share/make install

where <path to Vrui> is the full directory name where you installed Vrui, such as /usr/local/Vrui-3.0. Or you can edit the “VRUI_MAKEDIR := $(HOME)/Vrui-3.0/share/make” line in the makefile; it’s up to you.

Ideally you want to calibrate your Kinect intrinsically before using it (intrinsic calibration derives the camera parameters, such as focal length, skew, and lens distortion, and the alignment projection between the depth and color streams), but it works so-so even without calibration, thanks to the factory calibration data stored in the firmware of each Kinect. But for advanced uses, calibration is explained in detail in the Kinect package’s README file, elsewhere on this blog, and in a series of YouTube videos.
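To make those camera parameters concrete, here is a minimal sketch of the standard pinhole unprojection that intrinsic calibration enables, i.e., turning a depth pixel into a 3D camera-space point. The names fx, fy, cx, cy (focal lengths and principal point) and all numbers are generic illustrations, not values read from any Kinect, and sign conventions for the view axis vary between packages:

```python
def unproject(u, v, z, fx, fy, cx, cy):
    """Map a depth pixel (u, v) with metric depth z to a 3D point in
    camera space, using generic pinhole intrinsics: focal lengths
    fx, fy and principal point cx, cy. All parameters here are
    hypothetical examples, not values from an actual device; depth is
    taken as positive along the view axis for simplicity."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# A pixel at the principal point unprojects straight down the optical axis:
print(unproject(320.0, 240.0, 1.0, 525.0, 525.0, 320.0, 240.0))  # → (0.0, 0.0, 1.0)
```

Lens distortion correction (also part of intrinsic calibration) would warp (u, v) before this step.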

There are two main applications: RawKinectViewer and KinectViewer. RawKinectViewer is mostly a calibration and testing utility. It shows the raw depth and color streams of a single Kinect side-by-side and is not only the central utility for intrinsic and extrinsic (placement and orientation of the camera in world space) calibration, but also useful for aiming a Kinect camera at a target object. RawKinectViewer doesn’t require any command line options. By default, it connects to the first Kinect device connected to the host computer. If you have multiple Kinects, you can select which one you want by passing a zero-based index on the command line, such as “RawKinectViewer 2” to connect to the third Kinect device.

KinectViewer is a full 3D viewer for real-time virtual holograms. You connect it to a Kinect camera by passing -c <device index> on the command line, where <device index> is a zero-based index like in RawKinectViewer. You can connect to multiple Kinects by passing multiple -c <device index> options. For example, to connect to the first and third Kinect camera on your computer, run:

> KinectViewer -c 0 -c 2

Being a Vrui application, KinectViewer can be run in Oculus Rift mode, with or without Razer Hydra devices, just like any other:

> KinectViewer -c 0 -c 2 -mergeConfig OculusRift.cfg -mergeConfig RazerHydra.cfg

KinectViewer’s main menu gives you access to a “streamer dialog” for each connected Kinect device. Here you can toggle the streams on/off, capture a background image for depth-based background removal, and change renderer settings.

The main menu also has a “Save Streams…” entry. This one will act as a holographic video recorder, and dump all data streams coming from all Kinects to a set of compressed files with a common prefix selected via a file selection dialog. Later on, you can play back such recorded streams by passing -f <file name prefix> on KinectViewer’s command line. You can then watch the recordings from any angle, point of view, or scale.

Nanotech Construction Kit

The Nanotech Construction Kit (or NCK for short) is a great application to get your immersive modeling on. It’s a very simple application, and can be used to build very complex structures with little practice. It’s also easy to install. Again, assuming the default Vrui installation:

> cd src
> wget -O - | tar xfz -
> cd NCK-1.9
> make INSTALLDIR=$HOME/Vrui-3.0 && make INSTALLDIR=$HOME/Vrui-3.0 install
> cd

For simplicity, this will install NCK right in Vrui’s installation directory. I normally recommend keeping Vrui applications separate from Vrui, but it’s OK here. If you installed Vrui elsewhere, or want to install NCK elsewhere, you know what to do.

To start NCK in Oculus Rift / Razer Hydra mode, type

> NanotechConstructionKit -mergeConfig OculusRift.cfg -mergeConfig RazerHydra.cfg

When the program starts, it shows nothing but an empty box drawn as wireframe. This box is the simulation domain: all building blocks (atoms or base units) will live inside this box. The first thing to do is bind 6-DOF dragging tools to one button on each Hydra handle. I generally recommend using the upper shoulder buttons (“LB” and “RB”). To bind a tool to LB: 1) press and hold LB and a tool selection menu pops up; 2) move the handle to select “6-DOF Dragger” from the “Dragger” submenu; 3) release LB. Repeat the process for RB.

Next, set up the program to create new building blocks when you press LB or RB without touching an existing building block. Press and hold either “2” on the left handle or “1” on the right handle (depending on whether you’re left- or right-handed) and the program’s main menu pops up. From the “Structural Unit Types” submenu, select “Triangle” to build fullerenes like Buckyballs or nanotubes, or “Tetrahedron” to build silica crystals like quartz or feldspar, and release the menu button to confirm. Finally, go to the main menu one more time, and select “Show Unlinked Vertices” from the “Rendering Modes” submenu.

From now on, whenever you press LB or RB in empty space, a new building block will magically appear out of thin air. To pick up and move an existing building block, move a Hydra handle so that the tip of the grey cone pokes the grey building block (it’s easiest to aim for dead center), and press and hold the assigned dragger button on that handle (LB or RB).

It’s tempting to aim for the red spheres when grabbing building blocks, but resist. Those don’t count for picking up building blocks. Instead, the red spheres are unsatisfied bonding sites. To bond two building blocks, move them so that two of their red spheres touch, and they’ll bond and snap into place. That’s really all there is to it.

One important note: the current version of the Nanotech Construction Kit runs synchronously, meaning that there will be a rendering pass after every simulation step. This is not good if your OpenGL driver is set up to synchronize with your display’s vertical retrace, because then there will be at most 60 frames, and therefore 60 simulation steps, per second. Molecules will feel like jelly. Try turning off retrace synchronization. Using the Nvidia driver, for example, open nvidia-settings, go to the “OpenGL Settings” tab, and uncheck the “Sync to VBlank” box. This will make NCK immensely more snappy. And typically there won’t be any problems with image tearing either. The 2.0 version of NCK will finally render asynchronously, so it won’t be a problem any longer.
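The difference between synchronous and asynchronous stepping can be sketched as a fixed-timestep loop: instead of one simulation step per rendered frame (which caps the simulation at the display rate), each frame accumulates elapsed time and runs as many fixed-size steps as fit. This is an illustration of the general technique, not NCK’s actual code:

```python
def simulation_steps_per_second(frame_rate, sim_rate, seconds=1.0):
    """Count how many fixed-size simulation steps run when simulation
    is decoupled from rendering via a time accumulator. Illustrative
    only -- not NCK's actual main loop."""
    frame_dt = 1.0 / frame_rate
    sim_dt = 1.0 / sim_rate
    steps = 0
    acc = 0.0
    for _ in range(int(frame_rate * seconds)):
        acc += frame_dt          # time elapsed since the last frame
        while acc >= sim_dt:     # run every simulation step that fits
            steps += 1
            acc -= sim_dt
    return steps
```

With a 60 Hz display and a 600 Hz simulation rate, this runs roughly ten simulation steps per rendered frame instead of one, which is why decoupling makes the physics feel stiffer.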

CAVE Quake III Arena

This oldie but goodie is the unofficial “Hello, World” of VR. It’s also one of the very few Vrui applications not developed for Vrui from the ground up. Instead, I took the existing CAVE Quake III Arena application developed by VisBox’s Paul Rajlich (who, in turn, adapted Stephen Taylor’s Aftershock renderer), and replaced all the window management and user interface mechanisms with Vrui. I mentioned in a previous post that Vrui, while not a game engine per se, would make great infrastructure for one. CAVE Quake III Arena (which isn’t really a game engine either, but let’s make believe for now) was an early test case that worked out well.

To run this application, you not only need the source code, but also the game files from the original retail release of Quake III Arena. You can probably pick that up for a buck fifty from a yard sale. It’s for sale on Steam as well, but it might not work because Steam probably wraps it in DRM. You’ll need the “raw” pak0.pk3 main game file. I checked Good Old Games, but they don’t seem to have it. If you managed to score a pak0.pk3 file somehow (I still have the disc from when I bought the game back in ’99 — amazingly, the native Linux version came out just three weeks after the Windows version), here’s how to install and run CAVE Quake III Arena:

> cd src
> wget -O - | tar xfz -
> cd CaveQuake-2.1
> make INSTALLDIR=$HOME/Vrui-3.0 && make INSTALLDIR=$HOME/Vrui-3.0 install
> cd
> cq3a maps/q3dm4.bsp -mergeConfig OculusRift.cfg -mergeConfig RazerHydra.cfg

The Aftershock renderer is not full-featured and somewhat buggy, and as a result some of the original levels don’t work properly. maps/q3dm4.bsp is my personal favorite of the ones that do. Some hero coder should do what I did to Paul’s CAVE Quake III Arena and take the now-released original id Software source code, and graft it on top of Vrui. That would be really neat. After 14 years, Quake III Arena is still a very fun game, and very different from modern shooters.

There are a variety of ways to get around the game levels. The default two-handed 6-DOF navigation tool mapped to the Hydra doesn’t work well at all. You don’t get the feeling of being in a real place (as “real” as a Quake III level can be, anyway) if you can pick up the world with your hands. Instead, use any of the surface-based Vrui navigation tools. But if you’ve already grabbed the world and moved it, it is probably nauseatingly tilted at this point. Simply open the main menu, and select “Reset Navigation” to fix that.

To get started, put the Hydra away for a second and use Vrui’s standard first-person navigation tool. Press “q” to activate it, which will show a minimalistic HUD with a typical FPS reticle and an overhead compass. The HUD is rendered at an appropriate position and distance in the 3D view (which is configurable, of course). It’s supposed to show up green, but there is some OpenGL state management bug deep inside the Aftershock engine that I haven’t fixed, so it shows up black at the beginning. It’ll correct itself later.

Once active, the FPS navigation tool works like, well, an FPS navigation tool. Press w to walk forward, s to backpedal, a or d to strafe, and the space bar to jump. Use the mouse to rotate your view left and right. Moving the mouse up or down will not rotate the view up or down, because that’s explicitly disabled in the OculusRift.cfg configuration file. If you want to check, open that file and find this section:

section FPSNavigationTool
	rotateFactors (48.0, 0.0)
	hudDist 144.0
	hudRadius 72.0
	hudFontSize 2.0
endsection

You can see how the second rotate factor is set to 0.0, because rotating the view up/down with the mouse while wearing an HMD will make you sick. If you want to try, change the number to 64.0, grab a bucket, and go ahead. This section, by the way, is also where you can change the HUD position, layout, and color (by adding a “hudColor (<red>, <green>, <blue>)” tag). For all the other configurable settings, check the “Vrui Tool Configuration File Settings Reference” document in Vrui’s HTML documentation.
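For example, a green HUD would look like this (the tuple follows the hudColor tag format quoted above; the 0.0–1.0 component range is my assumption):

```
section FPSNavigationTool
	rotateFactors (48.0, 0.0)
	hudDist 144.0
	hudRadius 72.0
	hudFontSize 2.0
	hudColor (0.0, 1.0, 0.0)
endsection
```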

The second tool to try is the “Valuator Fly and Turn” navigation tool that’s already bound to the joysticks on both Hydra handles. Turn off the FPS tool by pressing q again, and gently push forward on one of the joysticks. You’ll start gliding in the direction in which you’re pointing the handle whose joystick you pushed. Pulling the stick back will move backwards; pushing it left or right will rotate the view in that direction.

Another tool to try is the surface-aligned “Walk & Valuators” tool, which can be mapped to the joysticks on one of the Hydra handles. First you need to unbind the tool already there. Look for the small red square in the lower right-hand corner of your forward view. That’s Vrui’s “tool kill zone,” used to delete unwanted tools. Hold one of the Hydra handles so that the tip of its cone appears in front of the red box, and push that handle’s joystick forward. This will show a small box with text indicating what tool is currently bound to that joystick axis. If you let go of the stick again while still holding the handle over the kill zone, the tool will be destroyed. Now you can assign a new tool by pressing down on the stick (as if it were a button) and holding it. This will open a tool selection menu; from the “Navigation” -> “Surface-aligned Navigation” submenu, select “Walk & Valuators,” and let go of the stick. This will bring up a dialog prompting you to assign the left/right function. Push the same stick to the right, and let it go again. You’re now prompted for the forward/backward function; push the stick forward and let go. Finally, you’ll see a “Jetpack” function; ignore it, and finish tool creation by pressing down on the stick again.

You can activate this new tool by pressing down on the stick. This will show a similar HUD, and if you look at your feet, you’ll see two concentric circles and a wedge. To move around, push the stick in any direction, and you’ll glide in that direction. To rotate your view, look left or right. The more you look to your left, the faster the world will rotate to your right, and vice versa. The bottom line is that whatever direction in the world you want to look will end up aligned with the ideal forward direction, i.e., facing your keyboard and monitor. The movement speed, rotation speed, and angle dead zone can all be configured via OculusRift.cfg.
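The gaze-driven rotation can be sketched as a simple dead-zone mapping. All numeric values below are made-up illustrations, not Vrui’s actual configured defaults, and the function is my own simplification of the behavior described above:

```python
def yaw_rate(gaze_deg, dead_zone_deg=15.0, max_rate_dps=90.0, max_gaze_deg=60.0):
    """Sketch of gaze-driven rotation: inside the dead zone the world
    stays put; beyond it, the world counter-rotates faster the farther
    you look off-axis. Positive gaze_deg means looking right. All
    constants are hypothetical, not Vrui's real configuration values."""
    a = abs(gaze_deg)
    if a <= dead_zone_deg:
        return 0.0
    frac = min((a - dead_zone_deg) / (max_gaze_deg - dead_zone_deg), 1.0)
    rate = frac * max_rate_dps
    # Looking left (negative gaze) rotates the world right (positive rate),
    # and vice versa, until the target direction aligns with straight ahead.
    return rate if gaze_deg < 0.0 else -rate
```

In Vrui, the dead-zone angle and rotation speed are exactly the kinds of values you would adjust in OculusRift.cfg.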

The last thing you can do is create a weapon. Assign a “Locator” -> “6-DOF Locator” tool to any button on one of the Hydra handles. This will probably create a shimmering purple box that’s attached to your hand (and, incidentally, it will turn the HUDs green, and the main menu grey, as they should be). Since you don’t want a box, delete the tool again by holding the Hydra handle over the tool kill zone and pressing the same button. Every time you bind a 6-DOF locator, you will cycle through the available weapon models. After a few iterations, you might end up with a shotgun, or a BFG. Depending on the level, not all these models will be defined, which is where the purple boxes come from. Stupid bug.

The weapons don’t do anything, but they’re still fun to wave around. You can also create a more elegant weapon, for a more civilized age, by binding a “Pointer” -> “Jedi Tool” to an arbitrary button. Pressing that button afterwards will toggle the tool on/off. Sound effects to be supplied by the user.

36 thoughts on “Installing and running first Vrui applications”

    • Nitpicking. :) A single Kinect facade is definitely 2.5D due to being collected from a single point of view. But you can combine multiple facades for a more complete view. And “real” holograms have the same problem, to a lesser degree. They, too, can’t show what the laser light field doesn’t see. The light field is diffused, so it has more coverage than a single Kinect facade, but it isn’t all-seeing either.

      I can see where I’m pushing it, but with those qualifications, in terms of quickly getting the main point across, I’m comfortable with using the h-word.

      • The image created by the Kinect has more in common with stereoscopic images than holograms; the only differences that make it stand out from regular stereo are that the parallax is calculated based on a hardcoded pattern instead of a second image of the same subject, and that the color comes from a separate camera. Holograms capture volumetric data and even stuff like the refractive behavior of transparent objects; meanwhile the Kinect gives just a flat picture plus a heightmap.

        • I disagree with “Kinect is mostly like stereoscopic images.” It’s really fundamentally different from that: a stereoscopic image only creates a 3D impression when viewed by a human (or animal, I guess) brain. It conveys only a single view of the captured objects, and cannot be viewed from other angles. The reconstruction from image to 3D model is done by the viewer, in other words.

          A 3D image captured by Kinect et al. is a 3D description of a partial object that can be viewed from any point of view; the conversion from image to 3D model is done in the camera.

          You’re right about Kinect not capturing optical properties, of course. The kind of 3D imagery captured by Kinect et al. is in between 2D or stereo images and true holograms, for sure, but I argue it’s closer to holograms than to images. If nothing else, then from a technical perspective: I can render Kinect data as 3D objects in a 3D graphics application, but I cannot do the same with stereoscopic images.

          But I agree with the general sentiment, it’s stretching the definition.

  1. Pingback: Gemischte Links 14.08.2013 | 3D/VR/AR

  2. Hi there!

    I was wondering if there is any way to get the Kinect for Windows working with Vrui.
    Can libfreenect help? I have a Kinect for Xbox and a Kinect for Windows sensor. I was hoping to use them simultaneously.


    • It’s definitely possible. The main problem is that I don’t have access to a Kinect for Windows, so I can’t experiment with the USB protocol. I don’t even know first-hand whether my existing code supports KfW; someone else told me they tried one and it didn’t work. If libfreenect supports KfW I can have a look at their code to see what the differences are, but without a device to test it’s an uphill battle.

      • Thanks! I have tried using KfW with Vrui-2.7; the device was not detected. Libfreenect supposedly supports KfW. The “glpclview” example in the repository is a very basic 3D reconstruction program. If the implementation of libfreenect in Vrui is trivial, I can help test it out on KfW :) Regards.

        • I think the detection problem is a minor matter. KfW has a different USB vendor:product ID than KfX. If you do an lsusb when the KfW is plugged in, you’ll get the new values, and you can put them into Kinect/Camera.cpp for a quick test, to see what happens.

          The bigger problem is that Microsoft seems to have changed the USB protocol between the Xbox and Windows versions, and fixing that will be harder.

          • Turns out libfreenect is very unstable with KfW. I am just going to stick with KfX. On a side note, I am experiencing a couple of problems with Kinect-vrui. After I complete the intrinsic calibration procedure for my KfX, the serial number of the DAT file is always 0000000000000000. Everything works fine, but it might turn out to be a problem if I calibrate multiple kinects. And also, during calibration I can access the high-res (15hz) RGB camera, but when I start KinectViewer with “-high” parameter, I get this error: “Protocol error while requesting parameter subset”. Please help! Thanks!

          • Do you get a proper serial number when running “KinectUtil list”? I’ve had this issue before where I would randomly get a bogus serial number, and the Kinect would “forget” its calibration parameters as result. In my case I think it was a bad USB port — I moved to a different port, and the problem went away. You could also try resetting your Kinect via “KinectUtil reset ” or “KinectUtil reset all” and then try again.

            I have not seen the second problem yet, but then I never use the Kinects in high-res mode due to the reduced frame rate. I don’t have one here right now, or I would try immediately.

  3. I still get “device serial number 0000000000000000” when I run KinectUtil. I tried resetting via KinectUtil; it didn’t have any effect. I am running Vrui on my Mac, which only has 2 USB ports. I have tried both; neither of them seems to make a difference. Also, I have never gotten a ‘bogus’ serial number. The Kinect always remembers the calibration parameters, but the DAT file is always 0000000000000000.

    • I’d consider “000…00” bogus. ;) But joking aside, I don’t know what’s going wrong. I haven’t tested the Kinect package on OS X in ages. Last time I did, the software didn’t have per-device calibration parameters yet. Maybe the libusb library on OS X does serial numbers differently. I’ll have to try it again, but that’ll take me a few days. The high-res problem might be a related issue. I’ll have to ask you to hang on for a bit.

      • Thank you very much! Meanwhile, I’ll try to install it on Ubuntu, and see if the problem persists. Sorry to keep bothering you about this, but I was looking through the src and found some references to ‘server clients’ and ‘head tracking’. Have you managed to stream the live view into another system? Also, is it possible to use head tracking (with a different kinect) to orient the live view with the current program?

        • Yes, the Kinect package has full client/server support for live streaming, and also has a plug-in for Vrui’s collaboration infrastructure that allows holographic video conferencing. But setting it up properly is a bit tricky, so I haven’t made a big deal out of it thus far. But see the “All Quiet on the Martial Front” and “Collaborative Visualization of Microbialites” videos on my YouTube channel for how it works and looks.

          • Hi Oliver, sorry to keep bothering you. I hope you don’t mind. I am trying to set up the Collaboration Infrastructure on Ubuntu, but I am a bit of a noob. I can run the CollaborationServerMain program and I can see “Started server on port 26000” with the running numbers (I am assuming this computer itself becomes the server). I also see “Local Address: Foreign Address:*” when I run ‘netstat -antp’ in a terminal. But I don’t know what to do after this. How can I connect another computer as a client to this server? And also, how do I link it up with the Kinects?

          • Setting up and using the collaboration infrastructure is still a bit complex at this point; the whole thing is very much under development. If you have a collaboration server running, the easiest is to connect instances of the CollaborationClientTest application to it. You can have multiple on the same computer to simulate remote collaboration. From a terminal: CollaborationClientTest -server localhost:26000 -name <some name>. The -name <some name> bit is optional, but it helps keep multiple clients on the same computer apart. For testing, I usually open two windows side-by-side, and name the left one “Left” and the right one “Right.” Just running it like this will enable input device and pointer sharing, and shared 3D drawing via the “shared curve editor” tool. If you have a webcam and a sound source, you will also get 2D video and audio streaming by default (but only the client first started gets access to the devices).

            Adding the Kinect isn’t that much more difficult (enable the “Kinect” protocol in Collaboration.cfg), but you’ll have to create extrinsic transformations to properly embed your Kinects into the physical space of the client with which they are associated, or it won’t be useful. I’ll have to write a detailed article about that at some point.

      • I tried running vrui on Ubuntu. Same problem… Just a bunch of zeroes. But I have figured out the cause of the problem. When I use the Kinect for Xbox 1414 model, everything works fine and I get the proper serial number (even on a Mac). But when I plug in model 1473 (the newer one), I can’t read the serial. Any idea on how to fix this? Thanks!

      • Even though the NUI Camera serial (product ID: 0x02ae) is not read properly, the Kinect Audio (product ID: 0x02ad) has the real serial number, or at least a proper identifier. Is there any way I can use this serial number instead of the string of zeroes?

      • I seem to have found a temporary fix for the problem. I just edited the “serialNumber” assignment in Camera.cpp to read off the number from 0x02ad, and now it works! When I calibrate, I get a proper serial number. It’s not the true camera device ID, just an alternative using the audio device ID.

          • Happy to help out. BTW, does Vrui have any support for head tracking using Kinect cameras? I was wondering if you could achieve motion parallax with multiple Kinects for 3D reconstruction and a separate Kinect for head tracking. Thanks!

        • Hi! Glad to see Kinect-2.8 support KfW and 1473!

          I am trying to recreate the telepresence system shown here:

          As of now, I can use OpenNI (and a separate Kinect) to retrieve the position (x, y, z) of the user’s head relative to the IR receiver. I can also calculate the head position with reference to the center of my display. However, I don’t know how I can use the xyz coordinates to translate the virtual camera in Vrui. Are there any built-in navigation tools that allow me to do this?


  4. Hi Oliver,

    Thanks for your help getting the Rift up and running! Seems to be working well now, with my Hydra too – great fun playing with vrui in the Rift :)

    Just trying to set up my Kinect however and I’ve come a bit unstuck – is there a walkthrough from start to finish to get it running something like your YouTube example where you can see your hands and keyboard / Hydra etc?

    I can do the above steps, and when I load the RawKinectViewer I see what you’d expect (side by side depth image and full colour camera image), however when I run KinectViewer I only see the depth image in 3D (no texture overlay) – I couldn’t for the life of me see how to enable this in the menus, maybe I’ve missed something obvious? Ideally I’d like to be able to set up multiple Kinects too and start experimenting with the virtual presence as that would be absolutely amazing!

    Thanks again for your help and work.

    All the best

    Chris J

    • Two reasons why you might not get a texture in KinectViewer. Sometimes when the Kinect is initialized, one of the cameras doesn’t turn on for some reason. If you run the program a second time, it normally goes back to working. But this problem happens very rarely. More probably, there is something wrong with the Kinect’s internal calibration. Did you go through the custom calibration procedure I explain in the Kinect package’s README file, and elsewhere on this blog? If not, the software will try to read the factory calibration data directly from the Kinect. That calibration isn’t great, but it’s OK; however, I’ve recently gotten reports that my software can’t get the calibration data from newer Kinects. It’s possible that Microsoft changed something. Do you get any error or warning messages when running RawKinectViewer or KinectViewer, specifically something about factory calibration? When running “KinectUtil list”, do you get strange serial numbers for your devices, such as all zeros?

      Once you have KinectViewer working, putting 3D video as an inlay into other applications just requires calibration. You’ll need to tell the Kinect driver how the camera-centric coordinates from each Kinect relate to the shared physical space of Vrui. Basically, in camera space the origin is at the lens’ location, and the negative Z axis is the viewing direction. In Vrui space, by default, the origin is at the center of your screen, the X axis goes to the right, and the Z axis goes up. I don’t have a step-by-step calibration guide yet, but it boils down to measuring tie points both in camera space (via KinectViewer) and in physical space (by touching them with a Hydra handle).
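      The tie-point step at the end boils down to finding the rigid transformation that best maps camera-space points onto their measured physical-space counterparts. Below is a minimal sketch using the standard SVD-based (Kabsch) absolute-orientation method — my illustration of the kind of computation involved, not Vrui’s actual calibration code:

```python
import numpy as np

def align_tie_points(camera_pts, physical_pts):
    """Find rotation R and translation t minimizing the sum of
    ||R @ c + t - p||^2 over corresponding tie-point pairs, via the
    standard SVD (Kabsch) method. A sketch of the computation behind
    extrinsic calibration, not Vrui's code."""
    C = np.asarray(camera_pts, dtype=float)
    P = np.asarray(physical_pts, dtype=float)
    cc, pc = C.mean(axis=0), P.mean(axis=0)       # centroids
    H = (C - cc).T @ (P - pc)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = pc - R @ cc
    return R, t
```

At least three non-collinear tie points are needed; in practice you would measure more and let the least-squares fit average out measurement noise.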

      • Hi Oliver,

        Thanks again for the reply, I’ve been trying to mess around with this but seem to be hitting problems still – to answer your question re: my Kinect, the KinectUtil does list it with a valid serial number. When I load the Kinect Windows SDK utility for combined camera and depth data it renders both together in 3D fine also with pretty good accuracy so I believe the internal config is good. KinectViewer in vrui just shows the depth image, but flickering between various (seemingly pastellised) colours. I don’t have the time at the moment to make the full checkerboard unfortunately but I did try the other more simple calibration process you demonstrated (with the CD) and managed to print out a number of values – however, when I try and run CalibrateCameras, I get ‘cannot read field 0 of record 1’… I see someone else had this problem and you suggested possibly spaces or line endings in the CSV file, however I’ve tried everything and still seem to get the same error (no spaces, other CSV readers open the file fine) – it is UTF-8 format with Linux line endings. Tried to cut it down to just one record, still the same problem :-/ I’m thinking if I can at least generate the matrix file I might be able to get somewhere?

        Thanks again :)

  5. Hello Oliver,

    I am using a Kinect sensor and want to calibrate it your way.
    Is it possible to later use the calibrated Kinect with MATLAB on Windows?
    Or does it only work on Ubuntu?
    Or does it only work with Vrui?

    • Sure. If you skip per-pixel depth correction, then the result of calibration is two 4×4 matrices. The first maps pixels from depth image space, where x and y are half-integer pixel coordinates (0.5, 1.5, …, 639.5, etc.) and z is a raw integer depth value, to 3D camera space where the origin is at the camera’s focal point, x points right, y points up, and negative z points along the viewing direction. The second matrix maps a pixel from 3D camera space into color image space for texture mapping. The matrices are written to a binary file called IntrinsicParameters-<serial number>.dat after calibration, as two sequences of 16 8-byte floating-point numbers defining the matrices in row-major order.
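      Reading those matrices back in external software could look like this — a sketch that assumes little-endian byte order (the file actually stores raw doubles in the writing machine’s native order, so adjust the format string if needed):

```python
import struct

def read_intrinsic_parameters(path):
    """Read the two 4x4 row-major matrices from an
    IntrinsicParameters-<serial>.dat file as described above:
    32 8-byte floating-point numbers, depth-projection matrix first,
    then the color-projection matrix. Little-endian byte order is an
    assumption here, not something the file format guarantees."""
    with open(path, 'rb') as f:
        values = struct.unpack('<32d', f.read(32 * 8))
    depth_proj = [list(values[4 * i:4 * i + 4]) for i in range(4)]
    color_proj = [list(values[16 + 4 * i:20 + 4 * i]) for i in range(4)]
    return depth_proj, color_proj
```

In MATLAB, the equivalent would be an fread of 32 doubles followed by two reshape/transpose calls to undo the row-major layout.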

      Per-pixel depth correction improves calibration overall, but it’s much more complicated to handle in external software, because the results are the control point coefficients of two bivariate tensor-product uniform non-rational B-splines.

  6. Hi,
    I don’t know if this is the right place to post this question, but I have seen a related question being asked in the conversation above. Please delete this if it is inappropriate.

    I wanted to ask if anyone has done the tele-collaboration. I have been able to do it partially: I can connect between client and server and can see my image on the server, though I still have to work on the calibration. But I haven’t been able to share the application between two remote locations, such that the remote users can work together in the same application.

    I checked Oliver’s videos, and he does it really simply (the videos are available at ). But I have tried a lot of methods to do this. I have been using the CollaborationClientTest.

    On the client side:
    I ran the server
    CollaborationClientTest -name -vislet KinectViewer -navigational -p

    On the server side:

    I ran the server

    CollaborationClientTest -vislet KinectViewer -navigational -p

    It worked. I also made the necessary changes in the KinectServer and collaboration configuration files, based on the default prototypes provided by Vrui.

    I tried to use the shared editor tool for collaboration, but it didn’t work.

    Does anyone have any idea how to make this work: sharing the application among the users such that remote users can work together without any barrier?


    • The basic approach right now is the following:

      1. On a server computer (can be one of the participants), run CollaborationServer and ensure the numbers count rapidly. Make note of the server computer’s host name or IP address, and the server port printed by CollaborationServer. Let’s call them <host> and <port>.
      2. On the server computer, add a firewall exception rule so that the participants can connect to the collaboration server, using the TCP port number from above.
      3. At this point, test basic functionality by running CollaborationClientTest on multiple computers:
        CollaborationClientTest -server <host>:<port>

        Once all are running, create a “Shared Curve Editor” tool from the tool menu, and draw. All participants should see the drawing.

      4. On the computer of each participant, edit Collaboration.cfg to enable the Kinect protocol by adding “Kinect” to the “protocols (…)” list.
      5. Add a firewall exception rule such that other computers can connect to the Kinect server, using the TCP port number printed when KinectServer starts.
      6. On the computer of each participant that has a Kinect camera for capture, configure and run KinectServer (with the firewall exception from the previous step in place), and add the host name of the local computer running that KinectServer instance, and KinectServer’s port, to Collaboration.cfg in a new “Kinect” section:

        section Kinect
          kinectServerHostName <local host name>
          kinectServerPort <Kinect server port number>
        endsection

      7. Then run CollaborationClientTest again on multiple computers, using the same command line as above. Do not use the KinectViewer vislet unless you want to see your own local 3D video avatar; Kinect-based 3D video avatars are handled by the “Kinect” collaboration plug-in. If you do want to see your local avatar, do not use navigational mode. An important point: if you add a -vislet <vislet name> [vislet arguments] sequence to a command line, you need to terminate that sequence with a semicolon, or Vrui will treat all following arguments as vislet arguments. That’s what might have thrown you off. Example:

        CollaborationClientTest -server <host>:<port> -vislet KinectViewer -p <local host name> <Kinect server port number> \; -name "Some name"

      8. Calibrate each Kinect camera relative to its local environment’s physical coordinate space so that the 3D video properly lines up with everything else.
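      Since steps 2 and 5 above both hinge on the firewall exceptions being correct, it can save time to verify that the collaboration server’s and each KinectServer’s TCP port is actually reachable from every participant before launching the clients. A small sketch (a hypothetical helper, not part of Vrui; the host names and port numbers below are placeholders):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or name resolution failed
        return False

# Replace these placeholders with your actual <host>/<port> values:
checks = [('collaboration server', 'server.example.org', 26000),
          ('Kinect server', 'kinect-host.example.org', 26000)]
for name, host, port in checks:
    status = 'reachable' if port_reachable(host, port) else 'NOT reachable'
    print(f'{name} {host}:{port} is {status}')
```

      If a port shows as not reachable from a remote machine but works locally, the firewall rule from step 2 or step 5 is the first thing to re-check.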
      • Hi Oliver,

        Sorry for asking questions a couple of times. I am stuck on a problem with the video. I followed the procedure as you instructed. I am now able to share the screen (I was able to use the Shared Curve Editor tool and share my drawing with the other users), and I am able to see the glyphs of the participants.

        I have three computers, and one of them has the Kinect cameras. I am unable to see the Kinect view from the Kinect cameras, but I am able to see that computer’s glyph.

        I have used ./bin/CollaborationClientTest -server ServerHostName:ServerPortNumber command.

        I haven’t used the KinectViewer vislet this time.

        I can share my images with you if you would like to see them.


  7. Hi,
    First of all, thank you for those instructions.

    I tried your instructions listed above. I was able to share the application, and I also tried the Shared Curve Editor tool and could see the drawings made by each participant on all participating computers.
    I haven’t used -vislet KinectViewer or -navigational, but just used CollaborationClientTest -server host_name:port_name

    I am able to share the same workspace now. I can see the glyphs of all the participants on each of the computers. But I don’t know why I am unable to view the Kinect-based 3D video avatars.

    Can you please help me with how to get the Kinect-based 3D video avatars?

    My scenario is:

    I have four computers. I have used one as the collaboration server; the second one has the Kinect camera set up and extrinsically calibrated; and the third and fourth participants do not have any Kinect cameras, they are just sharing the workspace.

    First computer, CollaborationServer:
    I started CollaborationServer and the numbers counted rapidly. I noted the host name and the port of the collaboration server.

    Second computer, the one with the calibrated Kinect cameras:
    On this computer, I added “Kinect” to the protocols (…) list and also added the Kinect section with its host name and port number to Collaboration.cfg.
    I started KinectServer on this computer (./bin/KinectServer).
    The Kinect.cfg file has also been changed accordingly, with the Kinect cameras’ information added properly.
    I used the CollaborationClientTest -server host_name:port_name command to view the interface, and it was working: we could draw using the Shared Curve Editor tool and also see the glyphs of the other participating computers.

    Third computer, just participating without Kinect cameras:
    On this computer, I have just added “Kinect” to the protocols (…) list.
    I used the CollaborationClientTest -server host_name:port_name command to view the interface, and it was working: we could draw using the Shared Curve Editor tool and also see the glyphs of the other participating computers (second & fourth).
    But I think the Kinect protocol should replace the glyph of the second computer with its Kinect-based 3D video avatar, and it’s not doing so?

    Fourth computer, just participating without Kinect cameras:
    On this computer, I have just added “Kinect” to the protocols (…) list.
    I used the CollaborationClientTest -server host_name:port_name command to view the interface, and it was working: we could draw using the Shared Curve Editor tool and also see the glyphs of the other participating computers (second & third).
    But I think the Kinect protocol should replace the glyph of the second computer with its Kinect-based 3D video avatar, and it’s not doing so?

    I tried different random commands for the Kinect-based 3D video avatars but was unsuccessful. Could you please help me with this situation? I am very close to making this tele-collaboration work in my environment.

    Thank you very much for your help.