I finally managed to upload a pair of tutorial videos showing how to use the new grid-based intrinsic calibration procedure for the Kinect camera. The procedure made it into the Kinect package at least 1.5 years ago, but somehow I never found the time to explain it properly. Oh well. Here are the videos: Intrinsic Kinect Camera Calibration with Semi-transparent Grid and Intrinsic Kinect Camera Calibration Check.
Unlike the initial calibration method, which used a simple prop, this one requires a rather complex calibration target (on the upside, this one actually does calculate, and not just guesstimate, reprojection parameters, so it leads to physically accurate 3D reconstruction). To wit, it needs a semi-transparent checkerboard (see Figure 1). Checkerboards are standard camera calibration props, so that’s a no-brainer, but why semi-transparent? The answer is simple: A regular black-and-white checkerboard looks like a checkerboard to a color camera, but it looks like a simple rectangle to the Kinect’s depth camera, because that one is entirely colorblind. A semi-transparent checkerboard, on the other hand, will look like a checkerboard to both cameras. This enables a procedure where both cameras can be calibrated at the same time, and with respect to each other. The latter is very important, because without it, the resulting 3D reconstructions would have mismatches between 3D geometry and color texture.
Building a precise semi-transparent checkerboard is not simple. Here’s one approach that worked really well for me: First, print a large grid (with very thin grid lines) onto a large piece of paper, for example using a large-format printer or plotter. My target has 7 by 5 grid tiles (both need to be odd numbers), and each grid tile is 3.5″ x 3.5″, so the overall grid size is 24.5″ x 17.5″. Here is a PDF file to print this grid; many copy shops have large-format printers. The grid should not be smaller, because otherwise it would be too small to reliably calibrate the Kinect at larger distances, and it should not be much larger, because it would not fit into the Kinect’s field-of-view up close. My target more or less fills the Kinect’s field-of-view at the Kinect’s minimum viewing distance, but is large enough to be used up to about 2m away (which is the practical upper range of the Kinect’s depth perception, anyway).
Then, buy a sufficiently large piece of plate glass (about 35″ x 28″ to leave around 5″ of border around the grid), and glue the entire printed grid to the glass plate so that it is roughly centered.
Then, use a long metal ruler and a very sharp knife to cut along all grid lines, horizontally and vertically. Ensure that all grid tiles are cleanly separated from each other.
Finally, carefully peel off all odd grid tiles, ensuring to leave all corner tiles in place, and carefully remove any glue residue from the now transparent grid tiles.
The result should be a very precise regular grid. I found that the alternative process, cutting out individual 3.5″ x 3.5″ grid tiles and gluing them to the glass individually, is not nearly as precise. The tiles won’t line up properly, and the resulting grid will not be exactly rectangular. It also takes a lot longer.
Then, once you have the calibration target, follow the instructions in the first video to capture a large enough number of calibration tie points. Update: As of version 2.8 of the Kinect package, there is a slight change in the procedure to bind the grid drawing tool, and this change is not reflected in Kinect 2.8’s README file due to a packaging oversight. Instead of using buttons “1”-“5” when binding the “Draw Grids” tool, use buttons “1”, “W”, “2”, “3”, “4”, and “5”, in that order. Afterwards, calibration proceeds as shown in the video.
I recommend taking tie points from at least four different distances, starting at the closest distance where the Kinect can reliably see the grid, and working up to the maximum working distance at which you intend to use the Kinect. For each of the distances, capture one tie point head on, and two tie points from increasing angles, if possible going up to an almost grazing angle (in practice, say around 60°). Take one view from the left, and one (with a different angle) from the right. This results in twelve calibration tie points (four distances from three angles each). The at-an-angle poses are very important to establish constraints on the depth conversion formula that converts raw depth values reported by the Kinect into real 3D distances.
Afterwards, check the new calibration by following the instructions in the second video. Visually, the color image and 3D geometry should line up well. Specifically, check the edges of the grid tiles for alignment. Then measure the reconstructed size of the target, and compare it to the known real size. If the difference is larger than acceptable (about 1mm for an up-close target is achievable), redo the calibration with slightly different poses. The Kinect’s depth images are very blotchy due to the interpolation method that converts the raw, scattered depth measurements into a continuous depth image, and this blotchiness makes it difficult to align the observed and virtual grids (and prohibits an automatic matching method in the first place). Practice makes perfect.
One thing to keep in mind is that the current calibration procedure does not account for lens distortion. In the Kinect, lens distortion has three effects:
- Radial distortion in the color camera. This is lens distortion as it’s normally understood. Its effect is that the manually-drawn virtual grids don’t exactly match the observed grid.
- Radial distortion in the depth camera. The depth camera is a virtual camera; it is the result of displacement matching on the IR projector and real IR camera, both of which have their own lens distortions. The depth camera is already rectified, but there is some subtle distortion left over.
- Depth distortion in the depth camera. The secondary effect of lens distortion in the real IR camera is radially increasing depth distortion. This manifests in the 3D reconstruction of a flat surface looking a little like a dinner plate.
The first effect has well-known remedies and calibration procedures, but the Kinect software doesn’t contain them yet. The second and third effects are specific to the Kinect, and there are no tried and true correction methods. I have an experimental depth distortion correction procedure, but it’s not fully integrated into all Kinect applications, and the depth radial distortion is so subtle that it is drowned out by the depth image’s blotchiness — it is measurable in the final 3D reconstruction, but I’m not sure how to measure it for calibration purposes.
The bottom line is that intrinsic calibration is not yet reliable at the edges of the depth image, and therefore the grid target should be lined up so that it is roughly centered in the depth camera’s frame for each of the calibration tie points.
Pingback: Multi-Kinect camera calibration | Doc-Ok.org
Pingback: Kinect factory calibration | Doc-Ok.org
May I ask which driver is used for this software? All I could find out was that it is being used on Linux. Is there any way to use it on Windows?
Thanks.
The driver is the Kinect package that’s linked from the top of this post, http://idav.ucdavis.edu/~okreylos/ResDev/Kinect . It is based on the Vrui framework ( http://idav.ucdavis.edu/~okreylos/ResDev/Vrui ), but doesn’t use any external Kinect software such as OpenNI or libfreenect.
However, neither the Vrui framework nor the Kinect package work well under Windows. You’d either have to run them under the cygwin emulation layer, or inside a virtual machine running Linux. Either way, performance will be very poor.
Good day, this is an interesting article since I’m in the process of calibrating an xtion (equivalent to the kinect) with an HD image camera.
I was curious about the motivations behind using the depth image rather than the infrared image for calibrating the two images, since the IR image would allow automatic calibration (i.e., it is not fuzzy like the depth image).
The depth image is based on the IR image, right? So the deformation of the depth image should follow the deformation of the IR image?
Moreover, I would need a reference on this one, but I think the depth image is already calibrated so there shouldn’t be any intrinsic calibration needed for this one.
It’s true, most others do indeed use the IR camera directly for calibration (factory calibration, Burrus, et al.). There are two main reasons for my approach. One is legacy; I started doing this before anybody really knew how to get at the raw IR image, and once you have the workflow all set up, it’s hard to change approaches completely. 🙂
The second reason is to see whether a “direct” approach, calibrating the depth camera on the depth image itself, can yield better results than the “indirect” approach, basing it on the observed IR camera and the unknown IR projector. At least judging by the Kinect’s factory calibration (see related post), which uses “indirect” calibration, the depth-based approach can do much better. One reason is that the factory calibration uses a depth displacement term that’s parallel to the IR camera’s horizontal axis, which is not necessarily realistic. Other custom approaches might not do that, but I haven’t gotten around to comparing this method to the other IR camera-based ones. I’ve seen several papers describing 3D capture methods based on Burrus’ calibration method, and their results weren’t great.
Regarding depth image calibration: the Kinect’s firmware does take steps to rectify the depth image, but there is still significant non-linear distortion. When pointing the Kinect at a flat surface, the depth image looks more like a bowl. This hints at uncorrected lens distortion either in the IR camera or in the IR projector itself. The depth-based approach can correct for this very easily by capturing per-pixel depth correction coefficients, but the same would be very hard to impossible using purely camera-based approaches, because the IR projector is never observed during calibration (usually taped shut, actually) and assumed to be “perfect.”
Pingback: Installing and running first Vrui applications | Doc-Ok.org
Hi Okreylos, is there a method to convert your .color and .depth files into pcd-files for further processing with methods from the point cloud library? By the way, thank you for writing such an extensive explanation of your calibration procedures.
There is the Kinect::FileFrameSource class, which reads depth and color frames from a pair of .depth/.color files. You can read one pair of frames at a time and use the intrinsic parameters stored in the .depth file to project the raw depth images into 3D camera space. You can then colorize the projected images by mapping the associated color frame onto it, and save each projected and colorized depth image pixel as a 3D point to a file in point cloud library format. The Kinect::Projector class shows how exactly to project depth pixels and colorize them.
I don’t manage to perform the conversion to PCD; can you help me get going a little bit? I’m still a beginning C++ programmer.
Well, it basically goes like this:
#include <iostream>
#include <Math/Constants.h>
#include <Geometry/Point.h>
#include <Geometry/ProjectiveTransformation.h>
#include <Kinect/FileFrameSource.h>

typedef Kinect::FrameSource::DepthCorrection DC;
typedef DC::PixelCorrection PC;
typedef Kinect::FrameSource::IntrinsicParameters::PTransform PTransform;
typedef PTransform::Point Point;

/* Open file pair: */
char* colorFileName=...;
char* depthFileName=...;
Kinect::FileFrameSource source(colorFileName,depthFileName);

/* Get source's per-pixel depth correction parameters: */
DC* dc=source.getDepthCorrectionParameters();
PC* pc=dc->getPixelCorrection(source.getActualFrameSize(Kinect::FrameSource::DEPTH));
delete dc;

/* Get source's intrinsic parameters: */
Kinect::FrameSource::IntrinsicParameters ips=source.getIntrinsicParameters();

/* Read frames until done: */
while(true)
 {
 /* Read next depth frame: */
 Kinect::FrameBuffer depthFrame=source.readNextDepthFrame();
 if(depthFrame.timeStamp==Math::Constants<double>::max) // Bail out on end-of-file
  break;

 /* Process all valid pixels in the current depth frame: */
 unsigned short* pixPtr=static_cast<unsigned short*>(depthFrame.getBuffer());
 PC* pcPtr=pc;
 for(int y=0;y<depthFrame.getSize(1);++y)
  for(int x=0;x<depthFrame.getSize(0);++x,++pixPtr,++pcPtr)
   if(*pixPtr!=Kinect::FrameSource::invalidDepth) // Check if pixel is valid
    {
    /* Create a depth-corrected 3D pixel in depth image space: */
    Point img(double(x)+0.5,double(y)+0.5,double(pcPtr->correct(*pixPtr)));

    /* Unproject the pixel to camera space: */
    Point cam=ips.depthProjection.transform(img);

    /* Save camera space point's coordinates to output file: */
    std::cout<<cam[0]<<", "<<cam[1]<<", "<<cam[2]<<std::endl;
    }
 }

delete[] pc;
Instead of printing the camera-space pixels to stdout, you'd write them to the PCL point file of course.
Note to self: never paste C/C++ code into a WordPress comment.
Who published this article?
Well, I guess that would be me.
Hi
I am trying to calibrate my Kinect (1414) with your Kinect-2.8 package on OS X 10.7.5. I installed Vrui 3.1-002 via Homebrew and was able to make and install the Kinect package.
running
./bin/KinectUtil getCalib 0
gives me the intrinsic parameter, but
running
./bin/KinectViewer -c 0
or
./bin/RawKinectViewer 0
opens a window, closes it again, and I get the following line in the terminal:
Segmentation fault: 11
Maybe important: I got some warnings when compiling the Kinect-2.8 package:
—- Kinect configuration options: —-
CPU-based facade projector selected
—- Kinect installation configuration —-
Root installation directory: /usr/local/Cellar/vrui/3.1-002-1
Calibration data directory: /usr/local/etc/vrui/Kinect-2.8
Resource data directory: /usr/local/Cellar/vrui/3.1-002-1/share/vrui/Kinect-2.8
Vislet plug-in directory: /usr/local/Cellar/vrui/3.1-002-1/lib/vrui/VRVislets
—- End of Kinect configuration options: —-
Compiling Kinect/Camera.cpp…
Compiling Kinect/ColorFrameReader.cpp…
Compiling Kinect/ColorFrameWriter.cpp…
Compiling Kinect/DepthFrameReader.cpp…
Compiling Kinect/DepthFrameWriter.cpp…
Compiling Kinect/FileFrameSource.cpp…
Compiling Kinect/FrameReader.cpp…
Compiling Kinect/FrameSaver.cpp…
Compiling Kinect/FrameSource.cpp…
Compiling Kinect/FrameWriter.cpp…
Compiling Kinect/HilbertCurve.cpp…
Compiling Kinect/LossyDepthFrameReader.cpp…
Compiling Kinect/LossyDepthFrameWriter.cpp…
Compiling Kinect/Motor.cpp…
Compiling Kinect/MultiplexedFrameSource.cpp…
Compiling Kinect/Projector.cpp…
Compiling Kinect/Renderer.cpp…
Compiling Kinect/ShaderProjector.cpp…
Linking /Users/maybites/Arbeiten/02_code/others/Kinect-2.8/lib/libKinect.g++-3.2.8.dylib…
ld: warning: directory not found for option ‘-L/lib’
Compiling KinectUtil.cpp…
Linking bin/KinectUtil…
ld: warning: directory not found for option ‘-L/lib’
Compiling PauseTool.cpp…
Compiling TiePointTool.cpp…
Compiling LineTool.cpp…
Compiling DepthCorrectionTool.cpp…
Compiling GridTool.cpp…
Compiling PlaneTool.cpp…
Compiling PointPlaneTool.cpp…
Compiling CalibrationCheckTool.cpp…
Compiling RawKinectViewer.cpp…
Linking bin/RawKinectViewer…
ld: warning: directory not found for option ‘-L/lib’
Compiling AlignPoints.cpp…
Linking bin/AlignPoints…
ld: warning: directory not found for option ‘-L/lib’
Compiling KinectServer.cpp…
Compiling KinectServerMain.cpp…
Linking bin/KinectServer…
ld: warning: directory not found for option ‘-L/lib’
Compiling KinectViewer.cpp…
Linking bin/KinectViewer…
ld: warning: directory not found for option ‘-L/lib’
Compiling Vislets/KinectViewer.cpp…
Linking /Users/maybites/Arbeiten/02_code/others/Kinect-2.8/lib/Vislets/libKinectViewer.bundle…
ld: warning: directory not found for option ‘-L/lib’
Compiling Vislets/KinectRecorder.cpp…
Linking /Users/maybites/Arbeiten/02_code/others/Kinect-2.8/lib/Vislets/libKinectRecorder.bundle…
ld: warning: directory not found for option ‘-L/lib’
Compiling Vislets/KinectPlayer.cpp…
Linking /Users/maybites/Arbeiten/02_code/others/Kinect-2.8/lib/Vislets/libKinectPlayer.bundle…
ld: warning: directory not found for option ‘-L/lib’
Creating configuration makefile fragment…
Hi, thank you for the extensive information on the calibration procedure. Anyway, I have installed the program and run RawKinectViewer to calibrate the target, but the camera frame becomes static unless I move the mouse in the viewer window. Worse, when I tried to access KinectViewer -c 0, the new KinectViewer window opens but displays nothing. Please advise.
I’m wondering if this was answered!? I just got Vrui and Kinect running tonight on a macbook running Ubuntu 14.04, but noticed the RawKinectViewer pauses unless I wiggle the mouse or tap the keyboard…
Curious what causes this, and if there was a workaround?
Thank you.
Sorry, it should be noted that I got Vrui-3.2-002, and Kinect-2.8 running.
Try clicking the left mouse button in the application window first thing, and see if that fixes it. There is a problem with click-to-focus window managers in Vrui-3.1-002, and this might be it. It will be fixed in the next version.
Thank you so much okreylos for your prompt reply!
Unfortunately, I see no difference no matter how vigorously or quickly I click in the window first thing. I look forward to next version. 😉
I had a quick question. If I haven’t calibrated the video and depth streams, in the app KinectViewer, I should only expect to see Gray/Depth stream correct!?
I haven’t build a fancy DVD-Calibrator yet, but I was poking around eagerly to see all the toys.
Finally, do you have any plans or ETA on any KinectV2 support!?
What about skeleton channel support for V1 or V2?
( Just curious really )
Thank you for all your work! It’s really incredible and fun.
cheers!
Without calibration, the software will use default calibration parameters that kind of work for most Kinects. You can also download the Kinect’s factory calibration using the KinectUtil tool, which will be a lot better:
$ KinectUtil getCalib 0
I’m working on Kinect v2 support, but it’s moving slowly mostly due to low-level USB issues.
Hi Okreylos, I installed Kinect-2.8 under Fedora 20 (x86_64, fully updated), with libusbx, libusb, and 1473 Kinect hardware. I ran KinectUtil getCalib 0. When I run KinectViewer -c 0 (I have only one Kinect), the IR sensor is on, but I receive a black screen. However, when I run RawKinectViewer 0, I receive both images correctly. I have Vrui-3.1-002, Kinect-2.8, libusb 0.1.5-3.fc20, libusbx 1.0.18-1.fc20. Thanks for your help.
Sorry about that; it’s a problem that slipped through when I packaged 2.8. The issue is that my own calibration uses centimeters as unit for the camera-centric 3D space, whereas the factory calibration data uses millimeters. As a result, the Kinect’s image is initially out of view when starting KinectViewer. You just need to zoom out a bit, fastest by rolling the mousewheel away from you, until the image comes into view. Once you have it, you can save the current view via Vrui System -> View -> Save View in the main menu, and then load the saved view by passing -loadView on the command line of KinectViewer.
Thank you Okreylos, The getCalib 0 made the difference!
I’m still dealing with 2 issues, one you’re aware of, the ‘click-to-focus’ issue with any of the Kinect display windows. But also, an odd ‘glitchy’ intermittent display error that pops in for a frame every so often… It manifests as pink/green errors on the depth channel, and video static on the video channel. I’m willing to believe it’s the macbook hardware, but can’t verify.
I’m going to continue to study your projects. I’m keen to attempt to recreate your presence demo with the 3 Kinects, Oculus, and Hydra. I’m skeptical this hardware is the way forward, but I’ll try anyway. 😉
Thank you again for taking the time to respond.
Actually, I think I know what might be happening. I run a 2008 Macbook Pro myself, with Fedora 20 on it. I have used it to drive Kinects, and yes, there are weird glitches. I think what might be happening is that the Macbook puts its USB ports to sleep rather quickly if there is no user interaction.
I just remembered that I sometimes have to do the same thing, move the mouse cursor around rapidly, to get live updates. No real idea why that is exactly, but it’s only happened to me on my Macbook so far. Try adding -compress to KinectViewer’s command line before -c 0. That seems to help sometimes.
Interesting.
Yeah, it happens as soon as I stop interacting with it.
If I just hold down the down-arrow on the keyboard, it behaves properly.
I’ll see if I can just send a repeating input command to fool it into thinking we have user interaction. *shrug*
Thanks again.
Hi Okreylos,
I’m trying to use your Kinect Package to calibrate my XBox One Kinect sensor (hopefully this model is compatible). I installed the package and Vrui, but when I run RawKinectViewer, I am only getting the color image (on the right side of the screen). On the left side I am getting a blank image (completely white) that is slightly distorted in shape. I am not getting the depth image that appears in your video tutorials and many other websites. Any help would be greatly appreciated, thank you!
Kinect V2 (model 1595) is what I have
It should work, but the Kinect v2 is temperamental. Sometimes it’s enough to close RawKinectViewer and start it again for the depth stream to start working. If it doesn’t, try plugging your Kinect into different USB 3 ports, or even try an external USB 3 hub. If nothing works, there might be something wrong with your Kinect’s AC power / USB adapter.
Most importantly, though, the calibration procedure described in this article is unnecessary for Kinect v2 (and mostly obsolete for Kinect v1, as the built-in calibration parameters are good enough for most purposes). Once you get a depth stream displayed in RawKinectViewer, retrieve the built-in calibration data with
sudo KinectUtil getCalib 0
and go straight to
KinectViewer -c 0
to see the texture-mapped live 3D reconstruction.
Hi,
This program is very good, but there is an error with the calibration: with the Grid Tool, the depth lens distortion parameters are not saved in the .dat file, which causes a read error when you use it with CameraV2.cpp because 40 bytes are missing (5 double-precision floats of 64 bits each). Which variables need to be saved in GridTool.cpp so that KinectViewer runs correctly?
Thank you in advance for your answer.
Enaxadrel