As of today I am beginning a new project, “Object Tracking in Real-time 3D Reconstructions”. My goal is to write blog posts about the project as it develops. The project expands on previous work done for my bachelor thesis [2].


Figure 1: The original Kinect Fusion video from Microsoft Research

Figure 2: Demonstration of our finger paint application

What’s Next

The Kinect Fusion paper [1] describes a scanning algorithm that integrates scanned data over time into a TSDF volume in order to improve quality. It does so in both a foreground volume and a background volume. In our thesis this was not the case for the foreground volume; instead, a direct insert was used for each frame. To improve quality in the foreground volume, a separate ICP alignment must be carried out. An example of the improved quality for the foreground volume can be seen in Figure 3.
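At its core, the TSDF integration described in [1] is a per-voxel weighted running average of truncated signed-distance samples, which is what smooths out sensor noise over time. A minimal sketch of that update (function and parameter names are my own; `max_weight` caps how much history a voxel accumulates):

```python
def integrate_tsdf(tsdf, weight, new_sdf, new_weight=1.0, max_weight=64.0):
    """Fuse a new truncated signed-distance sample into one voxel.

    Keeps a running weighted average of all observations, so noise in
    individual frames averages out. Capping the weight keeps the volume
    responsive if the scene changes later.
    """
    fused_tsdf = (tsdf * weight + new_sdf * new_weight) / (weight + new_weight)
    fused_weight = min(weight + new_weight, max_weight)
    return fused_tsdf, fused_weight
```

A direct insert, by contrast, simply overwrites the voxel with `new_sdf` every frame, so no noise averaging takes place.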

Foreground ICP Alignment

ICP alignment in the foreground volume has some interesting additional applications. Not only can the quality be improved, but it is also possible to track the orientation and position of a foreground object relative to the camera (Figure 4).
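A full ICP implementation, such as the one provided by the Point Cloud Library [3], estimates a complete rigid transform (rotation and translation). As an illustrative sketch of the idea only, here is a translation-only variant that repeatedly matches each source point to its nearest target point and shifts by the mean residual; all names are my own:

```python
def nearest(p, pts):
    """Return the point in pts closest to p (brute-force search)."""
    return min(pts, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

def icp_translate(source, target, iterations=10):
    """Translation-only ICP sketch.

    Each iteration: match every source point to its nearest target
    point, compute the mean offset, and shift the source by it.
    Returns the accumulated translation from source to target.
    """
    src = [list(p) for p in source]
    total = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        matches = [nearest(p, target) for p in src]
        offset = [sum(m[d] - p[d] for p, m in zip(src, matches)) / len(src)
                  for d in range(3)]
        src = [[p[d] + offset[d] for d in range(3)] for p in src]
        total = [t + o for t, o in zip(total, offset)]
    return total
```

A real foreground tracker would also estimate rotation (e.g. via the SVD-based point-to-point solution or point-to-plane ICP) and use the previous frame's pose as the initial guess, but the match-then-minimize loop is the same.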

Project Goal

The main goal of this project is an implementation of object position and rotation tracking. First, however, quality improvement of the foreground volume should be implemented.

Figure 3: Improvement in scanning quality when using ICP on the foreground volume. Notice how the foreground scan in the bottom row is of higher quality. Image from [1].

Figure 4: In the original Kinect Fusion video it is demonstrated how a real-world object is aligned with a scan of itself. By doing so, the orientation and position of the object are known.

References

[1]: Kinect Fusion: https://research.microsoft.com/en-us/projects/surfacerecon/

[2]: Jeppe U. Walther & Søren V. Poulsen: Human Interaction on Real-time Reconstructed Surfaces

[3]: Point Cloud Library: http://pointclouds.org/