PCL Developers blog

Francisco Heredia and Raphael Favier

email: f.j.heredia.soriano@tue.nl, raphael.favier@gmail.com
project: Kinect Fusion extensions to large scale environments
mentor: Anatoly Baksheev

About us

Raphael graduated from a 2-year post-master degree in Software Design (OOTI) in October 2010. Since then, he has been employed as a researcher / software architect in the Video Coding and Architectures research group, working on photorealistic 3D reconstruction.

Francisco is currently carrying out his final 9-month OOTI internship within the VCA group. He spent the last 3 months analyzing the structure of KinFu and is excited to extend it to larger environments.

Both Francisco and Raphael are easy-going people, with a great interest in science and new technologies.

Recent status updates

Using Kinfu Large Scale to generate a textured mesh
Friday, July 06, 2012
../../_images/srcs1.png

We added a tutorial that describes the KinFu Large Scale pipeline for generating a textured mesh. We hope the tutorial is helpful and encourages people to share their experience with the application. We are also interested in hearing your feedback and impressions on KinFu Large Scale.

The tutorial can be found in the Tutorials section of pointclouds.org.

Francisco and Raphael

Kinfu Large Scale available
Monday, June 25, 2012

We have pushed the first implementation of KinFu LargeScale to PCL trunk. It can be found under $PCL-TRUNK/gpu/kinfu_large_scale.

A more complete tutorial for the application will follow. For now, here are some recommendations for those who want to try it out:

Make smooth movements at the time of shifting in order to avoid losing the camera track.

Use it in an environment with enough features to be detected. Remember that ICP loses track on large planar surfaces such as walls, floors, and ceilings.

When you are ready to finish execution, press ‘L’ to extract the world model and shift once more. Kinfu will stop at this point and save the world model as a point cloud named world.pcd. The generated point cloud is a TSDF cloud.

../../_images/09.png

In order to obtain a mesh from the generated world model (world.pcd), run ./bin/process_kinfuLS_output world.pcd. This generates a set of meshes (.ply) which can then be merged in MeshLab or a similar tool.

../../_images/10.png

Francisco and Raphael

Kinfu Large Scale
Monday, June 18, 2012

We have been working on the implementation of KinFu's extension to large areas for several weeks now. This week we will start integrating the code into the latest trunk. For now, however, it will be pushed as a separate module named KinFu Large Scale. The reason behind this is to keep the functionality of the current KinFu intact, while at the same time making the large-scale capabilities available to those interested in exploring them.

The diagrams below show the distribution of classes in KinFu Large Scale, as well as a flowchart of the application's behaviour.

../../_images/07.png ../../_images/08.png

We will post an update on the integration status by the end of this week.

Francisco and Raphael

Volume shifting with Kinfu
Thursday, May 24, 2012

Hello all,

This week, we focused on accumulating the world as we shift our volume around. The next video shows a point cloud we generated with multiple shifts. The TSDF data that is shifted out of the cube is compressed before being sent to the CPU, which decreases the bandwidth required to transmit it.

Only the vertices close to the zero-crossing (the isosurface) are saved. Each saved vertex includes its TSDF value for later use (raycasting, marching cubes, reloading to the GPU). In the video, the two-colored point cloud represents positive (pink) and negative (blue) TSDF values.

We are now implementing a class that will manage the observed world. Each time the volume is shifted, the new observations will be sent to the world manager, which will update the known world and allow quick access to parts of it.

../../_images/06.png
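To give an idea of what we have in mind, here is a minimal sketch of such a world manager. The class name and interface are our illustration for this post, not the final API: points shifted out of the cube are handed to the manager, which indexes them by a coarse grid cell so that the part of the world around any position can be fetched quickly.

```cpp
#include <array>
#include <cmath>
#include <map>
#include <vector>

// One observed point with its TSDF value.
struct WorldPoint { float x, y, z, tsdf; };

class WorldManager
{
public:
  explicit WorldManager(float cellSize) : cellSize_(cellSize) {}

  // Called after each shift with the observations leaving the GPU volume.
  void addObservations(const std::vector<WorldPoint>& pts)
  {
    for (const WorldPoint& p : pts)
      cells_[keyOf(p.x, p.y, p.z)].push_back(p);
  }

  // Quick access to the stored points in the cell containing (x, y, z).
  const std::vector<WorldPoint>& around(float x, float y, float z) const
  {
    static const std::vector<WorldPoint> empty;
    auto it = cells_.find(keyOf(x, y, z));
    return it == cells_.end() ? empty : it->second;
  }

private:
  using Key = std::array<int, 3>;  // coarse cell coordinates

  Key keyOf(float x, float y, float z) const
  {
    return { (int)std::floor(x / cellSize_),
             (int)std::floor(y / cellSize_),
             (int)std::floor(z / cellSize_) };
  }

  float cellSize_;
  std::map<Key, std::vector<WorldPoint>> cells_;
};
```

Indexing by coarse cells means that reloading the region around the current cube position only touches a handful of cells instead of the whole accumulated world.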

The following is a large point cloud generated and saved with our current implementation.

../../_images/05.png

Francisco and Raphael

Volume shifting with Kinfu
Friday, May 11, 2012

After a few weeks wrapping our minds around KinFu and CUDA, we have added shifting functionality during scanning. Using the cyclic buffer technique (introduced in our previous post), we are able to shift the cube in the world.

Since we shift in slices, some information about the scene is kept in memory. This information is used to keep track of the camera pose even after the cube has been shifted.

At this point, the data being ‘shifted out’ is lost, because we clear the TSDF volume slice to make room for the new information.

The next step is to extract the information from the TSDF volume before clearing it. This will allow us to compress it and save it to disk, or to add it to a world model kept in GPU/CPU memory.

We have some ideas on how to perform this compression and indexing, and we will explore them in the coming days.

We are also cleaning the code and adding useful comments. We want to push this to the trunk soon.

This video shows the shifting for a cube with volume size of 3 meters. The grid resolution is 512 voxels per axis.

This video shows the shifting for a cube with volume size of 1 meter. The grid resolution is 512 voxels per axis.

Francisco & Raphael