PCL Developers blog

Sandia Code Sprint

Sandia Intelligent Systems and Robotics is sponsoring 3D algorithmic work on surface reconstruction and large-scale mapping for PCL.

../_images/pcl-srcs.png

For a complete list of all the present and past PCL code sprints please visit http://www.pointclouds.org/blog.

Developers

Click on any of the links below to find out more about our team of PCL developers who are participating in the sprint:

Latest 15 blog updates

Initial results on (new) SSD
Thursday, July 26, 2012
../_images/srcs.png

I have been able to reformulate the mathematical framework used in SSD for implementation in PCL, without having to explicitly allocate primal and dual graphs defined over the octree. Over the past few weeks, I have implemented this extended formulation and have done many experiments to figure out the range of regularization parameters involved in the optimization. The results look very good.

There are 3 independent tasks in the implementation; a minimal sketch of the first one follows the list:

  1. Given a point cloud with normals, construct an octree (Julius’ extended octree does this).
  2. Given an octree, estimate a scalar field per leaf (my extended SSD will do this).
  3. Given a scalar field defined over the octree leaves, either render the volume or extract isolevel zero as a polygonal mesh (the latter requires an implementation of the Dual Marching Cubes algorithm).
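
As a rough illustration of task 1, here is a minimal sketch that uses the stock pcl::octree API; Julius’ extended octree stores additional per-node data that is not shown here, and the resolution is only an example value:

    #include <pcl/point_types.h>
    #include <pcl/octree/octree_search.h>

    int main ()
    {
      pcl::PointCloud<pcl::PointNormal>::Ptr cloud (new pcl::PointCloud<pcl::PointNormal>);
      // ... fill cloud with oriented points, e.g. via pcl::io::loadPCDFile ...

      const float resolution = 0.01f;  // leaf size in the cloud's units (example value)
      pcl::octree::OctreePointCloudSearch<pcl::PointNormal> octree (resolution);
      octree.setInputCloud (cloud);
      octree.addPointsFromInputCloud ();

      // Tasks 2 and 3 (per-leaf scalar field estimation and Dual Marching
      // Cubes extraction) would then operate on the leaves of this octree.
      return 0;
    }
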
../_images/alternative_impl_results.png

SSD with the standard implementation (left) vs. the proposed implementation (right). Given that the results obtained with the new formulation are quite comparable to those obtained with the standard implementation, I will now proceed to incorporate this implementation into PCL.

Using Kinfu Large Scale to generate a textured mesh
Friday, July 06, 2012
../_images/srcs1.png

We added a tutorial that describes the KinFu Large Scale pipeline for generating a textured mesh. We hope that this tutorial is helpful and encourages people to share their experience with the application. We are also interested in hearing your feedback and impressions on KinFu Large Scale.

The tutorial can be found in the Tutorials section of pointclouds.org.

Francisco and Raphael

Intermediate results
Friday, June 29, 2012
../_images/srcs2.png

I have graduated from my university and received my master’s degree, so now I have enough time to work on the project. I have worked out and implemented in MATLAB surface editing based on Laplacian coordinates. This transformation is the basis of the method that I have to implement in the Code Sprint. The implemented algorithm, Laplacian mesh editing, consists of the following steps (a small solver sketch follows the list):

  1. Compute the Laplacian operator of the mesh from the information about the relative positions of the surface points.
  2. Fix the points whose positions should not change - the “static anchors”.
  3. Choose the points whose positions should change and specify the offsets from their origins - the “handle anchors”.
  4. From the available information, construct the normal equations, setting a big weight for the static anchors and a little weight for the handle anchors.
  5. Calculate the new coordinates of the surface by the method of least squares.
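
For illustration, here is a minimal dense sketch of steps 4-5 in C++ with Eigen (my prototype is in MATLAB; all names here are illustrative, and a real implementation would use sparse matrices):

    #include <Eigen/Dense>
    #include <vector>

    // Stack the Laplacian system and the weighted anchor constraints,
    // then solve the normal equations in the least-squares sense.
    Eigen::MatrixXd
    solveLaplacianEditing (const Eigen::MatrixXd& L,     // n x n mesh Laplacian (step 1)
                           const Eigen::MatrixXd& delta, // n x 3 Laplacian coordinates, delta = L * V
                           const std::vector<int>& anchor_idx,   // static + handle anchor indices (steps 2-3)
                           const Eigen::MatrixXd& anchor_pos,    // a x 3 target positions
                           const Eigen::VectorXd& anchor_weight) // per-anchor weights (step 4)
    {
      const int n = static_cast<int> (L.rows ());
      const int a = static_cast<int> (anchor_idx.size ());

      // Build the stacked system [L; W] V' = [delta; W * anchor_pos].
      Eigen::MatrixXd A = Eigen::MatrixXd::Zero (n + a, n);
      Eigen::MatrixXd b = Eigen::MatrixXd::Zero (n + a, 3);
      A.topRows (n) = L;
      b.topRows (n) = delta;
      for (int i = 0; i < a; ++i)
      {
        A (n + i, anchor_idx[i]) = anchor_weight (i);
        b.row (n + i) = anchor_weight (i) * anchor_pos.row (i);
      }

      // Normal equations (A^T A) V' = A^T b, solved via Cholesky (step 5).
      return (A.transpose () * A).ldlt ().solve (A.transpose () * b);
    }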

The results of the program are presented below.

../_images/Man.png ../_images/Circ.png

Now I will proceed to implement the algorithm from the article “Template Deformation for Point Cloud Fitting”.

Kinfu Large Scale available
Monday, June 25, 2012
../_images/srcs1.png

We have pushed the first implementation of KinFu Large Scale to the PCL trunk. It can be found under $PCL-TRUNK/gpu/kinfu_large_scale.

We will make a more complete tutorial for the application later, but for now here are some recommendations for those who want to try it out:

  1. Make smooth movements at the time of shifting in order to avoid losing the camera track.

  2. Use it in an environment with enough features to be detected. Remember that ICP gets lost on co-planar surfaces, or on big planar surfaces such as walls, floors, and roofs.

  3. When you are ready to finish execution, press ‘L’ to extract the world model and shift once more. Kinfu will stop at this point and save the world model as a point cloud named world.pcd. The generated point cloud is a TSDF cloud.

../_images/09.png

To obtain a mesh from the generated world model (world.pcd), run ./bin/process_kinfuLS_output world.pcd. This should generate a set of meshes (.ply), which can then be merged in MeshLab or a similar tool.

../_images/10.png

Francisco and Raphael

Kinfu Large Scale
Monday, June 18, 2012
../_images/srcs1.png

We have been working on the implementation of KinFu’s extension to large areas for several weeks now. This week we will start integrating the code into the latest trunk. However, for now it will be pushed as a separate module named KinFu Large Scale. The reason behind this is to keep the functionality of the current KinFu intact, while at the same time making the large-scale capabilities available to those interested in exploring them.

The diagrams below show the distribution of classes in KinFu Large Scale, as well as a flowchart of the application’s behaviour.

../_images/07.png ../_images/08.png

We will post an update on the integration status by the end of this week.

Francisco and Raphael

Laplacian mesh editing: MATLAB implementation
Monday, June 04, 2012
../_images/srcs2.png

I have hardly worked on the project during the last two weeks: I was preparing the report on my master’s thesis, which is now almost complete. Resuming work on the project, I will now work on the MATLAB implementation of the surface deformation.

Volume shifting with Kinfu
Thursday, May 24, 2012
../_images/srcs1.png

Hello all,

This week, we focus on accumulating the world as we shift our volume around. The next video shows a point cloud we generated with multiple shifts. The TSDF data that is shifted out of the cube is compressed before being sent to the CPU; this decreases the bandwidth required to transmit the data.

Only the vertices close to the zero-crossings (the isosurface) are saved; they include the TSDF value for later use (raycasting, marching cubes, reloading to the GPU). In the video, the two-colored point cloud represents positive (pink) and negative (blue) TSDF values.
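
As an illustration of this idea (a sketch under assumed names, not the actual KinFu code; the band width is an example value), keeping only the near-surface voxels could look like this, with the TSDF value carried in the intensity channel:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <cmath>

    // Keep only the voxels whose TSDF value lies close to the
    // zero-crossing; the intensity channel holds the TSDF value.
    pcl::PointCloud<pcl::PointXYZI>::Ptr
    extractNearSurface (const pcl::PointCloud<pcl::PointXYZI>& tsdf_voxels,
                        float band = 0.03f)  // truncation band (example value)
    {
      pcl::PointCloud<pcl::PointXYZI>::Ptr out (new pcl::PointCloud<pcl::PointXYZI>);
      for (const auto& v : tsdf_voxels.points)
        if (std::abs (v.intensity) < band)
          out->points.push_back (v);
      out->width = static_cast<uint32_t> (out->points.size ());
      out->height = 1;
      return out;
    }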

We are now implementing a class that will manage the observed world. Each time the volume is shifted, the new observations will be sent to the world manager, which will update the known world and allow quick access to parts of it.
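
A hypothetical sketch of what such an interface could look like is given below; the class and method names are illustrative and do not reflect the final PCL code:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <Eigen/Core>

    // Hypothetical world-manager interface (illustrative only).
    class WorldManager
    {
    public:
      using PointT = pcl::PointXYZI;  // intensity channel stores the TSDF value
      using CloudPtr = pcl::PointCloud<PointT>::Ptr;

      // Merge a slice that was just shifted out of the GPU volume
      // into the accumulated world model.
      void
      addSlice (const CloudPtr& slice)
      {
        *world_ += *slice;
      }

      // Return the part of the known world inside an axis-aligned box,
      // e.g. to reload it into the TSDF volume after a shift.
      CloudPtr
      getRegion (const Eigen::Vector3f& min_pt, const Eigen::Vector3f& max_pt) const
      {
        CloudPtr out (new pcl::PointCloud<PointT>);
        for (const auto& p : world_->points)
          if ((p.getVector3fMap ().array () >= min_pt.array ()).all () &&
              (p.getVector3fMap ().array () <= max_pt.array ()).all ())
            out->points.push_back (p);
        return out;
      }

    private:
      CloudPtr world_ {new pcl::PointCloud<PointT>};
    };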

../_images/06.png

The following is a large point cloud generated and saved with our current implementation.

../_images/05.png

Francisco and Raphael

Laplacian mesh editing: MATLAB implementation
Sunday, May 13, 2012
../_images/srcs2.png

Now I am trying to implement an algorithm for 3D mesh editing based on the Laplacian matrix. I decided to do it in MATLAB first to verify my idea. If it works, I will implement it in C++.

Volume shifting with Kinfu
Friday, May 11, 2012
../_images/srcs1.png

After a few weeks wrapping our minds around KinFu and CUDA, we have included the shifting functionality while scanning. Using the cyclic buffer technique (introduced in our previous post), we are able to shift the cube in the world.

Since we shift in slices, some information about the scene is kept in memory. This information is used to keep track of the camera pose even after the cube has been shifted.
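
For intuition, here is a minimal sketch of the cyclic-buffer indexing idea (illustrative only, not the actual KinFu code):

    // Shifting the cube only moves a logical origin; voxel coordinates
    // wrap modulo the buffer size, so voxel data never has to be
    // physically moved in GPU memory.
    struct CyclicTsdfBuffer
    {
      int size = 512;                                // voxels per axis
      int origin_x = 0, origin_y = 0, origin_z = 0;  // logical origin (voxels)

      // Map a voxel coordinate of the current cube to a physical index.
      int physical (int v, int origin) const
      {
        return (v + origin) % size;
      }

      // Shifting updates only the origin; the slice that wraps around is
      // the one that must be extracted and cleared.
      void shift (int dx, int dy, int dz)
      {
        origin_x = (origin_x + dx % size + size) % size;
        origin_y = (origin_y + dy % size + size) % size;
        origin_z = (origin_z + dz % size + size) % size;
      }
    };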

At this point, the data that is being ‘shifted out’ is currently lost, because we are clearing the TSDF volume slice to make space for the new information.

The next step is to extract the information from the TSDF volume before clearing it. This will allow us to compress it and save it to disk, or to a world model being saved to GPU/CPU memory.

We have some ideas on how to perform this compression and indexing, and we will explore them in the coming days.

At this point we are cleaning the code and adding useful comments. We want to push this to the trunk soon.

This video shows the shifting for a cube with a volume size of 3 meters. The grid resolution is 512 voxels per axis.

This video shows the shifting for a cube with a volume size of 1 meter. The grid resolution is 512 voxels per axis.

Francisco & Raphael

Reimplementing SSD
Monday, May 07, 2012
../_images/srcs.png

I am currently reimplementing SSD using PCL. It looks like I need additional data structures defined over the octree, such as primal and dual graphs. I am investigating whether the current octree version supports those graphs; otherwise, I will implement them. In terms of timing, I plan to deliver the software (and a report for the sponsors) by June 18.

Update
Friday, April 27, 2012
../_images/srcs1.png

It’s been some time since our last post, and we have been learning a whole lot about CUDA and Kinfu itself.

Two big items have been completed so far:

1. We analyzed the application from a complete perspective, trying to determine its strong points and its areas for improvement. What we found when analyzing Kinfu looks somewhat like this figure:

../_images/03.png

The first impression is that Kinfu has clearly defined modules, which are reflected in the code distribution among the different files. However, the application requires that the functionality and information shared between these modules be tightly coupled. It can be seen in several parts of the code that parameters just keep cascading down the calls all the way from kinfu_app.cpp (the highest level) to the GPU kernel calls (e.g. the raycasting or integration operations). The main reason for this constant copying of parameters and the long parameter lists is precisely that the modules are separated from each other.

This might sound basic to experienced CUDA users, but we find it worth mentioning here, as this was one of our main “aha moments”.

For example, if one were to declare a variable in internal.h - a header file included in all modules (via device.hpp) - what one would obtain is a copy of this variable for every CUDA module, instead of a single one accessible from all the modules. Once again, this is the result of having the modules compiled independently.
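
A minimal sketch of the situation (the variable name, value, and file names are made up for illustration; this is not the actual content of internal.h):

    // internal.h - included by every .cu module (via device.hpp)
    __device__ float g_trunc_dist = 0.03f;  // hypothetical variable

    // Because each .cu file is compiled as an independent CUDA module,
    // module_a.cu and module_b.cu each end up with their own, unrelated
    // copy of g_trunc_dist rather than sharing a single one.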

After discussing with Anatoly, it has been decided that in the mid-term all the internal functionality of Kinfu will be consolidated into a single TsdfVolume class, which will contain all the operations for integration, ICP, raycasting, and memory handling. This will result in more readable code and will avoid long parameter lists between operations, while keeping the real-time performance of the current implementation. In other words, at a higher level the code will be clearer and more concise, while at the low level it will perform the same.

2. We have been working on a solution to tackle the scalability limitations of Kinfu. The expected behavior can be described as follows: while the scene is being scanned, the user may approach the borders of the current cube. At that moment, part of the existing information about the cube must be compressed and stored in GPU memory, host RAM, or on the HDD. For now, the latter two are out of scope.

The cube that we are reconstructing is shifted, but it partially overlaps the previous cube. In other words, we will see a scene that is partially empty but still contains part of the information from the previous cube. The origin of the TsdfVolume will be shifted as well, depending on where we approached the border of the initial scene. We believe that keeping this overlapping information will help to estimate the pose of the camera once the new cube is loaded. This shift is handled via a circular 3D buffer implementation in the TsdfVolume class.

For clarity, we show this process in the figure below.

../_images/04.png

Francisco & Raphael

Laplacian matrix
Wednesday, April 25, 2012
../_images/srcs2.png

Last week, I wrote a program for calculating the Laplacian matrix with uniform and cotangent weights (described in the article “Laplacian Mesh Optimization”). Now I want to write a program for smoothing based on the Laplacian matrix.
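
For reference, here is a minimal C++/Eigen sketch of the uniform-weight case, L = I - D^{-1} A, where A is the vertex adjacency of the triangle mesh and D the diagonal matrix of vertex degrees (an illustration only, separate from my own code):

    #include <Eigen/Sparse>
    #include <array>
    #include <set>
    #include <vector>

    Eigen::SparseMatrix<double>
    uniformLaplacian (int n_vertices, const std::vector<std::array<int, 3> >& triangles)
    {
      // Collect the one-ring neighbours of each vertex from the triangles.
      std::vector<std::set<int> > nbrs (n_vertices);
      for (const auto& t : triangles)
        for (int i = 0; i < 3; ++i)
        {
          nbrs[t[i]].insert (t[(i + 1) % 3]);
          nbrs[t[i]].insert (t[(i + 2) % 3]);
        }

      // Assemble L = I - D^{-1} A as a sparse matrix.
      std::vector<Eigen::Triplet<double> > trip;
      for (int i = 0; i < n_vertices; ++i)
      {
        trip.emplace_back (i, i, 1.0);
        for (int j : nbrs[i])
          trip.emplace_back (i, j, -1.0 / static_cast<double> (nbrs[i].size ()));
      }
      Eigen::SparseMatrix<double> L (n_vertices, n_vertices);
      L.setFromTriplets (trip.begin (), trip.end ());
      return L;
    }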

Porting SSD
Sunday, April 22, 2012
../_images/srcs.png

On Monday (4/16), I had a short discussion with my mentor about how to improve the surface reconstruction code in PCL. We started porting our SSD software, which is based on the work “SSD: Smooth Signed Distance Surface Reconstruction”. The code hasn’t been pushed to the trunk yet. Once it is ready, it will provide a base for us to make further improvements. I am going to make another blog post soon to share our improvement plans.

Description of the algorithm
Monday, April 16, 2012
../_images/srcs2.png

The steps of the chosen algorithm:

../_images/algorithm.jpg

User-program interaction:

  1. The user must choose the most appropriate template for the input point cloud.
  2. The user must identify and mark pairs of corresponding points on the template and the point data, and define a local frame for every marked point (Fig. 1 (a, b)).

The program:

  1. From the selected correspondences, we compute the initial deformation of the template: we compute the Laplacian coordinates of the template and estimate local rotations for every pair of corresponding points.
  2. Apply the initial deformation (Fig. 1 (c)).
  3. We estimate a global scaling factor for the template, which is applied to the Laplacian coordinates; this accounts for the fact that the template and the input data may be (and generally are) scaled differently, which may distort the resulting shape in an unacceptable way. Apply the new deformation (Fig. 1 (d)).
  4. An iterative process moves the template closer to the data points, guided by local correspondences that are established from simple heuristics; a sketch of this matching follows the list. This local matching is motivated by iterative closest point (ICP) algorithms for finding (rigid) transformations for shape registration (Fig. 1 (e-g)).
  5. We improve the remaining regions of the deformed template for which no counterparts exist in the data (Fig. 1 (h)).
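
As a rough illustration of the local matching in step 4 (not the paper’s exact heuristic; the function name is made up), each template vertex can be paired with its nearest input point using a kd-tree from PCL:

    #include <pcl/kdtree/kdtree_flann.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <vector>

    // For every template vertex, find the index of the closest input
    // point; these pairs serve as soft correspondences for the next
    // deformation iteration.
    std::vector<int>
    closestPointCorrespondences (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& template_vertices,
                                 const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& input_cloud)
    {
      pcl::KdTreeFLANN<pcl::PointXYZ> tree;
      tree.setInputCloud (input_cloud);

      std::vector<int> correspondence (template_vertices->size ());
      std::vector<int> idx (1);
      std::vector<float> sq_dist (1);
      for (std::size_t i = 0; i < template_vertices->size (); ++i)
      {
        tree.nearestKSearch (template_vertices->points[i], 1, idx, sq_dist);
        correspondence[i] = idx[0];
      }
      return correspondence;
    }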

In the first phase of implementing the described algorithm, I will write code to calculate the initial approximation and estimate the Laplacian coordinates.

The chosen algorithm
Saturday, April 07, 2012
../_images/srcs2.png

This Wednesday we chose the algorithm for implementation: “Template Deformation for Point Cloud Fitting”. I think it will be the first iteration towards a more general algorithm: template matching using 2D + 3D.

Now I am researching the algorithm and thinking about its implementation. I will have a consultation with my mentor next week. After that, I will describe the basic steps of the implementation here and begin the work.