PCL Developers blog

All blog posts

Initial results on (new) SSD
Thursday, July 26, 2012
../_images/srcs.png

I have been able to reformulate the mathematical framework used in SSD for implementation in PCL, without having to explicitly allocate primal and dual graphs defined over the octree. Over the past few weeks, I have implemented this extended formulation and have done many experiments to figure out the range of regularization parameters involved in the optimization. The results look very good.

There are three independent tasks in the implementation:

  1. Given a point cloud with normals, construct an octree (Julius’ extended octree does this; a basic sketch follows this list).
  2. Given an octree, estimate a scalar field per leaf (my extended SSD will do this).
  3. Given a scalar field defined over the octree leaves, either render the volume or extract the zero isolevel as a polygonal mesh (the latter requires an implementation of the Dual Marching Cubes algorithm).
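
As a minimal sketch of task 1, here is how an octree could be built over an oriented point cloud using PCL's stock octree module (this uses the standard pcl::octree API rather than Julius' extension; the file name and resolution are placeholders)::

  #include <pcl/io/pcd_io.h>
  #include <pcl/point_types.h>
  #include <pcl/octree/octree_pointcloud.h>

  int main()
  {
    // Load an oriented point cloud (points with normals).
    pcl::PointCloud<pcl::PointNormal>::Ptr cloud(new pcl::PointCloud<pcl::PointNormal>);
    if (pcl::io::loadPCDFile("input.pcd", *cloud) < 0)
      return -1;

    // Build the octree; the leaf resolution (in meters) is a placeholder.
    const double resolution = 0.01;
    pcl::octree::OctreePointCloud<pcl::PointNormal> octree(resolution);
    octree.setInputCloud(cloud);
    octree.addPointsFromInputCloud();
    return 0;
  }
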
../_images/alternative_impl_results.png

SSD with the standard implementation (left) vs. the proposed implementation (right). Given that the results obtained with the new formulation are quite comparable to those obtained with the standard implementation, I will now proceed to incorporate this implementation into PCL.

Using Kinfu Large Scale to generate a textured mesh
Friday, July 06, 2012
../_images/srcs1.png

We added a tutorial that describes the pipeline of KinFu Large Scale for generating a textured mesh. We hope this tutorial is helpful and encourages people to share their experience with the application. We are also interested in hearing your feedback and impressions of KinFu Large Scale.

The tutorial can be found in the Tutorials section of pointclouds.org.

Francisco and Raphael

Intermediate results
Friday, June 29, 2012
../_images/srcs2.png

I have graduated from my university. I now have a master’s degree and enough time to work on the project. I have worked out and implemented Laplacian-coordinate-based surface editing in Matlab. This representation is the basis of the method I have to implement in the Code Sprint. The implemented algorithm, Laplacian Mesh Editing, consists of the following steps:

  1. Compute the Laplacian operator of the mesh from information about the relative positions of the surface points.
  2. Fix the points whose positions should not change (“static anchors”).
  3. Choose the points whose positions should change and specify their displacements (“handle anchors”).
  4. Construct the normal equations from this information, assigning a large weight to the static anchors and a smaller weight to the handle anchors.
  5. Compute the new surface coordinates by least squares (a minimal sketch of steps 4-5 follows this list).
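
The least-squares core of steps 4-5 can be sketched with Eigen as follows (an illustrative translation of the Matlab idea, not the actual code; all names are mine)::

  #include <Eigen/Dense>
  #include <Eigen/Sparse>
  #include <vector>

  using SpMat = Eigen::SparseMatrix<double>;

  // Solve for new vertex positions X such that L*X stays close to the
  // original Laplacian coordinates while weighted anchor rows pull the
  // selected vertices toward their target positions.
  Eigen::MatrixXd deformMesh(const SpMat& L,                      // n x n mesh Laplacian
                             const Eigen::MatrixXd& V,            // n x 3 original vertices
                             const std::vector<int>& anchors,     // constrained vertex ids
                             const Eigen::MatrixXd& anchorPos,    // one target row per anchor
                             const std::vector<double>& weights)  // one weight per anchor
  {
    const int n = static_cast<int>(V.rows());
    const int m = static_cast<int>(anchors.size());

    // Laplacian coordinates of the original surface.
    Eigen::MatrixXd delta = L * V;

    // Stack L with one weighted constraint row per anchor.
    std::vector<Eigen::Triplet<double>> trips;
    for (int k = 0; k < L.outerSize(); ++k)
      for (SpMat::InnerIterator it(L, k); it; ++it)
        trips.emplace_back(it.row(), it.col(), it.value());
    for (int i = 0; i < m; ++i)
      trips.emplace_back(n + i, anchors[i], weights[i]);
    SpMat A(n + m, n);
    A.setFromTriplets(trips.begin(), trips.end());

    // Right-hand side: Laplacian coordinates, then weighted anchor targets.
    Eigen::MatrixXd b(n + m, 3);
    b.topRows(n) = delta;
    for (int i = 0; i < m; ++i)
      b.row(n + i) = weights[i] * anchorPos.row(i);

    // Normal equations A^T A x = A^T b, solved for all three coordinates.
    SpMat AtA = SpMat(A.transpose()) * A;
    Eigen::SimplicialLDLT<SpMat> solver(AtA);
    Eigen::MatrixXd rhs = A.transpose() * b;
    return solver.solve(rhs);
  }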

The results of the program are presented below.

../_images/Man.png ../_images/Circ.png

Next, I will proceed to implement the algorithm from the article “Template Deformation for Point Cloud Fitting”.

Kinfu Large Scale available
Monday, June 25, 2012
../_images/srcs1.png

We have pushed the first implementation of KinFu Large Scale to the PCL trunk. It can be found under $PCL-TRUNK/gpu/kinfu_large_scale.

We will write a more complete tutorial for the application later. For now, here are some recommendations for those who want to try it out:

Make smooth movements while shifting in order to avoid losing camera tracking.

Use it in an environment with enough features to be detected. Remember that ICP loses track on co-planar or large planar surfaces such as walls, floors, and ceilings.

When you are ready to finish execution, press ‘L’ to extract the world model and shift once more. Kinfu will stop at this point and save the world model as a point cloud named world.pcd. The generated point cloud is a TSDF cloud.

../_images/09.png

To obtain a mesh from the generated world model (world.pcd), run "./bin/process_kinfuLS_output world.pcd". This should generate a set of meshes (.ply files), which can then be merged in MeshLab or a similar tool.
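
To inspect the generated world.pcd programmatically, here is a minimal sketch (it assumes the per-point TSDF value is stored in the intensity channel, which is our reading of the output format; check the sources to be sure)::

  #include <pcl/io/pcd_io.h>
  #include <pcl/point_types.h>
  #include <cstdio>

  int main()
  {
    // Load the TSDF world cloud written by KinFu Large Scale.
    pcl::PointCloud<pcl::PointXYZI>::Ptr world(new pcl::PointCloud<pcl::PointXYZI>);
    if (pcl::io::loadPCDFile("world.pcd", *world) < 0)
      return -1;

    std::printf("Loaded %zu TSDF points\n", world->size());
    return 0;
  }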

../_images/10.png

Francisco and Raphael

Kinfu Large Scale
Monday, June 18, 2012
../_images/srcs1.png

We have been working on the implementation of KinFu’s extension to large areas for several weeks now. This week we will start integrating the code into the latest trunk. However, for now it will be pushed as a separate module named KinFu Large Scale. The reason is to preserve the functionality of the current KinFu while making the large-scale capabilities available to those interested in exploring them.

The diagrams below show the distribution of classes in KinFu Large Scale, as well as a flowchart of the behaviour in the application.

../_images/07.png ../_images/08.png

We will post an update on the integration status by the end of this week.

Francisco and Raphael

Laplacian mesh editing: MATLAB realization
Monday, June 04, 2012
../_images/srcs2.png

I have hardly worked on the project during the last two weeks, as I was writing the report on my master’s thesis; it is almost complete. I am now resuming work on the project: next I will work on a Matlab implementation of surface deformation.

Volume shifting with Kinfu
Thursday, May 24, 2012
../_images/srcs1.png

Hello all,

This week, we focused on accumulating the world as we shift our volume around. The next video shows a point cloud we generated with multiple shifts. The TSDF data that is shifted out of the cube is compressed before being sent to the CPU; this decreases the required bandwidth when transmitting the data.

Only vertices close to the zero-crossing (the isosurface) are saved. Each saved vertex includes its TSDF value for later use (raycasting, marching cubes, reloading to the GPU). In the video, the two-colored point cloud shows positive (pink) and negative (blue) TSDF values.
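
As an illustration of how such a two-colored cloud could be produced (a sketch assuming the TSDF value travels in the intensity channel, not the actual KinFu code)::

  #include <pcl/point_cloud.h>
  #include <pcl/point_types.h>

  // Split a TSDF cloud into positive and negative parts for visualization,
  // e.g. to render them pink and blue as in the video.
  void splitBySign(const pcl::PointCloud<pcl::PointXYZI>& tsdf,
                   pcl::PointCloud<pcl::PointXYZI>& positive,
                   pcl::PointCloud<pcl::PointXYZI>& negative)
  {
    for (const auto& p : tsdf)
      (p.intensity >= 0.f ? positive : negative).push_back(p);
  }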

We are now implementing a class that will manage the observed world. Each time the volume is shifted, new observations will be sent to the world manager, which will update the known world and allow quick access to parts of it.

../_images/06.png

The following is a large point cloud generated and saved with our current implementation.

../_images/05.png

Francisco and Raphael

Laplacian mesh editing: MATLAB realization
Sunday, May 13, 2012
../_images/srcs2.png

I am now trying to implement an algorithm for 3D mesh editing based on the Laplacian matrix. I decided to do it in MATLAB first to verify my idea. If it works, I will implement it in C++.

Volume shifting with Kinfu
Friday, May 11, 2012
../_images/srcs1.png

After a few weeks of wrapping our minds around KinFu and CUDA, we have added the ability to shift the volume while scanning. Using the cyclic buffer technique (introduced in our previous post), we are able to shift the cube in the world.
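
The core idea of the cyclic buffer can be sketched as follows (illustrative code, not the actual CUDA implementation): voxel coordinates are wrapped modulo the grid resolution relative to a movable origin, so shifting the cube only moves the origin instead of copying memory::

  // Illustrative cyclic-buffer addressing: the volume origin moves when the
  // cube shifts; voxel lookups wrap around the fixed-size grid.
  constexpr int kResolution = 512;  // voxels per axis, as in the videos below

  inline int wrapAxis(int global, int origin)
  {
    int v = (global - origin) % kResolution;
    return v < 0 ? v + kResolution : v;
  }

  // Linear index of a global voxel coordinate inside the wrapped grid.
  inline int voxelIndex(int gx, int gy, int gz, int ox, int oy, int oz)
  {
    return (wrapAxis(gz, oz) * kResolution + wrapAxis(gy, oy)) * kResolution
           + wrapAxis(gx, ox);
  }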

Since we shift in slices, some information about the scene is kept in memory. This information is used to keep track of the camera pose even after the cube has been shifted.

At this point, the data being ‘shifted out’ is lost, because we clear the TSDF volume slice to make space for the new information.

The next step is to extract the information from the TSDF volume before clearing it. This will allow us to compress it and save it to disk, or to a world model being saved to GPU/CPU memory.

We have some ideas on how to perform this compression and indexing, and we will explore them in the coming days.

At this point we are cleaning the code and adding useful comments. We want to push this to the trunk soon.

This video shows the shifting for a cube with volume size of 3 meters. The grid resolution is 512 voxels per axis.

This video shows the shifting for a cube with volume size of 1 meter. The grid resolution is 512 voxels per axis.

Francisco & Raphael

Reimplementing SSD
Monday, May 07, 2012
../_images/srcs.png

I am currently reimplementing SSD using PCL. It looks like I need additional data structures defined over the octree, such as primal and dual graphs. I am investigating whether the current octree version supports these graphs; otherwise, I will implement them. As for timing, I am planning to deliver the software (and a report for the sponsors) by June 18.

Update
Friday, April 27, 2012
../_images/srcs1.png

It’s been some time since our last post, and we have been learning a whole lot about CUDA and Kinfu itself.

Two big items have been completed so far:

1. We analyzed the application from a holistic perspective, trying to determine its strong points and its areas for improvement. What we found when analyzing KinFu looks somewhat like this figure:

../_images/03.png

The first impression is that KinFu has clearly defined modules, which are reflected in the code distribution among the different files. However, the modules are tightly coupled through the functionality and information they share. It can be seen in several parts of the code that parameters keep cascading down the calls all the way from kinfu_app.cpp (the highest level) to the GPU kernel calls (e.g. raycasting or integration operations). The main reason for this constant copying of parameters and the long parameter lists is precisely that the modules are compiled separately from each other.

This might sound basic to experienced CUDA users, but we find it worth noting here, as it was one of our main “aha” moments.

For example, if one were to declare a variable in internal.h (a header file included in all modules via device.hpp), what you would obtain is a copy of this variable in every CUDA module, instead of a single instance accessible from all modules. Once again, this is the result of compiling the modules independently.
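
A plain C++ analogue of this effect (illustrative only, not the actual KinFu sources): a variable with internal linkage defined in a header gives each including translation unit its own private copy::

  // internal_demo.h -- illustrative stand-in for a header like internal.h
  static int g_param = 0;  // internal linkage: every including file gets its own copy

  // module_a.cpp
  #include "internal_demo.h"
  void setParam() { g_param = 42; }  // modifies module_a's private copy

  // module_b.cpp
  #include "internal_demo.h"
  #include <cstdio>
  void printParam() { std::printf("%d\n", g_param); }  // still prints 0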

After discussing with Anatoly, it was decided that in the mid-term all the internal functionality of KinFu will be consolidated into a single TsdfVolume class containing all operations for integration, ICP, raycasting, and memory handling. This will result in more readable code and avoid long parameter lists between operations, while keeping the real-time performance of the current implementation. In other words, at a high level the code will be clearer and more concise, while at a low level it will perform the same.

2. We have been working on a solution to tackle the scalability limitations of KinFu. The expected behavior can be described as follows: while the scene is being scanned, the user may approach the borders of the current cube. At that moment, part of the existing information about the cube must be compressed and stored to either GPU memory, host RAM, or HDD. For now, the latter two are out of scope.

The cube that we are reconstructing is shifted, but it partially overlaps the previous cube. In other words, we will see a scene that is partially empty but still contains part of the information from the previous cube. The origin of the TsdfVolume is shifted as well, depending on where we approached the border of the initial scene. We believe that keeping this overlapping information will help estimate the pose of the camera once the new cube is loaded. The shift is handled via a circular 3D buffer implementation in the TsdfVolume class.

For clarity, we show this process in the figure below.

../_images/04.png

Francisco & Raphael

Laplacian matrix
Wednesday, April 25, 2012
../_images/srcs2.png

Last week, I wrote a program for computing the Laplacian matrix with uniform and cotangent weights (described in the article “Laplacian Mesh Optimization”). Now I want to write a program for smoothing based on the Laplacian matrix.
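
For reference, here is a minimal Eigen-based sketch of assembling the two weight schemes for a triangle mesh (illustrative; the names are mine, not from the actual program)::

  #include <Eigen/Dense>
  #include <Eigen/Sparse>
  #include <vector>

  using SpMat = Eigen::SparseMatrix<double>;

  // Build the Laplacian L = D - W of a triangle mesh with either uniform
  // or cotangent edge weights. V is n x 3 (vertices), F is m x 3 (triangle
  // vertex indices).
  SpMat buildLaplacian(const Eigen::MatrixXd& V, const Eigen::MatrixXi& F,
                       bool cotangent)
  {
    std::vector<Eigen::Triplet<double>> trips;
    for (int t = 0; t < F.rows(); ++t) {
      for (int c = 0; c < 3; ++c) {
        // Edge (i, j) lies opposite corner k of triangle t.
        int i = F(t, (c + 1) % 3), j = F(t, (c + 2) % 3), k = F(t, c);
        double w = 0.5;  // uniform: each interior edge is visited twice -> 1
        if (cotangent) {
          Eigen::Vector3d u = (V.row(i) - V.row(k)).transpose();
          Eigen::Vector3d v = (V.row(j) - V.row(k)).transpose();
          w = 0.5 * u.dot(v) / u.cross(v).norm();  // cot(angle at k) / 2
        }
        trips.emplace_back(i, j, -w);
        trips.emplace_back(j, i, -w);
        trips.emplace_back(i, i, w);
        trips.emplace_back(j, j, w);
      }
    }
    SpMat L(V.rows(), V.rows());
    L.setFromTriplets(trips.begin(), trips.end());  // duplicates are summed
    return L;
  }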

Porting SSD
Sunday, April 22, 2012
../_images/srcs.png

On Monday (4/16), I had a short discussion with my mentor about how to improve the surface reconstruction code in PCL. We started porting our SSD software, based on the work SSD: Smooth Signed Distance Surface Reconstruction. The code hasn’t been pushed to the trunk yet. Once it is ready, it will provide a base for us to make further improvements. I will write another blog post soon to share our improvement plans.

Description of the algorithm
Monday, April 16, 2012
../_images/srcs2.png

The steps of the chosen algorithm:

../_images/algorithm.jpg

User-program interaction:

  1. The user must choose the most appropriate template for the input point cloud.
  2. The user must identify and mark pairs of corresponding points on the template and the point data, and define a local frame for every marked point (Fig. 1 (a, b)).

The program:

  1. From the selected correspondences, we compute the initial deformation of the template. We compute the Laplacian coordinates of the template and estimate local rotations for every pair of corresponding points.
  2. Make the initial deformation (Fig. 1 (c)).
  3. We estimate a global scaling factor for the template, which is applied to the Laplacian coordinates; this accounts for the fact that the template and the input data may be (and generally are) scaled differently, which could otherwise distort the resulting shape in an unacceptable way. Make a new deformation (Fig. 1 (d)).
  4. An iterative process moves the template closer to the data points, guided by local correspondences established from simple heuristics. This local matching is motivated by iterative closest point (ICP) algorithms for finding (rigid) transformations for shape registration (Fig. 1 (e-g)); a sketch of such a correspondence search follows this list.
  5. We improve the remaining regions of the deformed template for which no counterparts exist in the data (Fig. 1 (h)).
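
A minimal sketch of the ICP-style closest-point correspondence search from step 4, using PCL's kd-tree (the function and parameter names are illustrative assumptions, not taken from the paper)::

  #include <pcl/point_cloud.h>
  #include <pcl/point_types.h>
  #include <pcl/kdtree/kdtree_flann.h>
  #include <vector>
  #include <utility>

  // For every template vertex, find the closest input data point; reject
  // pairs farther apart than max_dist (a simple heuristic gate).
  std::vector<std::pair<int, int>>
  findCorrespondences(const pcl::PointCloud<pcl::PointXYZ>::Ptr& templ,
                      const pcl::PointCloud<pcl::PointXYZ>::Ptr& data,
                      float max_dist)
  {
    pcl::KdTreeFLANN<pcl::PointXYZ> tree;
    tree.setInputCloud(data);

    std::vector<std::pair<int, int>> pairs;
    std::vector<int> idx(1);
    std::vector<float> sq_dist(1);
    for (std::size_t i = 0; i < templ->size(); ++i) {
      if (tree.nearestKSearch(templ->points[i], 1, idx, sq_dist) > 0 &&
          sq_dist[0] < max_dist * max_dist)
        pairs.emplace_back(static_cast<int>(i), idx[0]);
    }
    return pairs;
  }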

In the first phase of implementing the described algorithm, I will write code to compute the initial approximation and estimate the Laplacian coordinates.

The chosen algorithm
Saturday, April 07, 2012
../_images/srcs2.png

On Wednesday this week we chose the algorithm to implement: Template Deformation for Point Cloud Fitting. I think it will be the first iteration of a more general algorithm: template matching using 2D + 3D.

Now I am studying the algorithm and thinking about its implementation. I will have a consultation with my mentor next week. After that, I will describe the basic implementation steps here and begin the implementation.

Texturing KinFu outputs
Thursday, April 05, 2012
../_images/srcs1.png

In parallel with the SRCS sprint, we have been working on texturing the meshes generated with KinFu.

The code has been pushed to trunk under surface/texture_mapping.

So far, textures are not blended and a face is attached to the first texture that sees it entirely.
In the future, we wish to come up with simple heuristics that will select textures more efficiently (like the closest and most-facing one) and/or blend them together.
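
As an illustration of such a heuristic (an assumption on our part, not implemented code), one could score each camera per face by combining distance and viewing angle::

  #include <Eigen/Dense>

  // Illustrative scoring for picking a texture camera for a mesh face:
  // prefer cameras that are close to the face and look at it head-on.
  float scoreCamera(const Eigen::Vector3f& face_center,
                    const Eigen::Vector3f& face_normal,
                    const Eigen::Vector3f& cam_position)
  {
    Eigen::Vector3f to_cam = cam_position - face_center;
    float dist = to_cam.norm();
    float facing = face_normal.dot(to_cam / dist);  // 1 when head-on, <= 0 from behind
    return facing <= 0.f ? 0.f : facing / (1.f + dist);
  }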

A first result can be seen here:

In this video, 5 meshes have been textured and manually aligned to form the full room. Hopefully, the possibility to stitch volumes in KinFu (see our previous entry) will allow us to scan the room as one big mesh and skip the alignment process.

Raphael & Francisco

Volume stitching 101
Wednesday, April 04, 2012
../_images/srcs1.png

At this point, it is possible to detect when the sensor is reaching the border of the cube along the x-axis. The implementation for Y and Z still remains, but we have to think of a smart/elegant way to determine when these boundaries have been crossed.

Since there is a VOLUME_SIZE in internal.h, a VOLUME_SHIFT_THRESHOLD has been included as well. The latter represents the distance to the border of the cube that triggers the volume shifting.
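
A minimal sketch of this trigger (illustrative only; it assumes the camera position is expressed in the cube's local frame with the cube centered at the origin, and the constant values are placeholders)::

  #include <Eigen/Dense>

  // Hypothetical constants mirroring internal.h; values are placeholders.
  constexpr float VOLUME_SIZE = 3.0f;             // cube edge length in meters
  constexpr float VOLUME_SHIFT_THRESHOLD = 0.3f;  // border distance that triggers a shift

  // True when the camera is within the threshold of any face of the cube.
  bool shouldShiftVolume(const Eigen::Vector3f& cam_in_cube)
  {
    float half = VOLUME_SIZE / 2.0f;
    float dist_to_border = half - cam_in_cube.cwiseAbs().maxCoeff();
    return dist_to_border < VOLUME_SHIFT_THRESHOLD;
  }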

Volume shifting is toggled by pressing the ‘s’ key while running KinFu. It would be nice to have it as a command-line parameter.

Thinking about the next steps, the question arises whether the camera pose is the only reference to the global coordinates, because then shifting would make us lose any reference to the world whatsoever.

We got some interesting results by using the cube reset at the time of reaching the threshold. The link is at the bottom of this post.

For now, we save the last pose before performing the shift. This functionality could be similar to a video we saw on YouTube, which also refers to volume stitching.

By stitching the volumes using the saved transform, post-processing of the whole scene could become possible, although this is not yet ideal with respect to memory usage.

Francisco & Raphael

General proposed outline and some results on loading TSDF
Monday, April 02, 2012
../_images/srcs1.png

Last Friday we discussed further with Anatoly how to extend KinFu to large areas. After two weeks of exploring the code and possible solutions, we have set the broad strategy going forward. In general terms, the goal is to implement as much functionality as possible on the GPU. This means minimizing the information exchange between GPU and CPU, since the PCIe bus is a well-known bottleneck. Three main steps have been identified:

  1. Implement a demo that allows traveling within an office without tracking failures. When the Kinect goes out of the volume, the volume changes its physical position to fit the camera frustum again, so that camera tracking can continue without resetting. We agree that filtered-out data is dropped at this stage [1].
  2. Implement a repacking procedure for the volume. When the volume is shifted, the information in the cube is extracted and compressed within the GPU for later use. The new area must be initialized with some values (TBD).
  3. Develop smart heuristics to decide when the volume is shifted, and handle the consequences of such a shift.

Therefore, the tasks starting from today are [1]:

  1. Familiarize ourselves with all KinFu sources.
  2. Implement integration/raycasting that takes the physical volume position into consideration.
  3. Implement volume content repacking to be consistent with the 3D world after a shift.

Last week we were also experimenting with loading the TSDF from the filesystem. We created a small application to load data from a file to the GPU, perform raycasting and generate a screenshot.

The figure below shows the stored TSDF point cloud. This is already in KinFu trunk.

../_images/01.png

We used three camera poses (also stored in files). The results are shown below:

../_images/02.png

For now, this code will not be included in PCL because it needs some clean-up and alignment with the coding standards. Furthermore, it is part of the second step, so it will be explored later.

Francisco & Raphael

Footnotes

[1] Anatoly Baksheev, Minutes of meeting, March 30, 2012.

First blog entry
Thursday, March 22, 2012
../_images/srcs1.png

This is Raphael’s and Francisco’s first blog entry. We have had a discussion with Anatoly to introduce ourselves, as well as to brainstorm on potential solutions to get us closer to the goal. On Friday the 30th we will have a more concrete discussion about the most promising solution(s). It will also help to determine more concrete tasks, since for now we are in an exploratory stage.

Francisco & Raphael

My first step
Wednesday, March 21, 2012
../_images/srcs2.png

Hello everybody! This is my first blog entry as a participant in PCL-SRCS. I have filled in the information about myself. Now we are refining the algorithm that will be implemented during PCL-SRCS.

Filled in my information at PCL-SRCS blog
Wednesday, March 21, 2012
../_images/srcs.png

This is my first blog entry to test the blogging system.