Trimble is sponsoring development of several projects in different research areas involving 3D perception, as part of the second PCL code sprint.

For a complete list of all the present and past PCL code sprints please visit http://www.pointclouds.org/blog.

Click on any of the links below to find out more about our team of PCL developers who are participating in the sprint:

Filters points lying within the frustum of the camera. The frustum is defined by the pose and field of view of the camera; the parameters to this method are the horizontal FOV, vertical FOV, near plane distance and far plane distance. I have added this method to the filters module. The frustum and the filtered points are shown in the images below.
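The core containment test is simple enough to sketch in standalone C++. The sketch below is illustrative only, not the actual PCL filter code: the camera is assumed to sit at the origin looking down +z, and the FOV angles are the full horizontal/vertical opening angles in radians.

```cpp
#include <cmath>

struct Point { float x, y, z; };

// Returns true if p lies inside a symmetric viewing frustum.
// Assumption (for this sketch only): camera at the origin looking
// down +z; hfov/vfov are full field-of-view angles in radians;
// nearD/farD bound the depth range.
bool insideFrustum (const Point &p, float hfov, float vfov,
                    float nearD, float farD)
{
  if (p.z < nearD || p.z > farD)                    // near/far planes
    return false;
  if (std::fabs (p.x) > p.z * std::tan (hfov / 2))  // left/right planes
    return false;
  if (std::fabs (p.y) > p.z * std::tan (vfov / 2))  // top/bottom planes
    return false;
  return true;
}
```

Filtering a cloud is then a single pass that keeps only the points for which the test returns true.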

As a final blog post for this Trimble Code Sprint, I am attaching the final report I have written for the sponsors.

This filter removes the ghost points that appear on the edges. It works by thresholding the dot product of the normal at a point with the point itself; points that meet the threshold criterion are retained. This completes the port of libpointmatcher to PCL. Next, I will be writing examples for all the techniques I added.
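The intuition behind the criterion: ghost ("veil") points interpolated across depth discontinuities have normals nearly perpendicular to the viewing ray, and with the sensor at the origin the viewing ray of a point is the point itself. A standalone sketch of the test (not the actual filter code; the threshold value is an assumption):

```cpp
#include <cmath>

struct PointN { float x, y, z, nx, ny, nz; };

// Keep a point only if the (normalized) dot product of its normal
// with its own position vector is above `threshold`.  Ghost points
// score near zero here and are discarded.
bool keepPoint (const PointN &p, float threshold)
{
  float dot  = p.nx * p.x + p.ny * p.y + p.nz * p.z;
  float nlen = std::sqrt (p.nx * p.nx + p.ny * p.ny + p.nz * p.nz);
  float plen = std::sqrt (p.x * p.x + p.y * p.y + p.z * p.z);
  if (nlen == 0 || plen == 0)
    return false;
  return std::fabs (dot) / (nlen * plen) >= threshold;
}
```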

This filter recursively divides the data into grid cells until each cell contains at most N points. A normal is computed for each cell, points within the cell are sampled randomly, and the computed normal is assigned to the sampled points. This is a port from libpointmatcher.
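The subdivision-and-sampling structure can be sketched in standalone C++. This is not the ported code: the per-cell normal estimation (a covariance/eigen step in the real filter) is deliberately omitted, and the median split along the longest axis is one simple way to realize the recursive division.

```cpp
#include <algorithm>
#include <random>
#include <vector>

struct Pt { float x, y, z; };

static std::mt19937 rng (42);

// Recursively split a cell at the median of its longest axis until it
// holds at most maxPts points, then keep `samples` randomly chosen
// points per leaf.  (The real filter also estimates a normal per leaf
// and assigns it to the sampled points.)
void subdivide (std::vector<Pt> pts, std::size_t maxPts,
                std::size_t samples, std::vector<Pt> &out)
{
  if (pts.size () <= maxPts)
  {
    std::shuffle (pts.begin (), pts.end (), rng);
    for (std::size_t i = 0; i < samples && i < pts.size (); ++i)
      out.push_back (pts[i]);
    return;
  }
  // Bounding box of the cell, to pick the longest axis.
  float lo[3] = {pts[0].x, pts[0].y, pts[0].z};
  float hi[3] = {pts[0].x, pts[0].y, pts[0].z};
  for (const Pt &p : pts)
  {
    float c[3] = {p.x, p.y, p.z};
    for (int a = 0; a < 3; ++a)
    {
      lo[a] = std::min (lo[a], c[a]);
      hi[a] = std::max (hi[a], c[a]);
    }
  }
  int axis = 0;
  for (int a = 1; a < 3; ++a)
    if (hi[a] - lo[a] > hi[axis] - lo[axis])
      axis = a;
  // Median split along that axis.
  auto key = [axis] (const Pt &p)
  { return axis == 0 ? p.x : axis == 1 ? p.y : p.z; };
  std::size_t mid = pts.size () / 2;
  std::nth_element (pts.begin (), pts.begin () + mid, pts.end (),
                    [&] (const Pt &a, const Pt &b) { return key (a) < key (b); });
  std::vector<Pt> left (pts.begin (), pts.begin () + mid);
  std::vector<Pt> right (pts.begin () + mid, pts.end ());
  subdivide (left, maxPts, samples, out);
  subdivide (right, maxPts, samples, out);
}
```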

I’ve added NURBS curve fitting to the surface fitting example. The curve is fitted to the point cloud in the parametric domain of the NURBS surface (left images). During triangulation only vertices inside the curve are considered, and borderline vertices are clamped to the curve (right images).

I’m working on a smart clustering algorithm, based on region growing, that will improve the segmentation accuracy of a cloud. In the meantime I’m finishing the SVM classifier, and I’ll be ready to put it into PCL very soon.

During this period I also had to work on some projects at home, so I haven’t been able to spend much time on the final report for the automated noise filtering plugin. I am giving myself a one-week deadline to finish it.

I finished the preliminary report for ANF, which I will make available once Mattia has also finished, so that we can start evaluating the work of this sprint with our mentors.

In the meantime I have been struggling to get my PC ready for PCL development again. For the next few weeks I will be finishing some other projects for school and will also be working on my PCL to do list:

- Implement the MixedPixel class in PCL.
- Work on the filters module clean up.
- Finalize the LUM class implementation.

The NURBS fitting functions are documented in the header files. I’ve also added a test and example file in examples/surface where you can test the algorithms and try fitting some PCD files (e.g. test/bunny.pcd). The result should look like the image below.

Coming up next:

- Trimming of the bunny using the B-Spline curve fitting algorithm.

I’ve integrated all the NURBS fitting code (curves and surfaces) using PDM (point-distance minimization), TDM (tangent-distance minimization) and SDM (squared-distance minimization). To support this, I’ve also added openNURBS 5 to the repository. A good comparison of PDM, TDM and SDM is given in “Fitting B-spline curves to point clouds by squared distance minimization” by W. Wang, H. Pottmann, and Y. Liu (http://www.geometrie.tuwien.ac.at/ig/sn/2006/wpl_curves_06/wpl_curves_06.html)
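As I read the paper, the difference between the three methods lies in the per-point error term being minimized. In my notation (see the paper for the full derivation): for a data point $X_k$ with foot point $P_k$ on the current fit, unit tangent $T_k$ and unit normal $N_k$ at $P_k$, signed distance $d_k$, and curvature radius $\rho_k$:

```latex
\begin{align}
  e_{\mathrm{PDM},k} &= \lVert P_k - X_k \rVert^2 \\
  e_{\mathrm{TDM},k} &= \left[ (P_k - X_k) \cdot N_k \right]^2 \\
  e_{\mathrm{SDM},k} &=
  \begin{cases}
    \dfrac{d_k}{d_k - \rho_k} \left[ (P_k - X_k) \cdot T_k \right]^2
      + \left[ (P_k - X_k) \cdot N_k \right]^2 & d_k < 0 \\[1ex]
    \left[ (P_k - X_k) \cdot N_k \right]^2 & 0 \le d_k < \rho_k
  \end{cases}
\end{align}
```

Roughly: PDM penalizes the full distance to the foot point, TDM only its component along the normal, and SDM adds a curvature-dependent tangential term that makes it a second-order approximation of the true squared distance.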

Coming up next:

- Consistent documentation and code cleaning.
- Examples for better understanding of the usage.
- Conversion of NURBS to polygon meshes.

I am working on a new filtering technique that divides the point cloud into boxes that have similar densities and approximates each box by its centroid. This will be completed soon. In parallel, I am also working on the registration tutorial that I mentioned in the previous blog post.
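The centroid-approximation half of the idea can be sketched in standalone C++ with a fixed-size box; the density-adaptive box splitting is the part still in progress, so the fixed `leaf` size below is an assumption of this sketch, not the technique being developed.

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Pt3 { float x, y, z; };

// Bin the cloud into cubic boxes of edge length `leaf` and replace the
// contents of each box by its centroid.
std::vector<Pt3> centroidFilter (const std::vector<Pt3> &cloud, float leaf)
{
  // Per-box accumulator: running coordinate sums plus a point count.
  std::map<std::tuple<int, int, int>, std::pair<Pt3, int>> boxes;
  for (const Pt3 &p : cloud)
  {
    auto key = std::make_tuple ((int) std::floor (p.x / leaf),
                                (int) std::floor (p.y / leaf),
                                (int) std::floor (p.z / leaf));
    auto &acc = boxes[key];     // value-initialized to {{0,0,0}, 0}
    acc.first.x += p.x; acc.first.y += p.y; acc.first.z += p.z;
    ++acc.second;
  }
  std::vector<Pt3> out;
  for (const auto &b : boxes)
    out.push_back ({b.second.first.x / b.second.second,
                    b.second.first.y / b.second.second,
                    b.second.first.z / b.second.second});
  return out;
}
```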

Hi everybody. I have committed the code for min-cut segmentation. At the moment only the basic functionality can be used. I wanted to generalize it to support adding points that are known to belong to the background; right now this option is turned off. It works fine, but I want to run more tests on this functionality first.

So my next step will be to test this option. After that I will start writing tutorials for the code I’ve added (RegionGrowing, RegionGrowingRGB, MinCutSegmentation). Exams are nearing, so I will have less time to work on the TRCS, but this is temporary. I hope to finish the additional functionality I mentioned before the exams begin.

In the process of reporting for the sprint, Mattia and I have been working more in-depth on getting test results from the system and have been training a classifier. There have also been a few improvements to the system, such as a new feature that should help distinguish leaves.

I have also been reading up on function pointers, functors, boost::bind, boost::function and lambda functions. They will be useful for the filters module clean up.
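A minimal illustration of why these are interchangeable: a predicate can be supplied as a free function, a functor, or a lambda, all behind the same type-erased interface. The sketch uses std::function to stay standalone, but boost::function works the same way; the names here are made up for the example.

```cpp
#include <functional>
#include <vector>

// Three interchangeable ways to express the same predicate.
bool belowTwo (int v) { return v < 2; }            // free function

struct BelowTwo                                    // functor
{
  bool operator() (int v) const { return v < 2; }
};

// Count how many elements satisfy the predicate, whatever its form.
int countIf (const std::vector<int> &v, std::function<bool (int)> pred)
{
  int n = 0;
  for (int x : v)
    if (pred (x))
      ++n;
  return n;
}
```

All three forms — `countIf (v, belowTwo)`, `countIf (v, BelowTwo ())`, and `countIf (v, [] (int x) { return x < 2; })` — give the same result, which is exactly the flexibility a cleaned-up filters API can exploit.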

These last weeks I’ve also been busy with some work in my town. I just finished my part of the “final noise removal system” and I aim to finish the report in the next few days. Our work is finally returning promising results, but we still need feedback from our mentors to improve the solution.

During the last two weeks I have been working on the report for this sprint, which is growing bigger and taking more time than I anticipated. I am also aiming to finish the filters module clean up, which can be followed here: http://dev.pointclouds.org/issues/614.

Hi everybody. I have run some tests and the results are terrific. The algorithm works well even for very noisy point clouds. Here are some results:

There were some problems with choosing a min cut algorithm. The best known is the one proposed by Yuri Boykov, Olga Veksler and Ramin Zabih (based on their article “Fast Approximate Energy Minimization via Graph Cuts”), but it has some license constraints, so I used it only for testing at the beginning of the work. My code uses the boykov_kolmogorov_max_flow algorithm from the Boost Graph Library. At first I tried push_relabel_max_flow from BGL, but it behaves a little strangely: for the same unary and binary potentials (data and smoothness costs), push_relabel_max_flow gives worse results than boykov_kolmogorov_max_flow does. So I decided to prefer the latter.
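The graph-cut formulation itself is easy to demonstrate without BGL: tie a source vertex to foreground seeds, a sink to the background, and compute a max-flow / min-cut; the source side of the cut is the foreground segment. The sketch below uses a plain Edmonds-Karp max-flow on an adjacency matrix — simpler but much slower than the Boykov-Kolmogorov algorithm the actual code relies on — purely to illustrate the idea.

```cpp
#include <algorithm>
#include <climits>
#include <queue>
#include <vector>

// Edmonds-Karp max-flow on a capacity matrix.  After the flow
// saturates, vertices still reachable from the source in the residual
// graph form the "foreground" side of the min cut.
int maxFlow (std::vector<std::vector<int>> cap, int s, int t,
             std::vector<bool> &sourceSide)
{
  const int n = (int) cap.size ();
  int flow = 0;
  while (true)
  {
    // BFS for an augmenting path in the residual graph.
    std::vector<int> parent (n, -1);
    parent[s] = s;
    std::queue<int> q;
    q.push (s);
    while (!q.empty () && parent[t] == -1)
    {
      int u = q.front (); q.pop ();
      for (int v = 0; v < n; ++v)
        if (parent[v] == -1 && cap[u][v] > 0)
        {
          parent[v] = u;
          q.push (v);
        }
    }
    if (parent[t] == -1)
      break;                        // no augmenting path: flow is maximal
    // Find the bottleneck and push flow along the path.
    int bottleneck = INT_MAX;
    for (int v = t; v != s; v = parent[v])
      bottleneck = std::min (bottleneck, cap[parent[v]][v]);
    for (int v = t; v != s; v = parent[v])
    {
      cap[parent[v]][v] -= bottleneck;
      cap[v][parent[v]] += bottleneck;   // residual (reverse) capacity
    }
    flow += bottleneck;
  }
  // Min cut: vertices reachable from s in the final residual graph.
  sourceSide.assign (n, false);
  sourceSide[s] = true;
  std::queue<int> q;
  q.push (s);
  while (!q.empty ())
  {
    int u = q.front (); q.pop ();
    for (int v = 0; v < n; ++v)
      if (!sourceSide[v] && cap[u][v] > 0)
      {
        sourceSide[v] = true;
        q.push (v);
      }
  }
  return flow;
}
```

In the segmentation setting, the edge capacities encode the unary potentials (source/sink links) and binary potentials (links between neighboring points).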

Right now I’m going to make some last changes to the code so that it conforms to the adopted coding rules.

I forgot to mention that there are some problems with the online 3D point cloud viewer. I have already written to the authors about it. The problem appears only in Mozilla Firefox; it works fine in Google Chrome. So I hope everybody is able to see my point clouds.