Filters points lying within the frustum of the camera. The frustum is defined by the pose and field of view of the camera. The parameters to this method are the horizontal FOV, vertical FOV, near plane distance and far plane distance. I have added this method to the filters module. The frustum and the filtered points are shown in the images below.
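The containment test behind such a filter can be sketched with a few lines of plain geometry. This is a minimal illustration, not the PCL implementation: it assumes the camera sits at the origin looking down +z (in practice the cloud would first be transformed into the camera frame using the camera pose), and the function name and parameters are hypothetical.

```cpp
#include <cmath>

// hfov/vfov are full field-of-view angles in radians;
// near_d/far_d are the near and far plane distances.
bool insideFrustum(double x, double y, double z,
                   double hfov, double vfov,
                   double near_d, double far_d)
{
  if (z < near_d || z > far_d)
    return false;                          // outside near/far planes
  if (std::fabs(x) > z * std::tan(hfov / 2.0))
    return false;                          // outside left/right planes
  if (std::fabs(y) > z * std::tan(vfov / 2.0))
    return false;                          // outside top/bottom planes
  return true;
}
```

Running this predicate over the cloud and keeping the points for which it returns true gives the frustum filter described above.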

As a final blog post for this Trimble Code Sprint, I am attaching the final report I have written for the sponsors.

This filter removes the ghost points that appear on edges. This is done by thresholding the dot product of the normal at a point with the point itself; points that satisfy the thresholding criterion are retained. This completes the port of libpointmatcher to PCL. Next, I will be writing examples for all the techniques I added.
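The criterion can be sketched as follows. This is a simplified illustration with hypothetical names, assuming the sensor is at the origin: a veridical surface point has a normal roughly facing the sensor, so the normalized dot product of the normal with the point's viewing ray stays well above zero, while grazing "ghost" points on depth edges give a near-zero value and are discarded.

```cpp
#include <cmath>
#include <vector>

struct PointN { double x, y, z, nx, ny, nz; };

std::vector<PointN> removeShadowPoints(const std::vector<PointN>& in,
                                       double threshold)
{
  std::vector<PointN> out;
  for (const PointN& p : in)
  {
    const double norm = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    if (norm == 0.0)
      continue;
    // |dot(normal, point)| / |point| is ~1 for a surface facing the
    // sensor and ~0 for a grazing ghost point on a depth edge.
    const double d = std::fabs(p.x * p.nx + p.y * p.ny + p.z * p.nz) / norm;
    if (d >= threshold)
      out.push_back(p);                    // keep points that pass the test
  }
  return out;
}
```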

This filter recursively divides the data into grids until each grid contains at most N points. Normals are computed on each grid, points within each grid are sampled randomly, and the computed normal is assigned to those points. This is a port from libpointmatcher.

I’ve added NURBS curve fitting to the example of surface fitting. The curve is fitted to the point-cloud in the parametric domain of the NURBS surface (left images). During triangulation only vertices inside the curve are treated, borderline vertices are clamped to the curve (right images).

I’m working on a smart clustering algorithm which will improve the segmentation accuracy of a cloud based on the region growing algorithm. In the meantime I’m finishing the SVM classifier and I’ll be ready to put it into PCL very soon.

During this period I also had to work on some projects back home, so I haven’t been able to spend much time on the final report of the automated noise filtering plugin. I am giving myself one week as a deadline to finish it.

I finished the preliminary report for ANF, which I will make available once Mattia is also finished, so that we can start to evaluate the work of this sprint with our mentors.

In the meanwhile I have been struggling with getting my PC ready for PCL developer use again. For the next few weeks I will be finishing some other projects for school and will also be working on my PCL to do list:

- Implement the MixedPixel class in PCL.
- Work on the filters module clean up.
- Finalize the LUM class implementation.

The NURBS fitting functions are documented within the header files. I’ve also added a test and example file in examples/surface where you can test the algorithms and try fitting some pcd files (e.g. test/bunny.pcd). The result should look like the image below.

Coming up next:

- Trimming of the bunny using the B-Spline curve fitting algorithm.

I’ve integrated all the NURBS fitting functionality (curves and surfaces) using PDM (point-distance-minimization), TDM (tangent-distance-minimization) and SDM (squared-distance-minimization). To support this, I’ve also put openNURBS 5 into the repository. A good comparison of the fitting techniques PDM, TDM and SDM is given in “Fitting B-spline curves to point clouds by squared distance minimization” by W. Wang, H. Pottmann, and Y. Liu (http://www.geometrie.tuwien.ac.at/ig/sn/2006/wpl_curves_06/wpl_curves_06.html)

Coming up next:

- Consistent documentation and code cleaning.
- Examples for better understanding of the usage.
- Conversion of NURBS to polygon meshes.

I am working on a new filtering technique that divides the point cloud into boxes of similar density and approximates each box by its centroid. This will be completed soon. In parallel I am also working on the registration tutorial that I mentioned in the previous blog post.
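The centroid-approximation idea can be sketched as below. Note this is a simplified version with fixed-size boxes (the filter described above adapts the box size to the local density), and all names are hypothetical:

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

struct Pt { double x, y, z; };

// Bin the points into axis-aligned boxes of edge length `box` and
// replace each box by the centroid of the points it contains.
std::vector<Pt> approximateByCentroids(const std::vector<Pt>& cloud, double box)
{
  std::map<std::tuple<long, long, long>, std::pair<Pt, int>> bins;
  for (const Pt& p : cloud)
  {
    auto key = std::make_tuple(static_cast<long>(std::floor(p.x / box)),
                               static_cast<long>(std::floor(p.y / box)),
                               static_cast<long>(std::floor(p.z / box)));
    auto& acc = bins[key];                 // running sum and count per box
    acc.first.x += p.x; acc.first.y += p.y; acc.first.z += p.z;
    ++acc.second;
  }
  std::vector<Pt> out;
  for (const auto& kv : bins)
    out.push_back({kv.second.first.x / kv.second.second,
                   kv.second.first.y / kv.second.second,
                   kv.second.first.z / kv.second.second});
  return out;
}
```

This is essentially how a voxel-grid downsampler works; the technique in progress differs in how the box boundaries are chosen.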

Hi everybody. I have committed the code for min cut segmentation. At the moment only basic functionality can be used. I wanted to make a generalization for adding points that are known to be points of the background. Right now this option is turned off. It works fine but I just wanted to run more tests for this functionality.

So my next step will be to test this option. After that I will start writing tutorials for the code that I’ve added (RegionGrowing, RegionGrowingRGB, MinCutSegmentation). Exams are nearing, so I will have less time to spend on the TRCS, but this is temporary. I hope to finish the additional functionality I mentioned before the exams begin.

In the process of reporting for the sprint, Mattia and I have been working more in-depth on getting test results from the system and have been training a classifier. There have also been a few improvements to the system, such as a new feature that should help distinguish leaves.

I have furthermore been reading up on function pointers, functors, boost::bind, boost::function and lambda functions. They will be useful for the filters module clean up.

The last weeks I’ve also been busy with some work in my town. I just finished my part of the “final noise removal system” and I aim to finish the report in the next few days. Our work is finally returning promising results, but we still need feedback from our mentors to improve the solution.

During the last two weeks I have been working on the report for this sprint, which is becoming bigger and taking more time than I anticipated. I am also aiming to finish the filters module clean up which can be followed here: http://dev.pointclouds.org/issues/614.

Hi everybody. I have run some tests and the results are terrific. The algorithm works well even for very noisy point clouds. Here are some results:

There were some problems with choosing the min cut algorithm. The best known implementation is the one by Yuri Boykov, Olga Veksler and Ramin Zabih (based on their article “Fast Approximate Energy Minimization via Graph Cuts”), but it has some license constraints, so I used it only for testing at the beginning of the work. My code uses the boykov_kolmogorov_max_flow algorithm from the Boost Graph Library. At first I tried to use push_relabel_max_flow from BGL, but it behaves a little strangely: for the same unary and binary potentials (data and smoothness costs), push_relabel_max_flow gives worse results than boykov_kolmogorov_max_flow does. So I decided to give preference to the latter.

Right now I’m going to make some last changes in the code to fit the adopted rules.

I forgot to mention that there are some problems with the online 3D point cloud viewer. I have already written to the authors about it. The problem appears only in Mozilla Firefox; it works fine in Google Chrome. So I hope everybody is able to see my point clouds.

Back from vacation and getting into some of the more exciting stuff that is required to get our out-of-core viewer up and fully functional. There has been quite a bit of talk on implementation and this week I plan on adding three important features:

- Frustum Culling - Providing a way to query the octree for nodes on disk given the camera frustum. This will provide a good start in determining what data needs to be streamed in.
- Threading - All data processing needs to happen in a separate thread as to not block the main UI.
- Caching - Implement some type of Least Recently Used (LRU) / Least Frequently Used (LFU) cache to store streamed data and discard the least relevant.
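A minimal sketch of the LRU policy mentioned in the last bullet might look like this. This is not the planned implementation, just an illustration of the mechanics: keys would identify octree nodes on disk, values their streamed point data, and the least recently touched entry is evicted once capacity is exceeded.

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

template <typename Key, typename Value>
class LRUCache
{
public:
  explicit LRUCache(std::size_t capacity) : capacity_(capacity) {}

  void put(const Key& key, const Value& value)
  {
    auto it = index_.find(key);
    if (it != index_.end())
      items_.erase(it->second);            // refresh an existing entry
    items_.push_front({key, value});       // most recently used at the front
    index_[key] = items_.begin();
    if (items_.size() > capacity_)
    {
      index_.erase(items_.back().first);   // evict the least recently used
      items_.pop_back();
    }
  }

  // Returns nullptr on a miss; a hit moves the entry to the front.
  const Value* get(const Key& key)
  {
    auto it = index_.find(key);
    if (it == index_.end())
      return nullptr;
    items_.splice(items_.begin(), items_, it->second);
    return &it->second->second;
  }

private:
  std::size_t capacity_;
  std::list<std::pair<Key, Value>> items_;
  std::unordered_map<Key,
      typename std::list<std::pair<Key, Value>>::iterator> index_;
};
```

An LFU variant would track hit counts per entry instead of recency; which policy suits the out-of-core viewer better depends on the access pattern while panning the camera.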

I’ve spent most of my time recently learning and understanding VTK and getting the new mapper working with Radu to get this integrated into the pcd_viewer. The current viewer doesn’t handle large datasets and this integration will allow for much heavier data sets in realtime.

The new VTK classes are far from finished and will require quite a bit of work to handle all the data VTK can throw at them. I’ve sent Marcus, a core VTK developer, my work in progress in the hope of getting some help on proper and stable integration with VTK. Here are some of the todo items still to be addressed.

vtkVertexBufferObject - Vertex Buffer Object Wrapper

- Needs to support both VTK’s float and double data types
- Needs to support indices via GetVerts and *vtkIdType
- First implementation supports vertices, indices and colors. Need to add all of VTK’s attributes, i.e. normals, textures, ...

vtkVertexBufferObjectMapper - Vertex Buffer Object Rendering Wrapper

- Need the ability to set vertex, fragment and geometry shaders (currently uses simple shaders)
- The current mapper has vertex, index and color VBOs. Need to support all of VTK’s attributes, i.e. normals, textures, ...
- Determine whether to pull point/cell data and whether colors are present on the data passed in.
- Handle all VTK data types to make sure calls to glVertexAttribPointer are correct and consistent.

Hi everybody. I have finally finished the work on the code for min cut segmentation. It took so long because of some bugs; the simplest mistakes are always the hardest to find. Anyway, the code is ready and I even have some segmentation results. The algorithm requires point clouds with the floor plane and similar large surfaces cut away, so first of all I deleted the floor plane. You can see the result in the point cloud viewer.

Right now I am trying to find the best unary and binary potentials (edge weights), because those mentioned in the article are not good enough. I hope that by the end of this week I will be able to find them and upload the resulting code.

This method extends the existing outlier rejection method ‘CorrespondenceRejectionTrimmed’, which is based on the fraction of inliers. The optimal inlier ratio is computed internally instead of using a user-specified value. The method is described in the paper ‘Outlier Robust ICP for Minimizing Fractional RMSD’ by Jeff M. Phillips et al. I am adding the test code in the test folder. This wraps up the porting of correspondence rejection methods from libpointmatcher to PCL. Next I am planning to add some filtering methods, but before that I will be adding tutorials on using the registration pipeline in PCL.
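The core of the fractional RMSD criterion can be sketched as follows. This is an illustrative reading of the paper, not the PCL code: for each candidate inlier fraction f, the f·n smallest squared correspondence distances are scored as FRMSD(f) = rmsd(f) / f^λ, and the fraction minimizing this score is taken as the optimal inlier ratio (λ = 1.3 is, to my understanding, the value suggested in the paper; the function name is hypothetical).

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Returns the inlier fraction f in (0, 1] minimizing rmsd(f) / f^lambda,
// where rmsd(f) is computed over the f*n smallest squared distances.
double optimalInlierFraction(std::vector<double> sq_dists, double lambda = 1.3)
{
  std::sort(sq_dists.begin(), sq_dists.end());
  const std::size_t n = sq_dists.size();
  double best_f = 1.0;
  double best_score = std::numeric_limits<double>::max();
  double running_sum = 0.0;
  for (std::size_t k = 1; k <= n; ++k)
  {
    running_sum += sq_dists[k - 1];               // sum of k smallest
    const double f = static_cast<double>(k) / n;
    const double rmsd = std::sqrt(running_sum / k);
    const double score = rmsd / std::pow(f, lambda);
    if (score < best_score) { best_score = score; best_f = f; }
  }
  return best_f;
}
```

The f^λ denominator penalizes trivially small fractions (which always have a tiny RMSD), so the minimum lands near the true inlier ratio.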

The last days I’ve been working on two methods I would like to put into PCL. The first concerns the contour extraction of objects in a cloud; the second is the Support Vector Machine. A couple of issues have come up along the way.

From tomorrow I’ll work on the system that Florentinus and I are setting up. I will have to work on a smart way to easily train a classifier.

I have been finishing up the segmentation steps of the system. The following is a typical result when used on the Trimble outdoor sets using the default parameters:

These results are downsampled from the original and the ground segmentation has already taken place. The clusters marked as red are not passed to the SVM classifier. They are either too small and will be marked as isolated and removed without the need for classification, or they are too large in which case they will be classified as background and are never removed.

I upgraded the system with a very basic over-segmentation detection system that performs quite well. Furthermore, most of my time was spent getting the parameterization right: making all parameters depend on as few global parameters as possible while still allowing the clustering needs to be varied greatly and intuitively. Since we are having a chat with the mentors soon, I will discuss these topics in the report that I will write for that chat, which is what I will be working on for the next few days.

As mentioned in my previous blog post, I added a new correspondence rejection method based on the orientation of the normals. A threshold on the dot product of the normals at corresponding points is used as the criterion for correspondence rejection. I am adding sample code demonstrating the usage of the new classes in the test folder. It should be in the trunk after the code review. I will be adding a couple of new rejection methods over the next week to wrap up the port of correspondence rejection methods from libpointmatcher.

I have worked on improving the segmentation steps of the system. Initially I wanted to do a separate sub clustering after the euclidean clustering. However, the version I have now performs the advanced segmentation simultaneously with the euclidean clustering. In the future I might add this as a new class to PCL since it could be useful for any type of conditional region growing.

For our purposes this conditional region growing currently checks distance, intensity difference and curvature difference with respect to the candidate points, which results in better clustering than before. However, the balance between over- and under-segmentation is still as tricky as before. Ideally, the trees are clustered into separate leaves so that the SVM has a lot of different training candidates from one set. It is still quite impossible to get this clustering while not having too much clustering in the non-noise parts of the scene. I did manage to condense all the parameters for the segmentation step into one parameter, indicating clustering aggressiveness, so that a user could tweak this balance himself.

An idea I still want to investigate is to use a pyramid or staged system that will only further sub-segment under certain conditions. I think these conditions will need to be more complex than what I am using now (intensity, curvature and location). Although making them too complex could limit the applicability of the system.

PCA (principal component analysis) is a simple method to extract information from a set of data. Geometrically, its objective is to present the data along the reference axes that best highlight its structure.

The idea was to give a meaning to points depending on their positions, using PCA. For each point I extracted the k-neighbors, computed the PCA (about the centroid) and analyzed the eigenvalues. While the eigenvectors give the directions of the axes along which the data extend, the eigenvalues give their lengths.

From a geometrical point of view, three eigenvalues of equal magnitude represent the uniform distribution of a solid; a distribution limited to two axes represents a plane, while a distribution skewed onto a single eigenvalue represents a line. Carrying this theory over to a cloud of points, points in the proximity of edges have one eigenvalue much larger than the others. For the inner points, in the case of a uniform point density, the lengths will be distributed more evenly along all three eigenvectors.

Normalizing the three eigenvalues, we find that near a contour or a corner of an object the largest eigenvalue accounts for more than 2/3 (about 66%) of the total spread of the points. The results are shown below.
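The eigenvalue test itself is tiny once the local covariance has been decomposed. As a hedged sketch (hypothetical names; the eigen-decomposition over the k-neighborhood is assumed to have happened already):

```cpp
#include <algorithm>
#include <array>

// Given the three eigenvalues of the local covariance matrix, flag the
// neighborhood as an edge/contour candidate when the largest eigenvalue
// carries more than `ratio` (default 2/3) of the total spread.
bool isEdgeCandidate(std::array<double, 3> eigenvalues,
                     double ratio = 2.0 / 3.0)
{
  const double total = eigenvalues[0] + eigenvalues[1] + eigenvalues[2];
  if (total <= 0.0)
    return false;                          // degenerate neighborhood
  const double largest =
      *std::max_element(eigenvalues.begin(), eigenvalues.end());
  return largest / total > ratio;
}
```

A neighborhood with eigenvalues like (0.9, 0.05, 0.05) is stretched along one direction and flagged, while an evenly spread (0.4, 0.3, 0.3) interior neighborhood is not.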

Yesterday, I sent Jorge the first revision of the program that implements the pipeline designed for the removal of vegetation and ghost points. The program, designed by Florentinus and me, segments the cloud into large groups which are then classified.

The biggest problem encountered relates to the number of samples needed for optimal training of the classifier. Dividing the cloud into large groups leaves too few samples for this. So we are thinking of a more refined method for recognizing the problem regions.

Since Florentinus will be busy this week, I will concentrate on implementing a filtering algorithm based on Principal Component Analysis. Through this approach it is possible to filter lines (1D), flat surfaces (2D) and solid objects (3D). More details will come soon.

Hi, I’m still working on a paper for submission next week and another one due in three weeks. After that I’ll clean up the code and commit it to PCL.

I haven’t been posting for a while. This week I didn’t get any new results, apart from working on the SVM learning machine implementation following the PCL standards. Like Florentinus, I’m working on the same program for automated filtering. I’ve been testing new features for the classifier, such as:

- PCA.
- Point density within a cluster.

After the feature extraction, my steps are twofold: a first training procedure for the classifier and a classification.

In the first case, since the generated clusters are pretty big, they are iteratively displayed one by one, asking for user feedback. The data are then used to generate a .model file, which is loaded for future predictions.

Among the improvements, I found a way to calculate a score percentage which indicates whether a cluster is more likely to belong to one class rather than another.

Added a new correspondence rejection method based on the median distance of the correspondences.

Sunday, April 08, 2012

I ported a new correspondence rejection method from libpointmatcher. This method uses a new thresholding criterion: the median of the distances between corresponding points, multiplied by a user-specified factor, is used as the threshold for rejecting correspondences. A discussion of this method can be found in the paper “Simultaneous Localization and Mapping with Active Stereo Vision” by J. Diebel, K. Reutersward, S. Thrun, J. Davis, and Gupta. Next I will be adding a rejection method based on the normals at the corresponding points.
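The criterion is simple enough to sketch in a few lines. This is an illustration with hypothetical names, not the ported class itself: given the correspondence distances, everything above median × factor is rejected.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Returns the indices of the correspondences whose distance is at most
// median(distances) * factor; the rest are rejected.
std::vector<std::size_t> rejectByMedianDistance(
    const std::vector<double>& dists, double factor)
{
  std::vector<double> sorted(dists);
  std::sort(sorted.begin(), sorted.end());
  const double median = sorted[sorted.size() / 2];
  std::vector<std::size_t> kept;
  for (std::size_t i = 0; i < dists.size(); ++i)
    if (dists[i] <= median * factor)
      kept.push_back(i);
  return kept;
}
```

Because the median is robust to outliers, a handful of grossly wrong correspondences barely shifts the threshold, which is what makes this criterion attractive for ICP-style pipelines.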

Finally! I have figured out how to use push_relabel_max_flow. The BGL documentation is not the best (ordinary users won’t easily understand how it all works), but I have managed to run a few simple examples and it looks great.

So right now I will change the code a little so that it works with real point clouds. It won’t take much time, so I hope that in a few days I will be able to post some segmentation results using the PCL web viewer.

As we are implementing the ANF system, the pipeline has been slightly modified:

The ground and wall segmentation are now both performed in the planar segmentation block which can do this iteratively as long as large enough planes are being detected.

The object segmentation has been split into two blocks:

- Euclidean clustering will only use point location data for segmentation, and the parameters will be set to avoid over-segmentation. This automatically makes the step more prone to under-segmentation, but that will be handled by the sub segmentation block. This step will also limit cluster sizes, automatically classifying isolated points.
- The sub segmentation block will further divide the clusters found by Euclidean clustering, using additional information to split them into exactly one object per cluster as well as it can.

Lastly, a feature estimation block is added that will be gathering the information needed to feed the SVM classifier. The sub segmentation block may be combined or interchanged with the feature estimator depending on what information will be needed to do the segmentation properly.

The main interface for this pipeline and all of the stages are implemented with the exception of the sub-segmenter. What remains is refining some stages and most importantly: keeping the “automatedness” of the system. Each stage has its own parameters that need to be set dynamically depending on other information of the cloud. At the moment this is working for most cases. For those that it doesn’t: I hope to build a feedback loop so that the system can rectify its own parameters. If that doesn’t work out I will need to translate it to an intuitive parameter that needs to be input by the user.

I have updated *my roadmap* incorporating the latest developments.

Mattia and I are now working on a full implementation of the ANF system we have been doing for this sprint. We are writing the code in the same format as the tools in PCL’s trunk/tools/ with the only difference being that we moved all stages of the pipeline to their own separate implementation file so that we could work on it simultaneously. The pipeline overview and the interface between the different stages is nearly finished. Tomorrow I will post a more elaborate blog update with an improved graphical overview of the pipeline and update of my roadmap.

I’ve committed a working version of my vtkVBOPolyDataMapper that the outofcore_viewer is now using. This is the working version from my previous post.

I started breaking out the VBO functionality into the separate class vtkVertexBufferObject that’ll be used by the mapper, vtkVBOPolyDataMapper. I’m running into a few issues I need to resolve before committing and getting feedback from Marcus at VTK and the PCL group. The interface for the new vtkVertexBufferObject should be quite simple and handle the basic VTK objects. Will post another update once I’ve got this all working.

Hi everybody. I have written the code for building the graph. Right now it uses the triangulation algorithm from PCL, but I intend to write my own code for this later, because triangulation in PCL requires normals and their calculation can slow down the graph building. My variant will simply find the K nearest points and add edges between each initial point and its neighbours, as suggested in the article. Constructing the graph this way can result in several connected components, but this problem is easily solved by connecting points from different components with an edge.
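The planned K-nearest-neighbour construction can be sketched as follows. This is a brute-force illustration with hypothetical names (a kd-tree would replace the O(n²) search on real clouds); edges are stored with the smaller index first so the undirected graph has no duplicates.

```cpp
#include <algorithm>
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

struct P3 { double x, y, z; };

// Connect every point to its k nearest neighbours by an undirected edge.
std::set<std::pair<std::size_t, std::size_t>>
buildKnnGraph(const std::vector<P3>& pts, std::size_t k)
{
  std::set<std::pair<std::size_t, std::size_t>> edges;
  for (std::size_t i = 0; i < pts.size(); ++i)
  {
    std::vector<std::pair<double, std::size_t>> dist;
    for (std::size_t j = 0; j < pts.size(); ++j)
      if (j != i)
      {
        const double dx = pts[i].x - pts[j].x;
        const double dy = pts[i].y - pts[j].y;
        const double dz = pts[i].z - pts[j].z;
        dist.push_back({dx * dx + dy * dy + dz * dz, j});
      }
    std::sort(dist.begin(), dist.end());   // nearest neighbours first
    for (std::size_t n = 0; n < k && n < dist.size(); ++n)
      edges.insert({std::min(i, dist[n].second),
                    std::max(i, dist[n].second)});
  }
  return edges;
}
```

As noted above, the resulting graph may still have several connected components; a post-pass joining the components with extra edges would handle that.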

Right now I am inspecting Boost Graph Library for the necessary algorithms. I have found several algorithms for finding the maximum flow in the graph, but I think that I will use push_relabel_max_flow algorithm because it is the fastest one. I have never used BGL algorithms for finding maximum flow, so right now I am trying to run some simple examples for small graphs.

One more important thing is that we have decided to generalize the algorithm. It will allow the user to specify points that belong to the background/object, rather than only a single object point as in the base algorithm. The idea is based on the interactive application described in the article.

During pre-filtering, Florentinus and I want to take advantage of the structured nature of the point cloud to make a pre-segmentation of the cloud, based on searching for adjacent pixels with similar intensity characteristics. The first step is therefore to generate an image like the one shown below:

The NaN points are shown in red. The points are very scattered, but a pre-segmentation would immediately highlight the individual positions of the leaves. Among the algorithms studied, I took into account GrabCut as implemented in OpenCV (GrabCut is an image segmentation method based on graph cuts) and the graph-based segmentation discussed in this paper.

The method implemented in OpenCV is very powerful and segments the image by separating the background from the foreground. The result is not very useful in our case and, consequently, I haven’t investigated it further. The second method proved to be very powerful! It is based on the simple proximity of similar tonalities, and a simple result is shown in this figure:

Different colors represent different segments. Besides highlighting many details, the algorithm is very fast and the resulting segmentation could have a great practical implication.

After a chat with Jorge, Florentinus and I decided to conclude a first “release” of our filtering pipeline by the end of this week. Therefore, I will leave aside the pre-filtering (which will take some time to adapt to our requirements) and spend more time optimizing the steps that have already been tested.

I will be integrating some modules from libpointmatcher (https://github.com/ethz-asl/libpointmatcher) into PCL. In the next couple of weeks new methods will be ported from libpointmatcher to the CorrespondenceRejection module in PCL. Documentation and results will be added as well.

Below I posted some results of the B-Spline curve fitting algorithm. I haven’t found any existing code that can handle point-clouds like those in the image. The reasons for this are heavy clutter and noise inside and at the boundary (c,d), strong turns and concavities and regions where the boundary is quite close to another one (a,b).

After finishing the work I’ll have a talk with Radu to discuss implementation issues regarding NURBS, which will define how the curve fitting algorithms will be implemented.

Florentinus and I have agreed to divide the job of removing noisy points following a pipeline of this type.

(by courtesy of Florentinus)

The work done during these days was to improve the classification stage of the pipeline. This mainly consists of searching for features that characterize the errors and of making the SVM learning machine as flexible as possible. This required a great effort in terms of code analysis and research into the feature extraction methods already present in PCL.

What has been tested most is the VFH (link), which outputs 308 features for each cluster analyzed. Unfortunately, the results were really bad. The goal for the future is to find features that describe the surroundings of a cluster together with its intrinsic properties.

In the coming days I will also work to make SVM (based on libsvm) compatible with the other libraries of PCL.

I am testing the new embedded pcd viewer and showing the results of the objects clustering I have been working on.

The viewpoint of this pcd viewer does not auto-correct, which is annoying. When pressing R in the embedded pcd viewer it resets the viewpoint to the one described in the .pcd file instead of calculating an optimal one. This means that you need to manually open the .pcd file and modify the viewpoint there so that the initial view is already valid.

Edit: I spent 1.5 hours trying to do this... this quaternion system sucks big time. I will just ask Radu to update the embedded viewer :)

As for the actual work: I implemented a basic ground, wall and object segmenter. The ground and wall surfaces are removed using SAC segmentation with a plane model. These steps are really fast and leave only a fourth of the points for the remaining object clustering. There I apply an octree representation and Euclidean clustering to get to the clustering shown in the picture. The octree is more or less used like a voxel grid filter at the moment and speeds up the Euclidean clustering significantly. It also holds the density information, which will be useful to pass on to the classifier. For the benchmark data set there were no issues with objects being very close to one another. Next week I will improve the Euclidean clustering to also take varying point densities into account and be able to distinguish adjacent objects.

Recently, I came across a paper in which a statistical model is proposed to detect changes between two range scans. I am going to study it thoroughly and implement it.

Currently, PCL can detect geometric and intensity differences, with the Euclidean distance as the general criterion. I wish to improve this by adopting the model proposed in the paper.

I am contacting the author about the mathematical derivations in the paper and will post the progress later.

Here is a graphical overview of how the pipeline of the ANF system may be turning out:

This pipeline is fine-tuned for larger scale environments such as the Trimble data sets. For smaller scale environments the ground and wall segmentation steps could be omitted.

The process from left to right can also be seen as an interactive information gathering process where the information up to that point is being used to refine the search for new information. This is useful for both the speed and accuracy of the final results.

- The ground segmentation step will use very basic analysis of the scene and should be a fast step, removing a lot of points so that the other steps are processing faster as well. This step is likely going to be based on iterative plane fitting.
- Similarly, the wall segmentation will also remove a lot of points, easing further processing steps. It will however be more difficult to distinguish walls from objects of interest so slightly more advanced analysis is required. This step is likely going to be based on a region growing algorithm only passing very large regions.
- The object segmentation step is going to determine how the remaining points in the point cloud are making up objects. An important challenge is that it needs to be able to detect when two objects are very near to one another, where a region growing algorithm would fail. The step will also be gathering more information like point density, normals and curvature to use in its analysis.
- For each of the segmented objects the classifier will determine what type of object it actually is. It will calculate features such as VFH and use almost all of the information already gathered. This step is already implemented by using a Support Vector Machine learning algorithm and is working quite accurately.
- The final step is very specific to the application. For our current purposes we just need to remove the trees and moving objects from the scene.

So the idea is to do the information gathering as late as possible in order to optimize speed (late in the pipeline means less points to apply the algorithm to). But don’t move it too late: earlier in the pipeline is better for accuracy.

For the next few days I will be focusing on the object segmentation step here. More specifically: I will investigate normalized cut clustering.

In the last few days I have been working to improve the implementation and performance of the SVM machine learning. By providing a pre-labeled training set, I managed to get a classification performance of about 95%.

The trained classifier shows good performance as highlighted in the following screenshots:

In the next days I will deal with extracting segments of the cloud in order to reduce the number of points on which to perform the classification.

Florentinus, meanwhile, is working on finding the best features to be extracted in a cluster to improve performance and minimize mistakes.

Soon I will also test a k-nearest-neighbour classifier based on a kd-tree representation.

Hi, unfortunately I was ill last week and I’m struggling a little bit with paper deadlines right now. I’ll post some results from the paper this week and will integrate the code right after the deadline.

The SVM classifier is implemented and ready to be used and tested in our automated noise filtering chain. It is of course based on libsvm, and I created classes for training, testing and using the classification algorithm.

An interesting chat with Florentinus highlighted some new methods worth testing. They came up after reading this paper (more info on Florentinus’ blog).

In the next days I want to test the classifier for cluster recognition. Then I’ll start thinking about a pre-filtering process based on organized grid representations of Trimble’s datasets.

After reading some more papers on segmentation, classification and recognition, Mattia and I had another talk on the ANF system. We are now investigating and adapting ideas from this paper, which has a lot of similarities with what we are trying to achieve. We are thinking of splitting up the work as follows: I will work on the “information gathering” and Mattia will work on the “information processing”. For instance, the machine learning based classification will be done by Mattia and I will work on providing the classifier with enough features to classify on.

The new information gathering that I will be doing will likely belong in the pre-processor that I was working on already. However, the information that is useful to extract often requires other information and again a step-like system would develop. For example, the Trimble data sets start out with x, y, z and intensity information. This can then be used to calculate normals and curvatures. With this, simple features can be computed. After that, more sophisticated features, etc.

I am currently investigating the features module of PCL more in-depth and am already finding a lot of useful things for this sprint. The eventual ANF system will probably turn out to use and interact with almost all modules in PCL :)

I’ve updated the pcl_outofcore_process runtime to dump numerous pcd files into an octree. There are currently a few issues with this tool that need to be resolved:

- numpts isn’t written to the json file
- Given that numpts isn’t written, it’s hard to tell whether all points specified within the pcd files are getting written to disk
- Should we store bounding box info within the header of a pcd file so an entire cloud doesn’t require parsing?
- Should we write a pcd reader that can iterate through the points so the entire cloud doesn’t have to be loaded into memory? This’ll be useful when loading up clouds in the millions+

I’ve committed the octree_viewer from my last post with very basic VTK functionality. This required some additional methods that already existed within the pcl_octree. However, I’ve noticed that the displayed voxels aren’t cubic, which I think they should be. I’ll have to look into this further.

On to the more exciting things!

I’ve been in talks with Radu and Marcus Hanwell, one of the developers of VTK, on how we should move forward with our visualization tools. The current version of VTK is based on OpenGL display lists, a very old technology that was deprecated in OpenGL 3.0, removed in 3.1 (with support still available via the ARB_compatibility extension), and relegated to the compatibility profile introduced in 3.2. That being said, there are classes within VTK that use the newer technology, just not the pieces we’re interested in.

Why is this important? Well, because writing an out-of-core viewer doesn’t make much sense with display lists and requires a more dynamic implementation.

As VTK 6 evolves various newer OpenGL features will be integrated, vertex buffer objects (VBOs) among them. Until then, I plan on helping Marcus get these new features in faster by helping prototype and possibly develop the classes required. The good news is the VTK guys now have immediate testers with a simple test case, billions of points!

I’ve started to hack together a new VTK mapper, vtkVBOPolyDataMapper, to replace the vtkPolyDataMapper. I have a very barebones version working. It’s got a long way to go, makes quite a few assumptions and will need some love to work generically in the VTK framework. I’ll post more on this when I update the octree_viewer with the newer functionality.

Recently I have been working hard on understanding and configuring the most popular machine learning algorithms. The purpose is to use Support Vector Machines to generate a weighting value for clusters in a cloud, then to group the ones with similarities and finally remove leaves and ghosts.

I had also thought of using Artificial Neural Networks and started to implement the learning algorithm. But after discussing it with Jorge and Federico, they pointed me toward more sophisticated and better performing approaches, namely SVMs.

Results will come up soon.

I have been discussing work on ANF with Mattia.
Mattia will be building the more sophisticated segmentation system based on machine learning.
I will be working on a more basic segmentation system and will focus on the interaction between different classes and levels in the hierarchy of the complete ANF system.
I have updated *my roadmap* incorporating the latest developments.

Meanwhile, the TreeSegmentation class I was working on has improved further:

It now also uses a very basic shape and size classification. Note that the results are actually in the form of a weighting to each point, the above screenshot depicts the points above a certain threshold for this weighting.

I am not comfortable with the current method I use though. I want to look at an octree implementation where I can zoom through different levels of resolution and also make use of density information. Hopefully this will provide more accurate and faster results for the shape and size classification.

Hi everybody. I have committed the code for the Region Growing algorithm that uses the points’ color. I also did some refactoring of the base algorithm, so it now works faster. Next I’m going to implement a segmentation algorithm based on graph cuts.

Fitting a NURBS curve to real data in a generic way is quite tricky. I’ve developed an algorithm that fits NURBS curves in an error-adaptive manner, with respect to knot insertion and concavity filling. This means that in regions where the error (distance from the curve to the closest point) is high, control points are added by knot insertion and, simultaneously, the concavity at the control point is increased, bending the NURBS curve inwards. Not modeling this concavity constraint would lead to bridging of narrow gaps.

Please have a look at the video “Curve fitting” on my homepage: http://users.acin.tuwien.ac.at/tmoerwald/?site=4#nurbs

Moving forward on my journey to noise removal, I’m facing the problem of identifying point clusters for noise extraction. The problem is not easy at all, and in this blog post I show the frequency histograms meant to compare different features for cluster identification. Moreover, I want to find ranges of values of these features with the purpose of marking a cluster with a *matching percentage score*.

The next step is the use of a good method to build a classifier. I really would like to implement **Artificial Neural Networks**. I know that it’s not the shortest and easiest way, but it’s probably the most powerful and gratifying.

*Frequency Histograms representing the intensity distribution:*

*Frequency Histograms representing the cardinality distribution:*

*Frequency Histograms representing the Standard Deviation distribution of normal vectors inside the clusters:*

*Frequency Histograms representing the Standard Deviation distribution of curvatures inside the clusters:*

*Frequency Histograms representing the distribution of the eigenvalues calculated from the covariance matrices of each cluster:*

Following the feedback from Gordon, I revised the code I wrote. I will add more functionality to handle RGB point clouds, as well as spatial and intensity changes simultaneously.

I wrote a very basic pre-processor for the ANF system. The idea is to gather commonly used data that the majority of the other steps of the ANF system will want to use anyway. For the Trimble data sets it currently only performs normal estimation and appends this information to the point clouds, resulting in PointXYZINormal type clouds.

At the moment I am still working on the TreeSegmentation class, which will use almost all of the fields of PointXYZINormal. The classification steps for intensity and curvature are already finished, what remains are the steps for shape and size classification of clusters of points.

I intended to create a suitable unit test for the LUM class, however, I stumbled upon particular cases where the LUM class fails to give proper results. I ended up spending the last two days searching for the cause but was unable to find it. In order to satisfy http://dev.pointclouds.org/issues/623, I will now make the LUM class instantiate through a .cpp file instead.

Tomorrow I will continue with the ANF project again.

We try to decrease the number of elements in a point cloud by collecting the points into groups called *clusters*. For this purpose, the choice of the distance measure that links points into a cluster is very important.
The most common types are:

- **Euclidean**: takes into account the direction and magnitude of vectors.
- **Squared Euclidean**: accentuates the distance between entities and tends to give more weight to outliers.
- **Manhattan**: greater than the Euclidean distance, but the resulting clusters may appear less compact.

Once the method is decided, the strategy of classification can be *hierarchical* or *non-hierarchical*. A non-hierarchical method fixes the number of clusters a priori (like *K-Means* or *Self Organizing Map*). A hierarchical method starts from a number of clusters equal to the number of points and progressively reduces the number of clusters combining the calculated ones based on a “closeness” criteria.

Following these steps, the points are organized into single entities to which we want to attribute a physical meaning. Noise removal requires distinguishing ghost points and vegetation from the useful data. The idea is then to assign labels associated with a classification score. To do this we will train a classifier on a large amount of training data and analyze features like:

- cardinality;
- eigenvalue decomposition of the covariance matrix;
- intensity;
- curvature;
- normal variance.

Obviously, the cloud will be full of classification errors and mismatches. So, we will introduce a *linking policy* by which a leaf or ghost cluster must be surrounded by clusters of the same type in order to be removed. This further analysis has the goal of making the method more robust and flexible. To do this we need to define the “distance” between clusters and different criteria like:

***Local Measures***

- **Minimum or Nearest-Neighbour Method**: the distance between two groups is the minimum distance between all pairs of points belonging to the first and second group. The criterion generates groups with the shape of “extended chains” of scattered elements.
- **Maximum or Furthest-Neighbour Method**: the distance between two groups is the maximum distance between all pairs of points belonging to the first and second group. The criterion generates compact groups with very close elements.

***Global Measures***

- **Within Groups Clustering Method**: considers the average of all the distances between pairs of points in the first and second group.
- **Centroid**: the distance between two groups is determined by the distance of their centres of gravity.
- **Ward**: the clusters are aggregated so that the variance increment in the new group is the minimum possible (every time you add a connection the variance increases; we want to minimize this increase).

One of the geometric operations on NURBS is trimming, that is, cutting out holes or defining a boundary with a different polynomial degree than the NURBS itself. Trimming is nothing more than defining regions on the NURBS where it is visible or invisible. In our case we need it for defining the boundary of arbitrary surfaces and, furthermore, for intersecting NURBS of different polynomial degree.

To apply such boundaries to real data we want to fit NURBS curves to a point-cloud boundary. In the image below you see a test case of a boundary with some outliers in the middle (think of typical Kinect data). The curve is initialised with PCA with a radius equal to the first eigenvalue (iteration 0), and then successively refined and fitted. The fitting weight of the points (blue) is influenced by their distance (green) to the curve (red) using a Gaussian function, so that close points have high influence and vice versa.

To account for higher frequencies in the measurement, the degrees of freedom of the curve are increased by knot insertion (iterations 1 and 2). After a few iterations the curve approximates the data quite nicely (iteration 5).

The radius of the fitted curve is 0.5 with an overlaid sinusoid of peak amplitude 0.05.

Coming up next: Fitting of NURBS curves in the parametric domain of NURBS surfaces and trimming (on real data).

We are currently focusing on “binary noise”, i.e. noise that is defined by the (binary) existence of points. For this type of noise, the challenges in Automated Noise Filtering boil down completely to challenges in Automated Segmentation: if the AS performs ideally, the only further step to get to ANF is to apply the ExtractIndices filter in PCL. Hence I have added my latest work to the segmentation module in PCL: http://docs.pointclouds.org/trunk/group__segmentation.html.

Currently there are the base class AutomatedSegmentation and one derived class AutomatedTreeSegmentation. Each derived class from AutomatedSegmentation is going to represent one step in the total system and focus on one particular type of entity to segment from the scene. These different steps can then be used in succession to get to a complete ANF system. However, I aim to build these classes so that they can interact with one another in more complex ways (like an iterative design or combination with registration -> SRAM). More information on the classes can be found in their documentation as soon as docs.pointclouds updates.

Also, each of these classes/steps is built up from different substeps. For instance, the AutomatedTreeSegmentation performs intensity classification and curvature classification as substeps. I am still considering whether it would be interesting to separate these substeps into their own classes somehow. For these substeps it also holds that they may need to interact in more complex ways than just successive application.

I am hoping to converse with Mattia or other people who are interested to see if this is the most interesting/useful implementation of an ANF system. If you are reading this and have suggestions or feedback (both positive and negative) about this, don’t hesitate to drop me a line: f.j.florentinus@student.tue.nl.

Meanwhile I will continue working on the implementation of AutomatedTreeSegmentation since it is not finished. I will also spend time on http://dev.pointclouds.org/issues/614 and other PCL related clean-up tasks.

I’ve made some experiments with cylindrical NURBS and NURBS patches on real data. I’ve uploaded two videos to my homepage: http://users.acin.tuwien.ac.at/tmoerwald/?site=4.

I continue working on http://dev.pointclouds.org/issues/630.

I am still using Gordon’s data, but at this time I did more trimming and converting work. I added one more data member to pcl::SegmentDifferences which controls the threshold that the users could specify when looking at the intensity values.

So far, the new code has been working fine.

Reference

Target

Result

I’ve started to wrap my mind around VTK and the PCL Visualizer. I wrote an application similar to the octree_viewer using straight VTK. The following is a processed outofcore cloud with 4 LODs.

The grand idea of the ANF system as discussed during last week is going to take a while to fully take shape. Following Jorge’s suggestion, I am going to focus on the subproblem of vegetation first (trees in particular) and implement an easy system. The system will already be a multi-step system where the first two steps are:

1 Intensity classification

Trees generally have a low intensity rating compared to most flat-surface entities. Hopefully this step does not need any input parameters, i.e. all laser scanner type data sets have this property (TODO). This step is performed first because it is very fast and leaves a smaller exploration space for the other steps.

2 Curvature classification

Leaves generally have a high curvature. This step will likely also need to analyze the size of the areas that have high curvatures, and maybe their shape too. It could be useful to implement the size/shape detection as a separate step in ANF so other classifiers can also make use of it (TODO). This step has some parameters that need to be set, but it is likely that in the end this can be done fully automatically.

An interesting global input parameter / cost function would be the number of trees in the point cloud. This is hopefully easily determined by a user, and it allows for an iterative design where the ANF system could iterate and re-adjust its internal parameters in order to converge on the appropriate answer.

The system will also start using the weighting system where each step applies weighting to the points for how likely that point is part of a tree or not.

I have also been working on http://dev.pointclouds.org/issues/614 since my last blog post.

Hi everybody. I have tested the Region Growing algorithm that uses points color. So here are the results.

There are some more pictures in my folder in higher resolution. You can find them in “trcsweb\source\velizhev\images\COLOR”

After talking with Radu, I decided to use Very Sleepy to profile the performance of pcl::SegmentDifferences. The computing time was extremely large on the Trimble data; I could not wait for the program to finish before it used up all the memory.

The basic statistics show:

Filename: D:\PCL_TRCS\pcl_ssh\bin\TRCS_test_debug.exe

Duration: 31549.642000s

Date: Sat Mar 03 23:29:50 2012

Samples: 3619112

I wish I could have made a figure but I haven’t found a tool to convert the result to a reasonable graph. The output .csv file was not well generated. So, here I just show the screen shot from which you could clearly see which parts are the most time consuming parts.

After using a Region Growing clustering process based on Euclidean distance, I show a good result which will definitely be useful for the recognition of leaves on trees. In the image below, different colors mean different clusters. The next step is the use of a classifier to distinguish good and noisy clusters.

UPDATE: The inversion of points for NURBS cylinders is now fixed and takes the same time as for NURBS patches

This week I implemented fitting of periodic NURBS, namely cylinders. Unfortunately the inversion of points (i.e. finding the closest distance from a point to the NURBS surface) takes, for some reason I haven’t found yet, much longer than for NURBS patches. In the image below you can see some data points in blue. The NURBS cylinder is initialized using PCA. Then for each point the distance to the closest point on the NURBS surface is calculated (red lines), and the linear equation minimizing this distance is assembled and solved. As usual, a regularisation term models the ‘stiffness’ of the NURBS (magenta).

From the last chat meeting with Jorge, Radu and Federico (see Florentinus’ blog) I got to experiment with new ideas.

The proposed filtering step is based on the calculation of the covariance matrix of the point coordinates within a spherical neighborhood of radius R. Using an EVD (Eigenvalue Decomposition), the filtering policy is based on:

where the threshold is a user-defined constant (in my case 0.04). All the points satisfying the constraint are deleted from the cloud.

The second step of filtering uses a simple RadiusOutlierRemoval filter iterated twice. The results are shown in figure:

On the global cloud, the method showed minimal loss of non-noisy points but a high processing time (just over an hour).

This solution is therefore an excellent candidate for the removal of vegetation. In the next study I will try to segment the image so as to apply the filter only in the areas marked as “vegetation”. Hopefully, this will minimize the loss of details not meant to be filtered. Once I get good results I’ll optimize the algorithm to reduce the computation time.

Today I had a chat with Alexander and Mattia on segmentation. Alexander explained his work on Non-Associative Markov Networks for 3D Point Cloud Classification: http://graphics.cs.msu.ru/en/node/546. This type of classification could be very useful for our ANF system since we are working with quite specific noise types. These noise types would probably be adequately distinguishable through the machine learning that is used in this classification method. Alexander adds that the current implementation uses OpenCV, which would add a new dependency if it was implemented into PCL as such.

While I am gathering information like this and thinking of how to combine it into one big ANF system, I will also be working on the following roadmap entries for the next couple of days:

- Bugfix, patch and enhance the PCL filters module.
- Make all filters that are about point removal derive from FilterIndices instead of Filter.
- Move the getRemovedIndices() system from Filter to FilterIndices.
- Implement the getRemovedIndices() system for all the derived filters.
- Implement the MixedPixel class in PCL.

Last Monday I had a very useful meeting with Jorge, Radu, Federico and Mattia about the TRCS and ANF. A quick summary of the conversation:

- The system setup for TRCS-ANF is clearer now:
- The automated noise filtering system will perform analysis on the scene and use the results of the analysis to determine which filtering to apply and which parameters to use.
- The system could have any number of these analysis and filtering steps, where each step has a particular focus. Each step should minimize the removal/alteration of non-noise points, even though this limits that step’s ability to tackle noise; hence the idea of having multiple steps: widen the overall applicability of the system.
- Each step would have at most one settable parameter, like a slider that ranges from 0 to 1, indicating the “aggressiveness” of that step.
- Ideally the number of sliders of the ANF system would approach zero. This would likely only happen if an adequate cost function can be devised. The cost function could also be used to enhance the performance of some of the steps in the system by allowing an iterative design.
- The system can still be built in various ways, largely depending on what types of noise are in need to be tackled. For now we will focus on the noise of the Trimble data sets, namely: vegetation and moving objects noise.

- Brainstorm on the analysis type steps:
- Use segmentation to distinguish between different entities (both noise and non-noise). For instance:
- Use parametric modeling to segment the ground.
- Region growing for point clusters not connected to anything else, useful for moving objects noise.
- Investigate MRF segmentation: http://vision.deis.unibo.it/fede/3Dsegm.html
- Investigate region growing with smoothing constraints: http://www.pointclouds.org/blog/trcs/velizhev/index.php

- Use properties of points and determine new properties based on surrounding points:
- Make use of the intensity values of points in the Trimble data sets to distinguish entities.
- Compute normals, determine curvatures, useful for detecting trees.
- Investigate AMN classification: http://graphics.cs.msu.ru/en/science/research/3dpoint/classification

- Apply a very fast, simple conditional analysis that passes points that are definitely not noise or filters points that definitely are noise. The first steps of the system should be fast and simple like this. As the subset of unclassified points decreases, the complexity of the steps increases.
- Instead of binary removal of points, apply weights to points, describing the certainty that the point is noise. Different steps in the pipeline alter this weight accordingly. At the end of the (sub-)pipeline apply hysteresis and/or use one of the main system sliders to determine the actual removal.
- If possible: use change detection to further analyze the scene. Most interesting option: combine this with the registration process of the total system using an iterative design. Allows to link the intermediate results of noise weighting across different point clouds. Also ensures a symbiotic relation between registration and filtering: SRAM (Simultaneous Registration And Modeling).

- Brainstorm on the filtering type steps:
- The 3D bilateral filter and integral images in 2D and 3D would make powerful additions to PCL. Further investigation is needed, however, to determine their effectiveness on the Trimble data sets and this particular form of ANF.
- The mixed pixel implementation can be easily implemented in PCL. It is also useful for this ANF since it focuses on shadow point removal. Link: https://code.ros.org/svn/ros-pkg/stacks/laser_pipeline/trunk/laser_filters/include/laser_filters/scan_shadows_filter.h

Implementing all of the ideas above could become a huge project.
Tomorrow I will discuss with Mattia how to properly split up the system into subsystems and determine priorities for each of the subsystems.
The results from that conversation can be found on *my roadmap*.

I have opened up a new issue on http://dev.pointclouds.org/issues/630 and still working on it.

The following diagrams the general framework PCL received from Urban Robotics for use in the out-of-core project. The diagram is broken up into three parts:

Creation

- Points of type PointT are added to the octree_base data structure. This data structure is in charge of managing the underlying child nodes and subsequently divided data.
- As the points are subdivided, octree_base_nodes are created containing a random subsample or LOD of the points contained within each node (branch). These nodes are in charge of managing bounding box and meta data on disk and hold payload data read from and written to disk, but do not handle the lower level reads/writes.
- Once a max depth or leaf node is reached a container type is created to manage disk or ram access. These are currently the only types of containers available within the framework.
- The disk containers handle the low-level disk I/O.

Directory Structure

- At the top level of the directory structure lives a .octree file containing the octree depth or LOD, number of points at each LOD and various other bits of meta data. This maps to the octree_base.
- Each directory below the top-level root directory maps to an octree_base_node. Each node directory contains a .oct_idx file providing the node’s bounding box and LOD data. Leaf nodes have no children (child directories) and are found at the max depth of the tree, providing access to the original payload data (not a subsample).

Query

- When reading or querying the tree the octree_base provides an interface to the underlying data structure.
- Querying the tree is accomplished by providing a bounding box that intersects with the underlying octree_base_nodes. These octree_base_nodes provide access to the point data via containers or filepaths containing the binary point data.

Stephen and I have been documenting and refactoring the underlying code and are at a point where we can start investigating some of the more interesting features to be implemented.

In addition I’ve started to commit tools that’ll be useful in the processing of pcd files for use in the framework.

Hi everybody. I’ve made it. Not without help from Radu, but I finally committed the code. I found out how those gTests work and wrote some for my class. I also solved the problem with line endings (MSVC was using CR+LF instead of LF).

There was one more interesting thing about the code I wrote. I am using a vector of lists to store the indices of segmented points: std::vector<std::list<int> >. I was very surprised when Radu told me that errors occurred during compilation, because I had manually assembled the library on my PC without problems. The cause was a missing space: GCC could not compile std::vector<std::list<int>>, because in C++03 the adjacent “>>” is parsed as the right-shift operator rather than two closing template brackets.

Right now I’m going to prepare the second variant of the RegionGrowing algorithm, taking into account all those difficulties that I met on my way. I think it will be easier and much faster because now I have more experience.

I’ve submitted the templated version of NURBS fitting to pcl trunk and tested it with a small program.

For the last couple of days I have been working on http://dev.pointclouds.org/issues/614. I have expanded the FilterIndices base class with the following new systems:

- FilterNegative
- RemovedIndices
- KeepOrganized
- Filtering for NaN and Inf

The latter is not actually part of the base class; the derived classes may implement it if they so choose. The reason for that is to give meaning to the difference between FilterNegative and RemovedIndices: FilterNegative only inverts the conditions of point removal for the real points, while RemovedIndices also keeps track of the points removed because of NaN or Inf.

For the next couple of days I will upgrade the filters that can use these new systems to do so.

I got back on track after my graduation holidays. I’m currently studying the Integral Images approach and how to apply the technique to point clouds. Nice results are expected from Tuesday.

For every point in the source cloud, the normal and the K nearest points in the target cloud are computed. Among these K points, the one with the least distance to the normal (point-to-line distance in 3D, http://mathworld.wolfram.com/Point-LineDistance3-Dimensional.html) is considered the corresponding point. Results for a test dataset are shown below. Two parallel planes differing in their y coordinates are created. Correspondences are shown by connecting lines. Implementation files and a test program are available in the trunk.

Points on the planes shown in white dots. Correspondence shown by red lines.

The most eligible PCL filters for the specific noise removal in the Trimble data sets have already been discussed in previous blog posts by me and Mattia. The filters described in this blog post are not really suitable to be independently tested on the Trimble data sets, but a quick summary would be useful. While I was analyzing these filters I stumbled upon minor bugs, API inconsistencies and typos. The following analysis will summarize functionality, time complexity and possible improvements.

1 Filter

Function: Base class for almost all filters. Inherits from PCLBase, which manages a point cloud.
Runtime: N/A
Notes: Currently manages removed_indices, which is only useful for filters that are about point removal, and only a few of those filters actually use this functionality. Better to move this functionality to the FilterIndices base class.

2 FilterIndices

Function: Base class for filters that are about point removal. Inherits from Filter; the added functionality is being able to forward the indices of filtered points instead of the points themselves.
Runtime: N/A
Notes: Some filters still inherit from Filter that could easily be upgraded to inherit from this class.

3 Clipper3D

Function: Base class for BoxClipper3D and PlaneClipper3D.
Runtime: N/A
Notes: Not officially part of any API module.

4 ExtractIndices

Function: Extracts a set of indices from a point cloud as a separate point cloud.
Runtime: O(n); iterates through all indices once, not performing any real computations.
Notes: Uses setNegative instead of the inherited getRemovedIndices for inversion purposes. May be upgraded to inherit from FilterIndices instead of Filter.

5 PassThrough

Function: Passes certain elements of a point cloud based on constraints for one particular dimension. Can act on any dimension of any PointT, not just spatial dimensions, but only on one dimension at a time.
Runtime: O(n); iterates through all indices once, not performing any real computations.
Notes: Has setFilterLimitsNegative and getRemovedIndices for inversion purposes. May be upgraded to inherit from FilterIndices instead of Filter.

6 CropBox

Function: Passes certain elements of a point cloud based on constraints for their spatial dimensions. The constraint area is always a box but can be scaled, rotated and translated to any extent.
Runtime: O(n); iterates through all indices once, performing minor computations.
Notes: Does not use the inherited getRemovedIndices method and has no inversion system.

7 CropHull

Function: Passes certain elements of a point cloud based on constraints for their spatial dimensions. The constraint area is defined by a polygon structure.
Runtime: O(n·m); iterates through all indices and through all polygon points, performing medium computations.
Notes: Uses setCropOutside instead of the inherited getRemovedIndices for inversion purposes.

8 PlaneClipper3D

Function: Checks points against constraints on their spatial dimensions. The constraint area is a half-space, defined by a plane in 3D. Can be used on separate points, lines, planar polygons and point clouds.
Runtime: O(n); iterates through all indices once, performing minor computations.
Notes: Not part of any API module.

9 BoxClipper3D

Function: Checks points against constraints on their spatial dimensions. The constraint area is always a box but can be scaled, rotated and translated to any extent. Can be used on separate points as well as point clouds.
Runtime: O(n); iterates through all indices once, performing minor computations.
Notes: Not part of any API module. Two virtual methods are not implemented. The point cloud implementation is almost identical to the CropBox functionality.

10 ProjectInliers

Function: Project certain elements of a point cloud to a predefined model. The possible models are defined in the sample_consensus module. Only moves points, does not remove points. Runtime: O(n); iterates through all indices once, performing medium computations.

11 RandomSample

Function: Downsamples a point cloud with uniform random sampling. Runtime: O(n); iterates through all indices once, performing minimal computations. Notes: Does not use the inherited getRemovedIndices method. Does not use the inherited setIndices method.

For the next few days I will be tackling some of the typos and minor issues and will be adding the “bigger” issues on dev.pointclouds.

Hi everybody. I have committed the source code of the RegionGrowing algorithm. Right now it is disabled because it still needs a few fixes, but everyone who is interested can take a look at it.

I wrote one more variant of this algorithm. It takes the color of the points into account. It also has a good approach for controlling under- and over-segmentation. The idea is very simple: if a segment has fewer points than the user wants, the algorithm finds the nearest neighbouring segment and merges them together. I want to test it a few more times. The detailed description can be found in the article “Color-based segmentation of point clouds” by Qingming Zhan, Yubin Liang and Yinghui Xiao.

One more interesting thing about the committed code. During testing I found it too slow. I was very surprised when my profiler said that the problem was in std::vector&lt;bool&gt;. I didn’t know that it packs booleans one per bit, causing a loss of speed when accessing elements. Anyway, I solved this problem by simply changing the value type.

My next step is to write some unit tests for the algorithm and make a tutorial.

The last week I worked on global optimization of objects with several fitted NURBS surfaces. As mentioned in earlier posts, there are several problems when fitting several NURBS to C^1 continuous regions of an object, like overlapping, gaps and other intersection and alignment combinations.

Until now I was fitting C^1 continuous regions sequentially, each with its own linear equation. The key idea of global optimization of NURBS is to assemble all NURBS fitting equations of one object into one system of linear equations. This makes it possible to define relationships between NURBS, like the closing boundary constraint, which basically requires that a point on one NURBS lies on a point of another NURBS. This is especially visible in the ‘Boxes’ videos available at http://users.acin.tuwien.ac.at/tmoerwald/?site=4.

The points for closing the boundary between NURBS are also used for trimming them. Since those points are by definition the outline of the sub-point-cloud of the C^1 continuous region, they are simply added to the triangulation algorithm.

For triangulation the established Delaunay triangulation is used (http://www.sintef.no/Projectweb/Geometry-Toolkits/TTL/). The textures are captured from the RGB image by simply projecting the vertices into the camera plane. This causes the problem that the texture of an occluded area is the same as that of the occluder. To solve this problem I want to implement a z-buffer to check which surface is the nearest.

Coming up next:

- Multiple views for complete models.

The current bilateral filter in PCL acts only on the intensity values of the points in a point cloud and is therefore not interesting for the desired noise removal in the Trimble data sets. However, analysis of this implementation will help in understanding a new implementation and gives an indication for the time complexity that can be expected.

The current version has complexity O(N·k), where N is the number of points to iterate through and k is the average number of neighbors found for each point.

For each point pair it will calculate weights based on distance and intensity difference using gaussian kernels. It uses this information to only alter the intensity values of the points. For more information: C. Tomasi and R. Manduchi. Bilateral Filtering for Gray and Color Images. In Proceedings of the IEEE International Conference on Computer Vision, 1998.

The neighbors are found by a radius search (currently only tested with kd-tree search) where the radius is σ_s. The searching is the most time consuming part of the algorithm and the parameter σ_s greatly determines runtime.

σ_s | Time (s)
---|---
1 | 21
5 | 68
10 | 203
15 | 426
25 | 1116

Increasing σ_s also increases the area of effect of the smoothing, as can be seen in the following pictures:

original, σ_s = 5:

σ_s = 15, σ_s = 25:

The above results all have a σ_r of 1000. Reducing this number reduces the kernel effect and gives seemingly similar results as reducing σ_s. The value of σ_r has no effect on the computation time.

This bilateral filter could already be considered useful for the Trimble data sets since the intensity does have some noise in it. With σ_s = 15 the noise is significantly removed whilst not reducing detail on edges. The runtime is however very long. Hopefully a new searching algorithm will significantly reduce this. Furthermore, it can be noted that the code can easily exploit parallelism since there is no conditional branching and few dependencies.

In conclusion: the filter is very powerful and elegant, requiring few input parameters that are also easily understood. The upgrade to a spatial bilateral filter for 3D will likely be worthwhile, although the drawback (for now) will be its runtime.

Among all the algorithms, the StatisticalOutlierRemoval filter is definitely the best, although it still has notable faults such as the elimination of good portions of the cloud. Thanks to the suggestions of Jorge and Federico, I spent some time considering optimizations and finding out how far we can improve the algorithm.

Two subjects are under study:

- Optimizing the search for adjacent points with the study of integral images.
- Optimizing the search of noisy points.

For the last three days I focused on the second point. First, it is important to consider how StatisticalOutlierRemoval works:

- For every point of the cloud, the average Euclidean distance from the target point to a set of N neighboring points is computed.
- Then the variance and standard deviation of all these mean distances are estimated, resulting in a bell-shaped distribution similar to the Gaussian one.
- The algorithm iterates again over the cloud points and deletes all those which fall outside a certain variance range.

A major drawback is that this algorithm does not take into account the direction of the calculated distances. A good idea is to change the previous filter by introducing the covariance matrix and the eigenvalue decomposition:

- For every cloud point, the vector of mean distances to the neighbors within a fixed radius R is calculated.
- The covariance matrix is calculated from the measurements along the x, y and z axes and their expected value.
- From the covariance matrix, the eigenvalue decomposition is performed and the smallest eigenvalue is taken into account (NB: the eigenvalue with the largest value corresponds to the axis with the greatest variance; it is therefore the variance of principal component 1. The other eigenvalues are the variances along the other “weak” directions. A large eigenvalue indicates that the query point is only slightly connected to the other points along a direction, and vice versa for a small eigenvalue. Taking the smallest eigenvalue we can assess the best “connection” level with the rest of the point cloud).
- Variance and standard deviation are extracted from the eigenvalues, resulting in a bell-shaped distribution similar to the Gaussian one.
- The algorithm iterates again over the cloud points and deletes all those which fall outside a certain variance range.

From a theoretical point of view, the changes should remove only the points that are weakly connected to the rest of the cloud along the three directions x, y and z. After various tests, and despite the theoretical validity of the approach, the attached picture shows that the real results are not really promising, as points are removed almost exclusively from the “good” part of the cloud.

The ghost points are still present and the points in the corners are particularly affected (see the building facade and the car).

My final week in this quarter is approaching and I am under a bit more pressure. Anyway, I am planning to play with more data from Trimble and try to gain some ideas for the noise removal project.

Today I finished the quantitative filter analysis benchmark algorithm, which takes the resulting cloud of a filter, compares it against the target benchmark cloud using an octree and returns an error value composed of three terms.

The first term ranges from 0 to 1, denoting the amount of noise remaining, i.e. how good the filter is at removing the noise we want it to remove. The second term increases if additional noise is generated by the filter. The third term increases if non-noise/desired points are removed by the filter. If the resulting sum of these terms becomes equal to or greater than 1, the filter is deemed useless. Because of this interrelationship the last two terms are scaled by a percentage of the total desired points in the cloud. This value (currently 10%) may still be changed after further analysis. Also, the check for removed points currently does not take point location into account; for instance, if the desired points were uniformly downsampled to 90% of the original, the error would be 1 although the actual result would not be that useless.

For the next few days I will be finishing up on the PCL filter testing using these new metrics.

Today I finished the benchmark target point cloud that has its noise manually removed. After discussing with Jorge, it was decided that we are currently only focussing on removal of points and not smoothing of points.

Next I will be working on the comparison algorithm that will return a number describing the success of noise removal. I will sync up with Shaohui since this is a particular case of change detection.

I have prepared a detailed report on the computational speed of the filtering methods RadiusOutlierRemoval and StatisticalOutlierRemoval. To do that, I performed the tests on two point clouds: *A* of 3.7 million points and *B* of 24.4 million points. The laptop I used runs Linux Kubuntu 11.04 with an Intel Core Duo P8600 2.4GHz and 4GB RAM.

**RadiusOutlierRemoval**

A for loop iterates through all the N elements of the cloud (complexity O(N)). Within the same cycle, all points within a radius r are found to check if the query point has a sufficient number of neighbors. Assuming that the search algorithm for the points within a radius r (‘radiusSearch’) is brute-force (O(N) for unordered point clouds), the computational complexity of this method is O(N·N).

The table has been constructed by varying the searching radius of the points, and the results are expressed in seconds:

Radius (mm) | Cloud A (sec) | Cloud B (sec)
---|---|---
10 | 12 | 24
20 | 24 | 43
50 | 93 | 151
100 | 320 | 530
200 | 1569 | 2200

By increasing the searching radius, the computation time grows very fast. This means that the search algorithm dominates the filtering speed of this method.

**StatisticalOutlierRemoval**

For this filter, a for loop iterates through all the elements of the point cloud (O(N)). Then, still within the same cycle, the method ‘nearestKSearch’ searches for the closest *meanK* points to the query point (O(N·meanK)). After that the average distance is calculated to obtain the Gaussian distribution (O(N)). Finally, the filtering ends up taking the variance analysis into consideration (O(N)). Thus, the computational complexity is approximately O(N·(N·meanK) + N + N).

Num of neigh | Cloud A (sec) | Cloud B (sec)
---|---|---
10 | 12 | 25
20 | 17 | 33
50 | 33 | 65
100 | 186 | 134
200 | 1600 | 406

**Conclusion**

The above results show that the search algorithms are the most time-consuming part of the classes. Therefore it’s very important to develop a spherically ordered search algorithm in order to optimize any type of filter operation that requires searching for surrounding points: ‘SphericalOrganizedNeighbor’.

I’ve started to dig through Urban’s code a bit more, refactoring where possible. The code is now broken out into a few more manageable pieces. I’ve also started commenting and documenting the portions I’ve walked through.

Radu and I welcomed Stephen Fox today, who is working on the URCS code sprint. Stephen’s been brought up to speed and we’ll now have two minds diving into the world of out-of-core visualization.

I committed to the trunk a pcl_outofcore module. It’s unclear to me at the moment if this will eventually be rolled into the existing octree module.

Last friday I had a very useful chat with Radu, Federico and Mattia about the TRCS and ANF. Unfortunately Jorge was not able to be present.

A quick summary of the conversation:

- For the time being there are 3 tasks at hand that Mattia and I can work on:
- Test current PCL filters on Trimble data sets.
- Bugfix and patch/modify filters that do not perform as they should.
- Research and brainstorm for new filter implementations.

- For further filter testing during this TRCS, a more quantifiable error metric is useful. Radu suggested creating a target point cloud that has the noise manually removed and comparing that target with the filter results. This needs to be further discussed with Jorge since he will know best what noise needs to be removed. Another topic of further discussion is that noise is currently defined solely as points to remove, though point cloud smoothing is also interesting. Alexandru-Eugen Ichim is currently working on an algorithm also used for point cloud smoothing: http://www.pointclouds.org/blog/tocs/aichim/index.php.
- Radu mentioned that shadow point removal is easily implementable and already being used on their PR2. Related papers: http://researchcommons.waikato.ac.nz/bitstream/handle/10289/3828/Mixed%20Pixel%20Return%20Separation.pdf and http://www.robotic.de/fileadmin/robotic/fuchs/TOFCamerasFuchsMay2007.pdf.
- The current PCL bilateral filter only changes intensity. A new filter implementation based on the bilateral filter would act on the 3D coordinates. Federico is very knowledgeable in this field. A link that came up during the chat: http://people.csail.mit.edu/sparis/bf/
- Federico mentioned the topic of integral images in 3D, which would be useful for the 3D bilateral filter and for fast filtering. Mattia has shown interest in working on this implementation.
- For those filters that use a searching algorithm, which are the more powerful filters, the searching is currently the most time consuming aspect. Michael, Radu and Suat are discussing the possibility for adding a SphericalOrganizedNeighbor search, useful for LIDAR scans.
- For vegetation removal: remove areas with high curvature; analyze neighborhoods differently.
- For LIDAR scans: take note that point cloud density is not uniform.
- For exploiting GPU parallelism: implementations will stay in trunk until PCL 2.0.

I have updated *my roadmap* and will start work on the new error metric; creating the benchmark target (segment from Statues_1.pcd) that has its noise manually removed.

Hi, I’ve committed the code for NURBS fitting to svn last week. There’s still some clean-up necessary, but the code should work fine!

I’m now starting with optimization methods for reconstructed NURBS surfaces: After fitting a set of NURBS to a point cloud, a couple of things have to be done to make a nice CAD model out of it. E.g. gaps have to be closed, intersecting NURBS need to be trimmed, ...

To deal with these issues I’m working on a global optimization approach, where I solve a linear equation in the least-squares sense, so that the NURBS still approximate the point cloud while meeting the constraints (formulated as weak constraints) mentioned above.

Hi everybody. I finally managed to test the algorithm on the Trimble datasets. So here are some results.

A first analysis of the filters available in PCL showed that only two can be considered almost suitable for the removal of “ghost points” and vegetation. Though a good setting of the parameters can return good results for the removal of shadows and moving objects, the results were not as satisfactory for the removal of vegetation. As a point of reference I took a cloud in which the trees are covered with many leaves. The goal was to minimize the leaves, which appear as small groups of flying points in the PointCloud.

**Results with RadiusOutlierRemoval** (on the right the original cloud, on the left the filtered one):

**Results with StatisticalOutlierRemoval** (on the right the original cloud, on the left the filtered one):

In conclusion I can say that neither of the filters is actually able to offer an intelligent removal of vegetation without damaging the “good” background of the point cloud.

I have started validating the existing registration algorithms on Trimble data. I am beginning with validating Point-to-Plane ICP on the entire dataset. It’s going to take a few hours for the algorithm to finish given the size of the dataset. On the implementation side, I have implemented correspondence estimation based on normal shooting. A new class CorrespondenceEstimationNormalShooting should be available in the trunk shortly. The reference for this method is “Efficient Variants of ICP”, as mentioned in my previous blog post.

Spent some time with Radu going over some of the issues related to the current PCL Visualizer. We were able to reduce memory usage and improve rendering speed for larger datasets. In the current implementation we had multiple copies of the cloud used by PCL and VTK.

For the time being we updated the visualizer to run in immediate mode which should speed things up significantly, while taking a hit during the creation of the display list. This won’t work for applications which require a more interactive session, i.e. filtering.

I just came back to NY from a conference in CA. Currently, I am coding in order to test some thoughts. This time, I am looking at the intensity difference between two corresponding points in two inputs. In order to figure out the corresponding pairs, I adopted a nearest neighbour search similar to the one used in pcl::SegmentDifferences. This time, I tried to ignore the location difference, so if the distance between two matched points was too large, the pair was excluded from any further processing.

I actually care most about the two papers on the floor in the target. The result is the intensity difference between the two inputs. We can see the papers were successfully detected because they have higher values. We can also see some other major differences were detected. Please ignore the noise in the background this time.

Reference

Target

Result

For the purpose of filtering ghost points, Florentinus and I decided to take the Statues_1 dataset as reference. Inspecting that scenario, we marked some points as possible ghost noise, highlighted in the following images:

The inspected methods are:

- PassThrough< PointT >
- RadiusOutlierRemoval< PointT >
- ConditionalRemoval< PointT >
- StatisticalOutlierRemoval< PointT >
- VoxelGrid< PointT >
- ApproximateVoxelGrid< PointT >

It turned out that only two methods can be successfully used for our purpose: *RadiusOutlierRemoval* and *StatisticalOutlierRemoval*.

**RadiusOutlierRemoval**: the user specifies the number of neighbors that every point must have within a specified radius to remain in the PointCloud. Because the filter is based on a fixed radius, many points on the object contours are deleted. To obtain a good filtering result, the “good” objects are also affected.

**StatisticalOutlierRemoval**: for each point, it computes the mean distance from it to a specified number of neighbors. By assuming that the resulting distribution is Gaussian with a mean and a standard deviation, all points whose mean distances are outside an interval defined by the global distance mean and standard deviation are trimmed from the dataset.
This is so far the best method for ghost deletion; however, it strongly depends on the parameters and the filter often removes portions of small “good” objects.

This is my first blog post for TRCS and I am pretty excited about this project. I have been getting the hang of the registration pipeline in PCL and understanding the various modules over the last couple of days. I read the paper “Efficient Variants of ICP”, which I recommend for people interested in understanding the variants of ICP. It gives a good idea of how to customize the different modules of the PCL registration pipeline to suit your dataset, and it also helps in understanding the registration pipeline in PCL. We had a discussion on the modules where new algorithms can be added and have shortlisted some for now. I will be working on pairwise registration to begin with.

Mattia and I have extracted a noisy segment from Statues_1.pcd that we will use as a benchmark to test the different filters in PCL. We have also constructed a timing benchmark that allows us to measure an algorithm’s speed more or less independently of platform. The desirable noise reduction and undesirable deformation are currently measured by our own (subjective) grading. In order to speed up the work, the extracted segment was stripped of NaNs and in the process lost its organized structure.

I set up an initial testing project for myself a while ago. Finally, I got great testing datasets. Trimble provided us with more than 2G of data and Gordon collected several point clouds of his own garage. Since there are a lot of NaN values in the Trimble data that could be removed by certain PCL functionality, but I am kind of ‘lazy’ and have other options :D, I decided to start with two of Gordon’s data files, whose details are shown below.

- reference.pcd: baseline scan
- change.pcd: items on the floor moved a few cm, door moved, item moved from floor to table.

I have been studying how the octree-based change detector works. Basically, a double-buffer structure and an XOR bit pattern are used to decide which points are in the target but not in the reference. If you want to know which points are in the reference but not in the target, you just need to switch their roles.

Here are some first simple testing results. The major time-consuming part is not the change detection stage, but loading the data.

Reference

Target

Result

Everything is ready to start testing the PCL filters on Trimble data. I plan to talk with Florentinus, Jorge and Radu to define the “noise” and the expected accuracy of the filters.

A few days ago I was thinking about how best to organize the code, its structure, and the dependencies between classes. Because there will be several segmentation algorithms, I have decided it would be better if all of them inherit from some base class named Segmentation (or something like that). Based on these thoughts I have written the code and even managed to test it on some of the datasets I had.

Synthetic clouds

Noisy synthetic clouds

Real clouds

Point cloud that I found on the local forum

I hope that at the end of this week I will be able to commit the code. But right now I’m gonna run more tests and will give the code a proper form.

I am currently downloading data sets that Jorge provided for further testing. For the next couple of days I will test the currently existing PCL filters on them and analyze the type of noise in the sets. I will also attempt to set up a more detailed roadmap for the next phase of the sprint.

I’ve successfully compiled and run Urban Robotics’ octree code and unit tests. In doing so I created a new library named pcl_outofcore (the name is likely to change), which gives me a test bed for compiling their code.

There were minor code changes, including switching the unit tests from Boost to GoogleTest:

```
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from PCL
[ RUN ] PCL.Octree_Build
[ OK ] PCL.Octree_Build (1584 ms)
[ RUN ] PCL.Bounding_Box
[ OK ] PCL.Bounding_Box (5 ms)
[ RUN ] PCL.Point_Query
[ OK ] PCL.Point_Query (356 ms)
[ RUN ] PCL.Ram_Tree
[ OK ] PCL.Ram_Tree (482 ms)
[----------] 4 tests from PCL (2427 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (2427 ms total)
[ PASSED ] 4 tests.
```

This created a tree of files on disk that represents the octree. The depth of the directory structure is the depth of the tree. Each directory represents a branch or tree node:

```
├── 0
│ ├── ...
├── 1
│ ├── ...
├── 2
│ ├── ...
├── 3
│ ├── ...
├── 4
│ ├── ...
├── 5
│ ├── ...
├── 6
│ ├── ...
├── 7
│ ├── ...
├── ade37c05-a2bb-4da4-8768-0aaa4f67a0e7_node.oct_dat
├── tree_test2.oct_idx
```

Within each directory (node) we’ll find a JSON formatted metadata index file (oct_idx), binary point data (oct_dat) and multiple directories which are the node’s children:

```
{
"version": 2,
"bbmin": [0, 0, 0],
"bbmax": [1, 1, 1],
"bin": "ade37c05-a2bb-4da4-8768-0aaa4f67a0e7_node.oct_dat"
}
```

I’ve also updated my roadmap which now contains a bit more detail on where I’ll be going with a refactor of the current codebase.

I have started looking at how to do change detection based on the Hausdorff distance. Basically, what I wish to do is look for the nearest neighbour in the reference point cloud of each point in the target point cloud and then calculate the Euclidean distance. A threshold, specified by the user, then segments out the differences. The whole idea is pretty similar to what we have now in pcl::getPointCloudDifference. I want to add this functionality to the pcl::octree module.

I’ll be on my snowboard for the next week, so don’t be surprised if I don’t respond until Monday 6th of February.

I’ve now got NURBS fully functional and integrated into PCL. But since I’m on holiday next week, I will commit it to the svn repo when I come back, so that I’m available just in case of problems.

Hi everybody. Today I finally figured out how to post blog entries. I was also able to install Python and Sphinx on my PC, so from now on I don’t have to commit the rst files to see the result.

A little late, I also figured out the blogging system. I had a couple of exams last week, the last one yesterday, so I’m now ready to start.

Regarding my plan for the next couple of days/weeks: I’m implementing NURBS basis functions so that afterwards I can modify my NURBS fitting functions to remove the dependency on openNURBS.

I am still running and hacking the current example of the octree-based detector. So far, I have not been quite sure where I should go with it yet. Radu and Gordon gave me some suggestions on methods based on:

- change detection in geometry space (e.g., XYZ)
- change detection in color space (e.g., RGB)
- combination of both

I was looking for some published literature on octree-based methods. Now, following the new suggestions, I would like to change my direction and find something new and interesting.

Today I wrote a small program meant to test the filter functions. It makes it possible to compare the differences in a point cloud before and after the filtering operation; moreover, parameters can be changed without recompiling the program.

I have not spent a lot of time on TRCS since my last update; I am currently finishing up some work here and will not be spending full time on TRCS during the upcoming week. I have discussed initial approaches for ANF with Mattia and Jorge and have slightly reworked my roadmap. Currently I am working on:

- Get to know the filters module in PCL; the code structuring, the different filters in there, how to use them, when to use them, basic understanding of their workings.

With the availability of Urban Robotics’ octree-based point cloud format I’ll be doing the initial integration of their work into PCL. I’ve made a few updates to my roadmap related to Urban’s octree format as well as some additional visualization tasks related to OpenGL and VTK.

In addition to Urban’s code I’ve started doing quite a bit of research, trying to find the most relevant references related to the topic of out-of-core visualization. This topic spans a large problem set where octrees play a key role, but they are not the be-all and end-all solution.

The list of references on my blog is sure to grow as the sprint moves forward, but let’s get started with some of my initial findings:

- Hanan Samet

- The Design and Analysis of Spatial Data Structures
- Applications of Spatial Data Structures
- Spatial Data Structures
- Claus Scheiblauer, Michael Wimmer and N. Zimmermann
- Oliver Kreylos, Gerald W. Bawden and Louise H. Kellogg
- Rico Richter and Jürgen Döllner

I started learning how to use Subversion and Sphinx. Tomorrow I will meet some PhD students of my university; I would like to collaborate with some of them and get access to some tools (cameras) to test the algorithms I will develop.

Today I managed to get Sphinx running and have been updating my personal and roadmap pages. For the remainder of this week I will be working on the following entries of my roadmap:

- Get familiar with the blogging system, committing code, my mentor(s), my co-worker(s) and the other people of the TRCS.
- Gather information on the latest/best/most interesting work in the field of ANF.

``` |