PCL Developers blog

Pararth Shah

This is my personal page

email: pararthshah717@gmail.com
project: Point Cloud Registration in PCL
mentor: Michael Dixon (Jochen Sprickerhof [Stéphane Magnenat, Dirk Holz, Nico Blodow, Vincent Rabaud, Radu B. Rusu])

About me

I’m a student at the Indian Institute of Technology Bombay.

My hobbies include:

  • Coding
  • Debugging
  • Writing documentation

Project summary / Roadmap

To aid in the development of new feature descriptors and registration algorithms, we plan to create a set of tools to automatically test different algorithms and compare their speed and effectiveness. We will then implement several new feature descriptors and registration techniques that have been proposed in recent scientific literature and evaluate them on a set of benchmark data sets. All new code will be thoroughly documented and tested, and we will create a series of tutorials to help new users understand how to apply these new features and registration algorithms in their own code.

Click here for a more detailed roadmap

Recent status updates

Tutorial For Using The Benchmarking Class
Monday, August 15, 2011

I have added a tutorial which explains the functionality of the FeatureEvaluationFramework class for benchmarking feature descriptor algorithms. The tutorial also walks through a sample use case: determining the effect of search radius on FPFHEstimation computations.

I have also described how the framework can quite easily be extended to cover testing of many (ideally all) of the feature algorithms.
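
To give a flavour of that use case, here is a minimal sketch of computing FPFH features at several search radii with the standard PCL classes (the file name, the radii, and the normal-estimation radius are illustrative, not values from the tutorial):

    #include <pcl/io/pcd_io.h>
    #include <pcl/features/normal_3d.h>
    #include <pcl/features/fpfh.h>
    #include <pcl/search/kdtree.h>

    int main ()
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
      pcl::io::loadPCDFile<pcl::PointXYZ> ("cloud_000.pcd", *cloud);

      pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);

      // FPFH operates on surface normals, so estimate them once up front.
      pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
      pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
      ne.setInputCloud (cloud);
      ne.setSearchMethod (tree);
      ne.setRadiusSearch (0.03);
      ne.compute (*normals);

      // Recompute the features at each search radius: the independent variable.
      for (float radius : {0.05f, 0.1f, 0.2f})
      {
        pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
        fpfh.setInputCloud (cloud);
        fpfh.setInputNormals (normals);
        fpfh.setSearchMethod (tree);
        fpfh.setRadiusSearch (radius);

        pcl::PointCloud<pcl::FPFHSignature33> features;
        fpfh.compute (features);
      }
      return 0;
    }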

Commandline Tool For Benchmarking Features
Sunday, August 07, 2011

Sorry for the late post. The progress since my last post includes a command-line tool for benchmarking feature descriptor algorithms, which I have added under /trunk/test/.

On a related note, I have written a simple tool for extracting features from a given point cloud, which is under /tools/extract_feature.cpp. Currently it supports the PFH, FPFH and VFH algorithms, and I will be adding more. One issue I am facing is how to dynamically select the point type of the input cloud depending on the input PCD file; currently the tool converts all input clouds into PointXYZ.
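
One possible direction for that issue (a sketch of an approach, not what the tool currently does) is to read only the PCD header into the type-erased blob representation and dispatch on the fields it declares. In current PCL the blob type is pcl::PCLPointCloud2; in the PCL 1.x era of this post it was sensor_msgs::PointCloud2:

    #include <pcl/io/pcd_io.h>
    #include <pcl/PCLPointCloud2.h>

    int main ()
    {
      pcl::PCLPointCloud2 blob;
      pcl::PCDReader reader;
      reader.readHeader ("input.pcd", blob);  // reads field metadata only, no point data

      bool has_rgb = false, has_normals = false;
      for (const auto &field : blob.fields)
      {
        if (field.name == "rgb" || field.name == "rgba") has_rgb = true;
        if (field.name == "normal_x") has_normals = true;
      }

      // Dispatch to the appropriately templated extraction routine.
      if (has_rgb)           { /* instantiate with pcl::PointXYZRGB */ }
      else if (has_normals)  { /* instantiate with pcl::PointNormal */ }
      else                   { /* fall back to pcl::PointXYZ */ }
      return 0;
    }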

Extracting Keypoints And Performing Registration
Wednesday, July 06, 2011

I have completed the FeatureEvaluationFramework class, except for (i) integrating keypoint extraction, and (ii) registration of source and target clouds. Adding these functionalities will require some reading on my part, which I plan to do next.

Also, I am experimenting with various methods of visualizing the output of the FeatureCorrespondenceTests. I intend to first run a series of tests on feature algorithms to gather a set of benchmarking results, and then devise a way to visualize them.

I am stuck on a minor problem involving taking the ground truths as input, specifically converting an Eigen::Vector3f and an Eigen::Quaternionf to an Eigen::Matrix4f representation. Hopefully I’ll find something useful in the Eigen documentation.
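
For reference, the conversion itself is only a few lines in Eigen (a sketch; the function and variable names are mine):

    #include <Eigen/Geometry>

    // Build a 4x4 homogeneous transform from a translation and a rotation.
    Eigen::Matrix4f
    toMatrix4f (const Eigen::Vector3f &translation, const Eigen::Quaternionf &rotation)
    {
      Eigen::Matrix4f transform = Eigen::Matrix4f::Identity ();
      transform.topLeftCorner<3, 3> ()  = rotation.normalized ().toRotationMatrix ();
      transform.topRightCorner<3, 1> () = translation;
      return transform;
    }

Equivalently, Eigen::Affine3f a = Eigen::Translation3f (t) * q; yields the same matrix via a.matrix ().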

Finalising The Feature Test Class
Friday, July 01, 2011

This week I have been focusing on finalising the Feature Test class, to be used for benchmarking feature descriptor algorithms. The class supports the following pipeline (a rough usage sketch follows the list):

  • loadData (source and target clouds, ground truth)
  • setThreshold (either a single threshold value, or a threshold range, specified by lower bound, upper bound, and delta)
  • setParameters (specific to the Feature Descriptor algorithm, given as a “key1=value1, key2=value2, ...” string)
  • performDownsampling (filter input clouds through VoxelGrid filter, with specified leaf size)
  • extractKeypoints (extract keypoints from the downsampled clouds)
  • computeFeatures (compute features of the keypoints if extracted, or else on the preprocessed clouds)
  • computeCorrespondences (match the source and target features) (or) computeTransformation (register the source and target clouds)
  • computeResults (evaluate total successes and/or runtime statistics)
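
Here is the promised sketch of driving this pipeline. The method names come from the list above, but the namespace, template parameter, and signatures are assumptions for illustration, not the committed API:

    // Hypothetical usage sketch: method names from the pipeline above;
    // namespace, template parameter, and argument types are assumed.
    pcl::FeatureEvaluationFramework<pcl::FPFHSignature33> framework;

    framework.loadData ("source.pcd", "target.pcd", "ground_truth.txt");
    framework.setThreshold (0.01f, 0.1f, 0.01f);    // lower bound, upper bound, delta
    framework.setParameters ("searchradius=0.05");  // algorithm-specific key=value string
    framework.performDownsampling (0.01f);          // VoxelGrid leaf size
    framework.extractKeypoints ();
    framework.computeFeatures ();
    framework.computeCorrespondences ();            // or: framework.computeTransformation ();
    framework.computeResults ();                    // successes and/or runtime statistics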

Important tasks include:

  • The FeatureEvaluationFramework class should support running multiple tests over a single independent variable, e.g. input clouds, threshold, parameters, or leaf size.
  • The output of each set of runs should be published in CSV format, which can then be plotted as a graph or table (see the sketch after this list).
  • An executable “evaluate_feature”, located in “trunk/test/”, should allow running multiple tests by choosing an independent variable and the other input values on the command line.
  • It should also support quickly running a set of predefined standard tests on feature algorithms, to compare the results of newly implemented algorithms with previous ones.
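
On the CSV point, the intended output is a plain comma-separated file with one row per test case; a minimal sketch (the column names are my guess, mirroring the results table further down this page):

    #include <fstream>

    int main ()
    {
      std::ofstream csv ("results.csv");
      // One header row, then one row per test case.
      csv << "leaf_size,input_size,correspondences,source_time,target_time,total_time\n";
      csv << "0.05,1232,1219,0.01,0.02,0.04\n";  // values from the table below, for illustration
      return 0;
    }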

Effect of Downsampling on Feature Computations
Friday, June 24, 2011

This week I have been running tests on the benchmark dataset using the FeatureEvaluationFramework class.

As suggested by Michael, I have added functionality to the Framework class to preprocess the input clouds, i.e. downsample them with a VoxelGrid filter, to reduce the running time of feature computations.
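
The downsampling itself is the standard PCL VoxelGrid filter; a minimal sketch, where the leaf size is the parameter being varied:

    #include <pcl/filters/voxel_grid.h>
    #include <pcl/point_types.h>

    // Downsample a cloud by replacing the points in each occupied voxel
    // with their centroid.
    pcl::PointCloud<pcl::PointXYZ>::Ptr
    downsample (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &input, float leaf_size)
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr output (new pcl::PointCloud<pcl::PointXYZ>);
      pcl::VoxelGrid<pcl::PointXYZ> grid;
      grid.setInputCloud (input);
      grid.setLeafSize (leaf_size, leaf_size, leaf_size);
      grid.filter (*output);
      return output;
    }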

For this, I have modified the Framework class, as well as the FeatureCorrespondenceTest class. The major changes to the code are:

  • Added functions to control preprocessing of input clouds, using VoxelGrid filter to downsample the clouds.
  • Restructured the functions for running the tests, added a function to run tests over a range of leaf sizes (of VoxelGrid filter).
  • Added a minimal TestResult class to store the results of each test case as a separate object. Functions will be added to this class for tasks like printing to CSV, publishing to reST, plotting graphs, etc.

I used this class to run the FPFHEstimation algorithm on a sample dataset (cloud_000.pcd) for various values of the leaf size. Here are the results:

Feature name: FPFHEstimation
Parameters: threshold=0.01, searchradius=0.003
Dataset: cloud_000.pcd
Input size: 307200 points

Machine config: Intel Core 2 Duo P8700 @ 2.53 GHz, 4 GB RAM, Ubuntu 10.10

Test cases:

Leaf size   Preprocessed input size   Successful correspondences   Source feature time (s)   Target feature time (s)   Total feature time (s)
0.5         28                        1                            0                         0                         0
0.1         369                       56                           0                         0                         0.01
0.05        1232                      1219                         0.01                      0.02                      0.04
0.01        22467                     22465                        3.18                      3.21                      6.78
0.007       40669                     40667                        10.69                     10.74                     22.47
0.005       69912                     69910                        31.86                     31.82                     66.42
0.001       234228                    234226                       671.67                    674.85                    1390.69
0.0007      235729                    235727                       729.75                    722.02                    1497.79

Note: In the case of FPFHEstimation, the total time taken for feature computation includes the time for calculating normals of the input points, which is why it exceeds the sum of the source and target feature times.

Anyone volunteering to provide benchmark results using this class (along with your machine config) would be highly appreciated.