PCL Developers blog

Alexandru-Eugen Ichim

This is my personal page

project: Geometric object recognition
mentor: Radu B. Rusu (Aitor Aldoma [Zoltan-Csaba Marton, Vincent Rabaud, Nico Blodow, Michael Dixon])

About me

I’m a student at Jacobs University in Bremen, Germany.

My hobbies include:

  • Computer Graphics
  • Photography
  • Traveling

I invite you to visit my personal website for more information about me and my work.


Here is a brief outline of my GSoC project milestones:

  • Fill out this page
  • Implement, test, and document a series of 3D descriptors useful for geometric object recognition
  • Design a framework for registration of point models using FPFH-like signatures
  • Given a VFH feature space, extend it to cluster similar models and detect correspondences between points
  • Design a feature descriptor that incorporates color and geometry for object recognition
  • Document, test, write samples for all the above

Click here for a detailed roadmap

Recent status updates

PFHRGB Feature results!
Wednesday, August 24, 2011

Some cool results using the PFHRGB feature. It is based on the PFH feature, which is described in the following publication:

  • R.B. Rusu, N. Blodow, Z.C. Marton, M. Beetz - “Aligning Point Cloud Views using Persistent Feature Histograms”, in Proceedings of the 21st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, September 22-26, 2008.

The implementation of this feature in PCL currently uses a histogram of 125 bins. The PFHRGB feature we introduced uses an additional 125 bins for storing color information around the point of interest. Using Pararth’s feature evaluation framework, a simple experiment showed that PFHRGB signatures are more distinctive than plain PFH signatures.

The experiment was carried out at three different feature search radii. A single Kinect cloud was used: the original input as source and a transformed copy of it as target. After subsampling, feature signatures were computed for each point, and correspondences were established with a simple nearest-neighbor heuristic in feature space. The results are the following:

  • PFH vs PFHRGB at search_radius 0.02: 4% vs 24% repeatability
  • PFH vs PFHRGB at search_radius 0.05: 37% vs 85% repeatability
  • PFH vs PFHRGB at search_radius 0.06: 46% vs 92% repeatability
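The matching step can be sketched as follows. This is a minimal, standard-library-only illustration (not the actual feature evaluation framework), assuming each signature is stored as a flat vector of bin values (125 for PFH, 250 for PFHRGB); the function name `repeatability` is made up for this example:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// L2 distance between two feature signatures (flat vectors of bin values).
static double l2(const std::vector<double>& a, const std::vector<double>& b) {
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) {
    const double d = a[i] - b[i];
    s += d * d;
  }
  return std::sqrt(s);
}

// For every source signature, find its nearest neighbor among the target
// signatures and count a hit when it maps back to the same point index.
// This identity ground truth is valid here because the target cloud is a
// transformed copy of the source cloud.
double repeatability(const std::vector<std::vector<double>>& src,
                     const std::vector<std::vector<double>>& tgt) {
  std::size_t hits = 0;
  for (std::size_t i = 0; i < src.size(); ++i) {
    std::size_t best = 0;
    double best_d = l2(src[i], tgt[0]);
    for (std::size_t j = 1; j < tgt.size(); ++j) {
      const double d = l2(src[i], tgt[j]);
      if (d < best_d) {
        best_d = d;
        best = j;
      }
    }
    if (best == i)
      ++hits;
  }
  return static_cast<double>(hits) / static_cast<double>(src.size());
}
```

The richer 250-bin PFHRGB signatures simply make the nearest neighbor in feature space more likely to be the true correspondence, which is exactly what the repeatability numbers above measure.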

To apply the newly implemented feature to a real-world situation, I recorded 6 frames of my desk and incrementally aligned them using PFHRGB features and RANSAC. The whole procedure took about 5 minutes to register the 5 pairs of clouds.

../../_images/pfhrgb_registration_clouds.png ../../_images/pfhrgb_registration_color.png
Work updates - a lot of different topics
Thursday, August 18, 2011

I have started working on a lot of different things lately. A short description of each branch is given here; more details will follow once I reach cool results.

  1. In a previous post, I mentioned the surfel smoothing algorithm I implemented. It has since undergone a lot of modifications to bring it closer to the original paper by Amenta et al., after which it had to be benchmarked against the Moving Least Squares reconstruction algorithm. The immediate conclusion is that MLS is faster and produces better reconstructions; I am still working on understanding why Amenta's smoothing method, which should theoretically be much more precise, falls behind. Also in this area, we are looking into adapting a smoothing algorithm with real-time performance for the Kinect.

  2. Next, I looked into combining a few of the things I wrote this summer into a novel application for PCL, so I decided to try hand pose/gesture recognition. I have successfully used the pyramid feature matching I talked about previously to get rough initial classification results, and I am now trying Support Vector Machines in combination with pyramid matching and different kinds of features to train classifiers that recognize hand poses recorded with the Kinect. For those interested, I have recorded a rather extensive dataset of 8 different hand postures with 10 images per class; it is uploaded in the dataset repository of PCL.

  3. SmoothedSurfacesKeypoint is up and running; I just need to add unit tests. This algorithm extracts keypoints from a set of smoothed versions of the input cloud by looking at the “height” differences between the points smoothed at different scales.

  4. As for bulky programming tasks, I proposed the pcl_tools idea on the mailing list and contributed a few tools such as add_gaussian_noise, compute_cloud_error, mls_smoothing, and passthrough_filter. Also, two new apps for the Kinect have come to life: openni_mls_smoothing and openni_feature_persistence, which can be found in the apps/ folder.

  5. Not all of it is committed to trunk yet, but I have been working on creating new features by integrating color information into 3D feature signatures. Today I started using Pararth’s feature benchmarking framework to compare the performance of each feature signature.

  6. Last, but not least, I contacted the Willow Garage people working on Android and am trying to help in that direction too.
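The extremum test behind SmoothedSurfacesKeypoint (point 3 above) can be sketched in a few lines. This is a toy, standard-library-only version that assumes the per-point height differences between two smoothing scales and the spatial neighbor indices have already been computed; it is not the PCL class itself:

```cpp
#include <cstddef>
#include <vector>

// Mark point i as a keypoint when its scale-to-scale "height" difference
// is a strict local maximum over its spatial neighborhood.
std::vector<std::size_t> extract_keypoints(
    const std::vector<double>& height_diff,
    const std::vector<std::vector<std::size_t>>& neighbors) {
  std::vector<std::size_t> keypoints;
  for (std::size_t i = 0; i < height_diff.size(); ++i) {
    bool is_max = true;
    for (std::size_t n : neighbors[i]) {
      if (height_diff[n] >= height_diff[i]) {
        is_max = false;
        break;
      }
    }
    if (is_max)
      keypoints.push_back(i);
  }
  return keypoints;
}
```

Running this once per pair of adjacent scales yields keypoints that are stable at that particular scale of smoothing.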

Over and out!

Interest Region Extraction and other things
Saturday, July 23, 2011

I have not updated my blog for a substantial period because I had not done anything particularly cool until the last few days. I have spent most of my time polishing the code I have written so far this summer: all the ‘boring’ bits (documentation, unit testing, small bug fixes and optimizations here and there) have been eating up my time.

As for the more interesting subjects: I have created a new architecture, agreed upon with the other PCL developers, for MultiscaleFeaturePersistence. It can easily be used with any type of feature extractor/descriptor, and I am attaching an example of the results using FPFH, as done in the paper describing the method:

  • Radu Bogdan Rusu, Zoltan Csaba Marton, Nico Blodow, and Michael Beetz - “Persistent Point Feature Histograms for 3D Point Clouds”, Proceedings of the 10th International Conference on Intelligent Autonomous Systems (IAS-10), 2008, Baden-Baden, Germany.

In a few words, this algorithm extracts only the features whose signatures are distinguishable from the mass of features in the scene, such as the corners in the office scene shown in the image. (Please note that the color image is not calibrated to the depth image on my Kinect under OS X; I did not find a way to calibrate it properly.)
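The selection criterion can be sketched as follows. This is a simplified single-scale version (the actual method runs it over several scales and keeps the features that remain distinctive across them), and `persistent_features` is just an illustrative name:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Keep only the signatures whose distance to the mean signature lies more
// than alpha standard deviations from the average distance, i.e. the
// "persistent", distinctive features.
std::vector<std::size_t> persistent_features(
    const std::vector<std::vector<double>>& feats, double alpha) {
  const std::size_t n = feats.size();
  const std::size_t dim = feats[0].size();

  // Mean signature over the whole cloud.
  std::vector<double> mean(dim, 0.0);
  for (const auto& f : feats)
    for (std::size_t d = 0; d < dim; ++d)
      mean[d] += f[d] / static_cast<double>(n);

  // Distance of each signature to the mean, plus the distance statistics.
  std::vector<double> dist(n);
  double mu = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    double s = 0.0;
    for (std::size_t d = 0; d < dim; ++d) {
      const double diff = feats[i][d] - mean[d];
      s += diff * diff;
    }
    dist[i] = std::sqrt(s);
    mu += dist[i] / static_cast<double>(n);
  }
  double var = 0.0;
  for (const double di : dist)
    var += (di - mu) * (di - mu) / static_cast<double>(n);
  const double sigma = std::sqrt(var);

  // Outliers in feature space are the distinctive features.
  std::vector<std::size_t> kept;
  for (std::size_t i = 0; i < n; ++i)
    if (std::fabs(dist[i] - mu) > alpha * sigma)
      kept.push_back(i);
  return kept;
}
```

On an office scene, the bulk of the signatures describe flat walls and desks and cluster tightly around the mean, so only corners and edges survive the test.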

../../_images/office_multiscale_feature_persistence_fpfh.png ../../_images/table_scene_multiscale_feature_persistence.png

Another thing worth mentioning is StatisticalMultiscaleInterestRegionExtraction, inspired by the work Ranjith Unnikrishnan did for his PhD thesis. I have polished and tested the algorithm, and it is now in working state. In a previous blog post I mentioned that I need geodesic distances between all point pairs inside the input cloud, as well as geodesic radius searches, in order to do the statistical computations.

The method I went for initially turns out to be considered one of the efficient ones: build a graph with edges between each point and its K nearest neighbors (I chose K = 16, a decent number; the results made sense), then create an NxN distance matrix as the output of Johnson’s shortest-path algorithm for sparse graphs. For radius searches, I simply go through a whole row of the matrix and extract only the distances below my threshold (an O(N) operation that could be improved to O(log N) with binary trees, at the expense of some additional loops during initialization). The result, as applied to the Stanford dragon:
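The distance-matrix construction and the O(N) radius search can be sketched like this. For simplicity the sketch runs Dijkstra once per source point instead of Johnson's algorithm; the two produce the same matrix for the non-negative edge weights of a k-nearest-neighbor graph:

```cpp
#include <cstddef>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Adjacency list of the k-NN graph: g[u] holds (neighbor, edge length).
using Graph = std::vector<std::vector<std::pair<std::size_t, double>>>;

// Single-source shortest paths; one row of the NxN geodesic distance matrix.
std::vector<double> dijkstra(const Graph& g, std::size_t src) {
  std::vector<double> dist(g.size(), std::numeric_limits<double>::infinity());
  using QItem = std::pair<double, std::size_t>;
  std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem>> q;
  dist[src] = 0.0;
  q.push({0.0, src});
  while (!q.empty()) {
    const auto [d, u] = q.top();
    q.pop();
    if (d > dist[u])
      continue;  // stale queue entry
    for (const auto& [v, w] : g[u]) {
      if (dist[u] + w < dist[v]) {
        dist[v] = dist[u] + w;
        q.push({dist[v], v});
      }
    }
  }
  return dist;
}

// Geodesic radius search: scan one row of the distance matrix, O(N).
std::vector<std::size_t> radius_search(const std::vector<double>& row,
                                       double radius) {
  std::vector<std::size_t> result;
  for (std::size_t i = 0; i < row.size(); ++i)
    if (row[i] <= radius)
      result.push_back(i);
  return result;
}
```

Running `dijkstra` for every source point fills the NxN matrix, after which every geodesic radius query is a single row scan.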

Surfel Surface Smoothing and Pyramid Feature Matching
Tuesday, July 05, 2011

A few days have passed and it’s time for a new blog post :-).

To begin with, I added a useful new algorithm to the PCL library: Surfel smoothing based on the work of:

  • Xinju Li and Igor Guskov - “Multiscale features for approximate alignment of point-based surfaces”

It is an interesting iterative modification of Gaussian mesh smoothing. Right now only the smoothing part works; the salient point extraction seems to need more testing, as I am not totally convinced by the results. Nevertheless, you can see a picture of the effect below:
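The core iteration can be sketched as a Gaussian-weighted neighborhood average: each pass pulls every point toward the weighted mean of the points around it, and repeating the pass gives the iterative smoothing. This is a brute-force toy version, not the PCL implementation (which restricts the sum to a local neighborhood instead of the whole cloud):

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Point = std::array<double, 3>;

// One pass of Gaussian-weighted smoothing: every point moves toward the
// weighted average of all points, with weights falling off with squared
// distance. Iterate the pass for stronger smoothing.
std::vector<Point> gaussian_smooth_once(const std::vector<Point>& pts,
                                        double sigma) {
  std::vector<Point> out(pts.size());
  for (std::size_t i = 0; i < pts.size(); ++i) {
    Point acc{0.0, 0.0, 0.0};
    double wsum = 0.0;
    for (const Point& q : pts) {
      double d2 = 0.0;
      for (int k = 0; k < 3; ++k)
        d2 += (pts[i][k] - q[k]) * (pts[i][k] - q[k]);
      const double w = std::exp(-d2 / (2.0 * sigma * sigma));
      for (int k = 0; k < 3; ++k)
        acc[k] += w * q[k];
      wsum += w;
    }
    for (int k = 0; k < 3; ++k)
      out[i][k] = acc[k] / wsum;
  }
  return out;
}
```

The surfel variant additionally uses the point normals when averaging, which is where the salient point extraction mentioned above comes in.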


Another interesting result I have is the use of:

  • Kristen Grauman and Trevor Darrell - “The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features”

I have implemented a generic class for comparing surfaces represented as unordered sets of features; it returns a value in [0, 1] representing the similarity between the two surfaces. To test it, I tried the very naive, raw representation of PPF features (i.e., pair features between all point pairs in the cloud), and the results are rather interesting:

  • my face with two different expressions: 0.867585
  • my face compared to a watering can: 0.586149
  • watering can compared to a chair: 0.399959
  • obviously, any object compared to itself: 1.0
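A toy version of the kernel, for 1-D feature values only (the PCL class handles full multi-dimensional signatures), shows where the [0, 1] similarity score comes from: histogram intersections at doubling bin widths, with finer matches weighted more, normalized by the self-similarities. The names and the level count are illustrative, and the feature sets are assumed non-empty:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <map>
#include <vector>

// Histogram intersection of two 1-D feature sets at bin width w.
static std::size_t intersection(const std::vector<double>& a,
                                const std::vector<double>& b, double w) {
  std::map<long, std::size_t> ha, hb;
  for (double x : a) ++ha[static_cast<long>(std::floor(x / w))];
  for (double x : b) ++hb[static_cast<long>(std::floor(x / w))];
  std::size_t s = 0;
  for (const auto& [bin, count] : ha) {
    const auto it = hb.find(bin);
    if (it != hb.end())
      s += std::min(count, it->second);
  }
  return s;
}

// Unnormalized pyramid match kernel: new matches found at each coarser
// level are weighted by 1 / 2^level, so fine-scale matches count more.
static double pmk(const std::vector<double>& a, const std::vector<double>& b,
                  int levels) {
  double k = 0.0;
  std::size_t previous = 0;
  for (int l = 0; l < levels; ++l) {
    const std::size_t current = intersection(a, b, std::pow(2.0, l));
    k += static_cast<double>(current - previous) / std::pow(2.0, l);
    previous = current;  // intersections only grow as bins coarsen
  }
  return k;
}

// Normalized similarity in [0, 1]; identical sets score exactly 1.
double pyramid_similarity(const std::vector<double>& a,
                          const std::vector<double>& b, int levels = 8) {
  return pmk(a, b, levels) / std::sqrt(pmk(a, a, levels) * pmk(b, b, levels));
}
```

The normalization by the two self-similarities is what guarantees that any object compared to itself scores 1.0, as in the list above.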
Updates about my progress
Wednesday, June 29, 2011

A lot has happened since my last update. I have read a lot about applying the multiscale concept to 3D feature representation, as I found this area is not exploited enough, even though it is very successful when applied to 2D images.

As such, I implemented the multiscale feature persistence algorithm proposed by Radu et al. in “Persistent Point Feature Histograms for 3D Point Clouds”. I also tried the rather slow method introduced by Ranjith Unnikrishnan in his PhD thesis. I am still at the stage of tuning the algorithms, but they seem to work as expected so far. In the process, I ran into software engineering problems again, having to decide on the architecture of new abstract classes in the PCL framework; for this I asked the other developers for help and will settle on the best solution after some thorough discussions.

Another interesting problem I faced was computing geodesic distances inside point clouds. There are quite a few methods for this, most of which involve building a graph representation of the point cloud using different heuristics. I am not yet convinced which version is ideal, so I will be looking into it over the next few days.

Furthermore, I contacted Slobodan Ilic, the author of an algorithm I implemented a few weeks ago, to ask for hints about his own experiments with noisy data. After following some of his advice, I managed to tune the parameters so that it works satisfactorily on Kinect datasets such as my dad’s garage :-), where it detects a chair and a watering can:


I forgot to mention that I received my very own Kinect and spent some time playing with it (under Mac OS X, where it works after a looot of tweaks and hacks!!!). I will be collecting all the datasets I use for my experiments around the house and will upload them in a structured way to the PCL dataset repository: svn+ssh://svn@svn.pointclouds.org/data