PCL Developers blog

Google Summer Of Code

PCL has joined the Google Summer of Code program for the second time. This edition brings us 11 students from all over the world, who will be working on a variety of topics regarding 2D/3D sensor data processing. Google is generously sponsoring PCL development during the summer of 2012.

../_images/pcl-gsoc12-logo.png

For a complete list of all the present and past PCL code sprints please visit http://www.pointclouds.org/blog.

Latest 15 blog updates

Wrap-up posting for 3D edge detection
Tuesday, August 28, 2012
../_images/gsoc121.png
../_images/screenshot-1341964387.png

My primary goal for GSoC '12 was to design and implement a 3D edge detection algorithm for organized point clouds. Various edge types are detected from geometry (boundary, occluding, occluded, and high curvature edges) or from photometric texture (RGB edges). These edges are applicable to registration, tracking, etc. The following code shows how to use the organized edge detection:

pcl::OrganizedEdgeFromRGBNormals<pcl::PointXYZRGBA, pcl::Normal, pcl::Label> oed;
oed.setInputNormals (normal);
oed.setInputCloud (cloud);
oed.setDepthDisconThreshold (0.02); // depth discontinuity threshold: 2cm
oed.setMaxSearchNeighbors (50);     // max neighbors searched when labeling occluded edges
pcl::PointCloud<pcl::Label> labels;
std::vector<pcl::PointIndices> label_indices;
oed.compute (labels, label_indices);

pcl::PointCloud<pcl::PointXYZRGBA>::Ptr occluding_edges (new pcl::PointCloud<pcl::PointXYZRGBA>),
        occluded_edges (new pcl::PointCloud<pcl::PointXYZRGBA>),
        boundary_edges (new pcl::PointCloud<pcl::PointXYZRGBA>),
        high_curvature_edges (new pcl::PointCloud<pcl::PointXYZRGBA>),
        rgb_edges (new pcl::PointCloud<pcl::PointXYZRGBA>);

pcl::copyPointCloud (*cloud, label_indices[0].indices, *boundary_edges);
pcl::copyPointCloud (*cloud, label_indices[1].indices, *occluding_edges);
pcl::copyPointCloud (*cloud, label_indices[2].indices, *occluded_edges);
pcl::copyPointCloud (*cloud, label_indices[3].indices, *high_curvature_edges);
pcl::copyPointCloud (*cloud, label_indices[4].indices, *rgb_edges);

For more information, please refer to the organized edge detection code in the PCL trunk.

It was a great pleasure to be one of the GSoC participants. I hope that my small contribution will be useful to PCL users. Thanks to Google and PCL for this great opportunity and their kind support. Lastly, thanks to Alex Trevor for mentoring me.

Benchmarking PNG Image dumping for PCL
Thursday, August 23, 2012
../_images/gsoc1211.png

Here are the results we got from the PNG dumping benchmark: a 640x480 (16-bit) depth map + a 640x480 (24-bit) color image.

08-23 20:57:43.830: I/PCL Benchmark:(10552): Number of Points: 307200, Runtime: 0.203085 (s)
08-23 20:57:54.690: I/PCL Benchmark:(10552): Number of Points: 307200, Runtime: 0.215253 (s)

If we dump the result to /mnt/sdcard/, we get:

08-23 21:02:23.890: I/PCL Benchmark:(14839): Number of Points: 307200, Runtime: 0.332639 (s)
08-23 21:02:40.410: I/PCL Benchmark:(14839): Number of Points: 307200, Runtime: 0.328380 (s)

There is significant overhead (about 0.1 seconds!) from the SD card I/O.

We shall verify these numbers with a faster SD card. /mnt/sdcard seems to be mounted on an internal SD card of the Tegra 3 dev board, which I have no physical access to; I have already tried opening the back of the board to check.

Also, I have tried different compression levels, and it seems that level 3 gives the best trade-off between compression ratio and speed. More plots will come next to justify my observations.

Code Testing
Tuesday, August 21, 2012
../_images/gsoc1211.png

I have gone through the whole setup again and replicated it on Ubuntu 12.04 and Mac OS X 10.8 environments. The README file is now updated to reflect what is needed to set up the environment.

In the end, it was only 4 scripts, and maybe we can automate these completely. We have also added the scripts for dumping the images stored on the SD card. Check out all the .sh files in the directory; they may save hours of your time.

Cloud Manipulations!
Monday, August 20, 2012
../_images/gsoc126.png

Just a quick update to let you know that you can now manipulate clouds using the mouse. This allows you to do some pretty interesting things when combined with other tools, like Euclidean Clustering.

For instance, in the image below, there are three frames. The first (left-most) shows the original point cloud. I use the Euclidean Clustering tool on it, which allows me to pull out the hand as a separate cloud and select it (now shown in red in the middle). I can then use the mouse manipulator to move the hand as I please and, for instance, have it pick up the red peg (right-most frame). A minimal sketch of the clustering step follows the image.

../_images/ManipulateHand.png
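For reference, here is a minimal sketch of what the Euclidean Clustering step looks like with PCL's pcl::EuclideanClusterExtraction; the tolerance and size parameters are illustrative values, not the tool's actual defaults:

#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// a minimal sketch, assuming 'cloud' holds the scene shown above;
// parameter values are illustrative only
pcl::search::KdTree<pcl::PointXYZRGBA>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZRGBA>);
tree->setInputCloud (cloud);

std::vector<pcl::PointIndices> cluster_indices;
pcl::EuclideanClusterExtraction<pcl::PointXYZRGBA> ec;
ec.setClusterTolerance (0.02);  // points farther than 2cm start a new cluster
ec.setMinClusterSize (100);     // ignore tiny clusters
ec.setSearchMethod (tree);
ec.setInputCloud (cloud);
ec.extract (cluster_indices);   // one PointIndices per cluster (e.g. the hand)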

Everything is coming together; the core of the app is now in place. Basically, I just need to implement the rest of the PCL tutorials as plugins, and then you will be able to do almost everything PCL can do... with the exception of working with movies.

Drag and Drop (DnD) Support and More Filters
Sunday, August 19, 2012
../_images/gsoc1212.png

DnD support inside the scene tree has been added: users can now drag and drop point cloud items to a different render window, which helps them inspect the point clouds. It would be nice to add more DnD support, for example, dragging a point cloud directly from the file system into the modeler app; there can be many such nice features, and I will add them after GSoC. New filters can also be added very easily to the current framework. However, there seems to be an unknown exception thrown from the file save module, which crashes the app. I will analyse what the problem is and then finish the file/project load/save features.

A new point type to handle monochrome images is now available on the trunk!
Saturday, August 18, 2012
../_images/gsoc124.png

I’ve developed a new point type to handle monochrome images in the most efficient way. It contains only one field, named intensity, of type uint8_t. I’ve also updated pcl::ImageViewer so that it can display point clouds of the new point type, named pcl::Intensity. Finally, I’ve extended the PNG2PCD converter: the user can now choose whether the written cloud should be based on pcl::RGB or on pcl::Intensity. For more information, please see the documentation of each class or file.
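As a quick illustration, here is a minimal sketch of building a monochrome cloud from a raw grayscale buffer; the buffer name and the width/height variables are hypothetical:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>

// a minimal sketch, assuming 'gray' points to width*height 8-bit grayscale pixels
pcl::PointCloud<pcl::Intensity> mono_cloud;
mono_cloud.width    = width;
mono_cloud.height   = height;
mono_cloud.is_dense = true;
mono_cloud.points.resize (width * height);
for (size_t i = 0; i < mono_cloud.points.size (); ++i)
  mono_cloud.points[i].intensity = gray[i]; // one uint8_t intensity per pixel

pcl::io::savePCDFileBinary ("mono.pcd", mono_cloud);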

Lossless Image Dumping with libpng + Android and Tegra 3
Friday, August 17, 2012
../_images/gsoc1211.png

Today, I’ve added support for libpng + zlib to the Tegra 3 project, so we can dump the raw images from the Kinect (or any OpenNI-supported device) onto the SD card for post-processing or debugging. After hours of fiddling with the parameters and hacking away at the code, we can now capture and compress 4-6 images per second (2-3x 24-bit RGB images + 2-3x 16-bit depth images) on a Tegra 3. I believe these libraries are already NEON-optimized, so we should be getting the best performance from them. Here is the little magic that gives me the best performance so far.

// Write header (16-bit depth values, grayscale)
png_set_IHDR (png_ptr, info_ptr, width, height, 16,
              PNG_COLOR_TYPE_GRAY, PNG_INTERLACE_NONE,
              PNG_COMPRESSION_TYPE_DEFAULT, PNG_FILTER_TYPE_DEFAULT);
// fine-tuned parameters for speed!
png_set_filter (png_ptr, PNG_FILTER_TYPE_BASE, PNG_FILTER_SUB);
png_set_compression_level (png_ptr, 1);    // 1 is Z_BEST_SPEED in zlib.h!
png_set_compression_strategy (png_ptr, 3); // 3 is Z_RLE

Next, if time permits, I will use the PCL library compression code instead. Using libpng, however, has taught me where the critical paths are and how we should handle the data. Right now, I am sure that I wasn't introducing any overhead from data copying or manipulation: I was handling the raw data pointers the whole time.

For the longest time, I had trouble getting any performance out of the Tegra 3, mainly because of floating point operations! Again, avoid these operations at all costs unless you have a more powerful processor!

Here is a screenshot of some of the images that were dumped from my Kinect in real-time!

../_images/png_dump_screenshot.jpg
Tutorials, bug with pcd_viewer, PCLVisualizer and important additions
Thursday, August 16, 2012
../_images/gsoc127.png

Wrote tutorials for the 2D classes. Got another weird bug, found by Radu: the Plotter does not work in pcd_viewer on point picking. Still struggling with it.

Tried to make PCLVisualizer cleaner and more readable. Removed some unnecessary function calls. Haven't committed yet.

Added some important functionalities to the Plotter and a separate vtkCommand event handler.

Placement for ICP Registration
Wednesday, August 15, 2012
../_images/gsoc1212.png

The initial position and orientation can be tuned in a dialog triggered by double-clicking the point cloud item for ICP registration, as shown in the following snapshot. The ICP parameters are also exposed to give users more control. Next, I will add more workers, add project support, and document the functions. A minimal sketch of the exposed ICP knobs is shown below.
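For context, these are roughly the knobs that PCL's pcl::IterativeClosestPoint exposes. This is a minimal sketch with illustrative values, not the dialog's actual defaults:

#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// a minimal sketch, assuming 'source' and 'target' are PointCloud<PointXYZ>::Ptr;
// all parameter values are illustrative
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputCloud (source);              // cloud to be aligned (setInputSource in newer PCL)
icp.setInputTarget (target);             // reference cloud
icp.setMaxCorrespondenceDistance (0.05); // ignore point pairs farther than 5cm
icp.setMaximumIterations (50);           // convergence criterion 1
icp.setTransformationEpsilon (1e-8);     // convergence criterion 2

pcl::PointCloud<pcl::PointXYZ> aligned;
Eigen::Matrix4f initial_guess = Eigen::Matrix4f::Identity (); // from the placement dialog
icp.align (aligned, initial_guess);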

Handling templated clouds, Double Dispatch in cloud_composer
Monday, August 13, 2012
../_images/gsoc126.png

So I finally decided to bite the bullet and add support for the templated classes to the GUI, rather than just using sensor_msgs::PointCloud2. After some discussion on the dev boards and on IRC (thanks to everyone for their input, especially Radu), I decided to put together a system which allows use of the templated classes even though the point type is unknown at compile time. This is inherently difficult, since templates and run-time polymorphism are two opposing concepts.

Before I get into the technical stuff, let me just say that the consequence of everything that follows is that for every CloudItem, we maintain both a PointCloud2 and a templated PointCloud<> object. Because of the way the model was designed to enable undo/redo, these are always synchronized automatically, since they can only be accessed through the CloudItem interface.

Everything is centered on the CloudItem class, which inherits from QStandardItem. For those who haven't been following, the application is built around these types of items, which are stored in a ProjectModel object. The GUI is essentially a bunch of different views for displaying/editing this model. Anyway, clouds are always loaded from file as binary blobs (PointCloud2 objects). The header of the PointCloud2 object is then parsed to determine what the underlying data looks like. By doing this, we can figure out which template PointType we need and, using a nasty switch statement, instantiate the appropriate PointCloud<T> object.

We then take the pointer to the PointCloud<T> and store it in a QVariant in the CloudItem (we also store the PointType in an enum). When we fetch this QVariant, we can cast the pointer back to the appropriate template type using the enum and a little macro. This means one can write a tool class which deduces the template type and calls the appropriate templated worker function. A rough sketch of the idea is shown below.
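Here is a rough sketch of that storage/retrieval trick; the role constants and enum names are hypothetical stand-ins, not the actual cloud_composer identifiers:

#include <QVariant>
#include <QStandardItem>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// hypothetical names for illustration only
enum CloudPointType { POINT_XYZ, POINT_XYZRGB, POINT_XYZRGBA };
const int CLOUD_PTR_ROLE  = Qt::UserRole + 1;
const int POINT_TYPE_ROLE = Qt::UserRole + 2;

// storing: keep the typed pointer as an opaque integer plus a type tag
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
item->setData (QVariant::fromValue (reinterpret_cast<quintptr> (cloud.get ())), CLOUD_PTR_ROLE);
item->setData (static_cast<int> (POINT_XYZ), POINT_TYPE_ROLE);

// fetching: the enum tells us which type to cast back to
if (item->data (POINT_TYPE_ROLE).toInt () == POINT_XYZ)
{
  quintptr raw = item->data (CLOUD_PTR_ROLE).value<quintptr> ();
  pcl::PointCloud<pcl::PointXYZ>* typed =
      reinterpret_cast<pcl::PointCloud<pcl::PointXYZ>*> (raw);
  // ... call the PointXYZ instantiation of the templated worker ...
}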

One interesting thing popped up when I was doing this. The tools use run-time polymorphism (virtual functions) to determine what work function to call. That is, I manipulate base-class Tool pointers, and let the v-table worry about what type of Tool the object actually is. A problem arises with templated types though, since virtual function templates are a no-no.

To get around this, I worked out a double dispatch system: the visitor design pattern. This allows me to determine what code to execute based on the run-time types of two different objects (rather than just one, which a vtable could handle). The core idea is that we first do a run-time lookup in the vtable to determine which tool is being executed, passing it a reference to the CloudItem. We then take the QVariant, deduce the PointType of the cloud it references, and execute the tool using the appropriate template type. The sketch below illustrates the shape of this.
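In skeleton form, the double dispatch looks roughly like this; all class names are hypothetical stand-ins for the cloud_composer ones, and the CloudPointType enum is the one from the sketch above:

// hypothetical skeleton of the two dispatches
class CloudItem; // holds the QVariant + PointType enum described above

class AbstractTool
{
  public:
    virtual ~AbstractTool () {}
    // first dispatch: the vtable picks the concrete tool at run time
    virtual void performAction (CloudItem &item) = 0;
};

class VoxelGridTool : public AbstractTool
{
  public:
    void
    performAction (CloudItem &item)
    {
      // second dispatch: branch on the run-time PointType stored in the item,
      // since virtual function templates are not allowed
      switch (item.pointType ()) // hypothetical accessor
      {
        case POINT_XYZ:     work<pcl::PointXYZ> (item);     break;
        case POINT_XYZRGBA: work<pcl::PointXYZRGBA> (item); break;
        default: break;
      }
    }

  private:
    template <typename PointT> void
    work (CloudItem &item)
    {
      // fetch the typed PointCloud<PointT> from the item and run the filter
    }
};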

I'm sorry if none of that made sense... I'll draw up some diagrams of how the whole application functions in a week or so, which should help a lot. I'm now finishing up the code which allows manipulations of VTK actors with the mouse to be applied back into the model (with undo/redo as well).

tl;dr: Templates now work even though you don't know the PointType at compile time. Mouse manipulation of actors in the PCLVisualizer render window now works, and will shortly be properly propagated back to the model.

OpenNI interface for 3D edge detection
Friday, August 10, 2012
../_images/gsoc121.png

An OpenNI interface for 3D edge detection has been added to the PCL trunk. Once it starts, you can show and hide each edge type by pressing the corresponding number key:

  • 1: boundary edges (blue)
  • 2: occluding edges (green)
  • 3: occluded edges (red)
  • 4: high curvature edges (yellow)
  • 5: rgb edges (cyan)

The high curvature and RGB edges are disabled by default to keep the frame rate high, but you can easily enable these two edge types if you want to test them.

A new tool for PNG to PCD conversions is now available on the trunk!
Friday, August 10, 2012
../_images/gsoc124.png

I’ve developed a simple utility that enables the user to convert a PNG input file into a PCD output file. The converter takes as input the name of the input PNG file and the name of the output PCD file. It then performs the conversion of the PNG file into a PCD file by creating a:

pcl::PointCloud<pcl::RGB>

point cloud. The PNG2PCD converter is now available in the trunk version of PCL under the tools directory.

Supervised Segmentation
Friday, August 10, 2012
../_images/gsoc122.png

Supervised segmentation is a two-step process, consisting of a training phase and a segmentation phase. In the training phase we extract the objects from the scene. We use FPFH features as classifiers and as a prior assignment of the unary potentials of the CRF. We compute the FPFH histogram features for all points of one object. To reduce computation and feature comparisons in the recognition step, we use a k-means clustering algorithm and cluster the features into 10 classes (see the sketch after the image below). The training objects can be seen in the following image.

../_images/training.png
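For reference, computing the FPFH histograms with PCL looks roughly like this minimal sketch; the search radius is illustrative, and the k-means clustering step is application code that is not shown:

#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/fpfh.h>

// a minimal sketch, assuming 'object_cloud' and its 'normals' are already computed;
// the 5cm search radius is illustrative only
pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
fpfh.setInputCloud (object_cloud);
fpfh.setInputNormals (normals);

pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
fpfh.setSearchMethod (tree);
fpfh.setRadiusSearch (0.05); // must be larger than the normal estimation radius

pcl::PointCloud<pcl::FPFHSignature33> descriptors;
fpfh.compute (descriptors);  // one 33-bin histogram per point, fed to k-means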

In the segmentation and recognition step we use the learned features to assign prior probabilities to a new scene. The prior assignment of the most likely label can be seen in the following image. As you can see, many of the points of the objects we want to segment and recognize are not labeled correctly. This is because the distance between the two FPFH features is too large. However, as a first initial estimate, the FPFH features are well suited. Their advantage is that they only capture the geometry and not the color information. With this, the training data set can be much smaller.

../_images/prior.png

As a second step, and to refine the assignment, we use the fully connected CRF. The following image shows the segmentation and labeling after 10 iterations.

../_images/seg.png
Cloud Commands: Now with 99.9% Less Memory Leakage!
Wednesday, August 08, 2012
../_images/gsoc126.png

Today I reworked the Cloud Command classes and their undo/redo functionality so that everything gets properly deleted when the time comes. There are two different scenarios in which a command needs to be deleted:

  • When the undo stack reaches its limit (set to 10 commands atm), we need to delete the command at the bottom of the stack (first in)
  • When a command is wiped out because it was undone and another command was then pushed onto the stack

These two cases require different deletes, though. In the first case we need to delete the original items, since we've replaced them with the output of the command. In the second case, we need to delete the new items the command generated and leave the original ones alone. A hypothetical sketch of this ownership rule follows.
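In skeleton form, the ownership rule could look like this; this is a hypothetical sketch, not the actual cloud_composer class members:

#include <QUndoCommand>

// hypothetical sketch of the deletion rule described above
class CloudCommand : public QUndoCommand
{
  public:
    virtual ~CloudCommand ()
    {
      if (was_undone_)
        deleteCreatedItems ();  // case 2: outputs are orphaned, originals live on
      else
        deleteOriginalItems (); // case 1: originals were replaced by the outputs
    }

  protected:
    bool was_undone_;           // toggled by undo ()/redo ()
    virtual void deleteOriginalItems () = 0; // split/merge commands differ here
    virtual void deleteCreatedItems () = 0;
};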

Additionally, different commands need to delete data which is structured in different ways. For instance, a split command (such as Euclidean Clustering) needs to delete one original item or many created items, while a merge command needs to delete many original items or a single created item... and so forth...

Oh, on a side note, I'm not sure what the app should do when one tries to merge two Cloud items which have different fields. For now, I just say “Nope”... but there may be a way to combine them that makes sense. Unfortunately, this currently doesn't exist for PointCloud2 objects in PCL, afaik. Anyone feel like adding some cases to pcl::concatenatePointCloud(sensor_msgs::PointCloud2&, sensor_msgs::PointCloud2&, sensor_msgs::PointCloud2&) that check for different but compatible field types? For instance, concatenating rgb and rgba clouds could just give a default alpha value for all the rgb cloud points. This would be pretty easy to code if one did it inefficiently by converting to templated types, concatenating, then converting back... (a sketch of that lazy route is below).
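To make the lazy route concrete, here is a minimal sketch of the inefficient approach; it assumes the two blobs have already been converted to templated clouds named rgb_cloud and rgba_cloud:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// a minimal sketch: promote the XYZRGB cloud to XYZRGBA with a default alpha,
// then use the templated concatenation operator
pcl::PointCloud<pcl::PointXYZRGBA> promoted;
promoted.width  = rgb_cloud.width;
promoted.height = rgb_cloud.height;
promoted.points.resize (rgb_cloud.points.size ());
for (size_t i = 0; i < rgb_cloud.points.size (); ++i)
{
  promoted.points[i].x = rgb_cloud.points[i].x;
  promoted.points[i].y = rgb_cloud.points[i].y;
  promoted.points[i].z = rgb_cloud.points[i].z;
  promoted.points[i].r = rgb_cloud.points[i].r;
  promoted.points[i].g = rgb_cloud.points[i].g;
  promoted.points[i].b = rgb_cloud.points[i].b;
  promoted.points[i].a = 255; // default alpha for the rgb points
}

pcl::PointCloud<pcl::PointXYZRGBA> merged = rgba_cloud;
merged += promoted;           // templated concatenation
// ... then convert 'merged' back to a PointCloud2 blob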

Bugs and additional features
Tuesday, August 07, 2012
../_images/gsoc127.png

Fixing bugs takes time. PCLPlotter was behaving weirdly in pcd_viewer: a window appeared just after the creation of an object of the 2D classes (Plotter and Painter2D), even without any call to the plot/spin/display functions. Thus, I had to move vtkRenderWindowInteractor::Initialize(), and therefore vtkRenderWindowInteractor::AddObserver(), into the display-triggering calls (plot/spin/display).

Added other small functionalities, like setTitle* in the Plotter.