PCL Developers blog

Manuel Gesto

email: mgesto@usal.es
project: Object Discovery in Kinfu Data
mentor: Federico Tombari

About me

I am a PhD student who will be working on GSoC.

Recent status updates

Final Example (Fourth post)
Monday, August 18, 2014
../../_images/gsoc147.png
  • Introduction

Once we have the segmentation, and the functions to compute the measures on the segmented object candidates, we need to create an example that analyzes all segments and decides which ones are objects and which are not. In addition, the objects whose measures are most similar can be classified into the same category.

  • Framework

This tutorial gives an example implementation of the algorithm presented in:

Andrej Karpathy, Stephen Miller, and Li Fei-Fei, “Object Discovery in 3D Scenes via Shape Analysis”, International Conference on Robotics and Automation (ICRA), 2013

../../_images/object_discovery.png

As the authors say in the abstract: “We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its ‘objectness’, a quality that distinguishes objects from clutter.”

In this example we build the whole framework: first we obtain the segmented candidates, then we compute several geometric measures to quantify the “objectness” of each of them. After that we save all the “good” segments (the ones the algorithm considers objects) together with the measures associated with each object. This data can later be used by the recurrence step to improve the classification of the objects.

We begin with the graph-based segmentation that was explained before. We over-segment the meshes to obtain different segments for each scene. This segmentation must be run on every scene that we want to use for object discovery.

After we have the segmentation, we need to select the object-like segments. For that we apply several tests to discard the incorrect ones:

-Minimum and maximum number of points. If the segment does not have enough points, or has far too many, we discard it.

-Next we look at the shape, with two tests: if the segment is really flat or thin, we discard it; we also look at its size along the three axes, and if that size is under or over a threshold the segment is rejected as well.

-The next test is based on different geometric measures. These measures were explained in the second post of this blog. We compute all of them for each segment and store the values, which will be used in the next test.

-Because we over-segment with different thresholds, we will probably obtain the same segment more than once. For that reason we run one more test per scene, called non-maximum suppression: we look for pairs of segments whose intersection over union is greater than 80%; when that happens we compare their measure values, keep the segment with the highest value, and discard the other. A minimal sketch of this test is shown below.
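
The following sketch illustrates that suppression step, assuming each candidate segment is stored as a sorted list of point indices together with its objectness score (the Segment struct and the function names are illustrative, not part of the PCL API):

    #include <algorithm>
    #include <cstddef>
    #include <iterator>
    #include <vector>

    // Illustrative segment representation: sorted point indices plus an
    // "objectness" score.
    struct Segment
    {
      std::vector<int> indices;  // sorted indices into the scene cloud
      float score;               // objectness measure of the segment
    };

    // Intersection over union of two sorted index sets.
    float
    intersectionOverUnion (const std::vector<int>& a, const std::vector<int>& b)
    {
      std::vector<int> inter;
      std::set_intersection (a.begin (), a.end (), b.begin (), b.end (),
                             std::back_inserter (inter));
      const std::size_t uni = a.size () + b.size () - inter.size ();
      return uni == 0 ? 0.f : static_cast<float> (inter.size ()) / uni;
    }

    // Keep only the highest-scoring segment among those that overlap by
    // more than the threshold (80% in the text above).
    std::vector<Segment>
    nonMaxSuppression (std::vector<Segment> segments, float iou_threshold = 0.8f)
    {
      std::sort (segments.begin (), segments.end (),
                 [] (const Segment& l, const Segment& r) { return l.score > r.score; });
      std::vector<Segment> kept;
      for (const auto& s : segments)
      {
        bool overlaps = false;
        for (const auto& k : kept)
          if (intersectionOverUnion (s.indices, k.indices) > iou_threshold)
          {
            overlaps = true;
            break;
          }
        if (!overlaps)
          kept.push_back (s);
      }
      return kept;
    }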

After this last test, we can show the results for a couple of scenes.

Scene 1

Initial scene
../../_images/scene_26.png

Object-like segments present in the scene

../../_images/scene_26_segmented.png
Scene 2

Initial scene
../../_images/scene_27.png

Object-like segments present in the scene

../../_images/scene_27_segmented.png

We store all the segments, and the measures for each segment, for a large number of scenes. Finally, in the next step, we add another measure: the recurrence.

More snapshots of some final results from different scenes:

../../_images/0.png ../../_images/11.png ../../_images/21.png ../../_images/31.png

Last Step: Recurrence

We are going to compute the recurrence on the previous results, so we need the output of the object discovery step. The recurrence consists in analyzing all the segments across all the scenes, because segments that are commonly found in other scenes are more likely to be an object rather than a segmentation artifact. The paper explains that, taking computation time into account, the best way to measure the distance to the top k most similar segments is to consider only segments within 25% of the query’s extent along the principal directions, and to measure the Euclidean distance between their normalized shape measures. Other approaches could be used instead, such as ICP, RANSAC, or local and global descriptors. In this case, for every object found in the earlier segmentation process, we obtain this score by finding the 10 most similar objects.
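
As an illustration, here is a minimal sketch of that scoring step, assuming each stored segment keeps its extent along the principal axes and its normalized measures (the SegmentRecord struct and the function names are hypothetical, not the final PCL classes):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Illustrative record for a stored segment: its extent along the three
    // principal directions plus its normalized shape measures.
    struct SegmentRecord
    {
      float extent[3];              // size along the principal axes
      std::vector<float> measures;  // normalized objectness measures
    };

    // Euclidean distance between two measure vectors of equal length.
    float
    measureDistance (const std::vector<float>& a, const std::vector<float>& b)
    {
      float d = 0.f;
      for (std::size_t i = 0; i < a.size (); ++i)
        d += (a[i] - b[i]) * (a[i] - b[i]);
      return std::sqrt (d);
    }

    // Recurrence score of `query`: mean distance to its k most similar
    // segments among those whose extents are within 25% of the query's.
    float
    recurrenceScore (const SegmentRecord& query,
                     const std::vector<SegmentRecord>& all,
                     std::size_t k = 10)
    {
      std::vector<float> dists;
      for (const auto& other : all)
      {
        bool similar_size = true;
        for (int i = 0; i < 3; ++i)
          if (std::fabs (other.extent[i] - query.extent[i]) > 0.25f * query.extent[i])
            similar_size = false;
        if (similar_size)
          dists.push_back (measureDistance (query.measures, other.measures));
      }
      if (dists.empty ())
        return std::numeric_limits<float>::infinity ();  // no comparable segment
      const std::size_t n = std::min (k, dists.size ());
      std::partial_sort (dists.begin (), dists.begin () + n, dists.end ());
      float sum = 0.f;
      for (std::size_t i = 0; i < n; ++i)
        sum += dists[i];
      return sum / n;
    }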

After applying the recurrence, we obtain the following results:

1st Query for an object

../../_images/recurrence1.png

This case works really well (the implementation works!), and it makes the paper easier to understand. After all the work, we obtain a possible object in a scene (a cup), and with the aid of all the scenes we get a fairly confident measure to classify these objects. The larger the number of scenes, the better this score becomes: more scenes, more quality.

2nd Query for an object

../../_images/recurrence2.png

3rd Query for an object

../../_images/recurrence3.png

4th Query for an object

../../_images/recurrence4.png

Finally, I think that I accomplished the main goal of the GSoC program, which was to implement the algorithm in the PCL format.

To use this program and to look into the tutorials, we need to wait for a possible pull request with the source code (the 2 classes that I created) to be accepted into the PCL website and the PCL library.

That’s all folks!!!

P.S. I will update the post with links if the pull request with the code that I created during GSoC gets accepted.

Second implementation of Code “Segmentation and creation of object candidates” (Third post)
Monday, August 11, 2014
../../_images/gsoc147.png
  • Introduction

This is an implementation of the efficient graph-based segmentation, based on:

Pedro F. Felzenszwalb and Daniel P. Huttenlocher, “Efficient Graph-Based Image Segmentation”, International Journal of Computer Vision, Volume 59, Number 2, September 2004
  • Work

Due to license problems, I had to re-implement the segmentation code, and I needed the same code to work on both meshes and point clouds. To use it on images, we need to create a graph based on the image. To improve the results, the paper recommends smoothing the image before the segmentation; I use the Gaussian filter implemented in PCL for this purpose. A sketch of the core merging criterion is shown below. After the code was written, and with access to the original code, I ran some tests to check that the segmentation was good.
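
For reference, this is a minimal sketch of the merging criterion of the Felzenszwalb-Huttenlocher algorithm (the Edge and UnionFind types here are illustrative, not the re-implemented PCL code):

    #include <algorithm>
    #include <numeric>
    #include <vector>

    // Illustrative edge of the segmentation graph: two vertex ids and a
    // weight (e.g. the colour difference between neighbouring pixels).
    struct Edge { int a, b; float w; };

    // Minimal union-find that also stores, per component, its size and its
    // internal difference Int(C) (the largest edge weight merged so far).
    struct UnionFind
    {
      std::vector<int> parent, size;
      std::vector<float> internal;
      explicit UnionFind (int n) : parent (n), size (n, 1), internal (n, 0.f)
      { std::iota (parent.begin (), parent.end (), 0); }
      int find (int x) { while (parent[x] != x) x = parent[x] = parent[parent[x]]; return x; }
    };

    // Felzenszwalb-Huttenlocher criterion: merge two components when the
    // connecting edge weight does not exceed min(Int(Ci) + k / |Ci|).
    std::vector<int>
    segmentGraph (int n_vertices, std::vector<Edge> edges, float k)
    {
      std::sort (edges.begin (), edges.end (),
                 [] (const Edge& l, const Edge& r) { return l.w < r.w; });
      UnionFind uf (n_vertices);
      for (const auto& e : edges)
      {
        const int a = uf.find (e.a);
        const int b = uf.find (e.b);
        if (a == b)
          continue;
        const float ta = uf.internal[a] + k / uf.size[a];
        const float tb = uf.internal[b] + k / uf.size[b];
        if (e.w <= std::min (ta, tb))
        {
          uf.parent[b] = a;        // merge component b into a
          uf.size[a] += uf.size[b];
          uf.internal[a] = e.w;    // edges are sorted, so e.w is the max so far
        }
      }
      std::vector<int> labels (n_vertices);
      for (int v = 0; v < n_vertices; ++v)
        labels[v] = uf.find (v);   // component root as segment label
      return labels;
    }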

The first test was the segmentation without the Gaussian filter, for each implementation; after that, I processed the same picture with the Gaussian filter applied.

  • Results

Without Gauss Filter(Original Code)

../../_images/originalwithouts.png

As we can see in this image, there are a lot of segments, but if we compare it with the next image, the adapted code produces the same segments in both. The colors are different because the colorization is generated randomly.

Without Gauss Filter(Adapted Code)

../../_images/adaptedwithouts.png

Then a test, with a sigma for the Gaussian filter of 0.5, and a threshold for the algorithm of 500 was made in the same image for both cases.

With Gauss Filter (Original Code)

../../_images/originalwiths.png

With Gauss Filter (Adapted Code)

../../_images/adaptedwiths.png

Here we can see that there are some differences. After some research in the code, I found that when we compute the weights for the graph there are some minimal differences, which suggests that the smoothing of the image is different in the two implementations. It is a small difference, a few decimals, but it creates a different result. I didn’t research it further, because it is not part of the GSoC task.

  • Point Cloud segmentation

Once we have the graph segmentation working, the next step is to adapt the algorithm to point clouds. For that, the only thing needed is to create a new graph based on the difference of normals or curvature, as in the sketch below.
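
As an illustration, and reusing the Edge type from the previous sketch, a graph over a point cloud could be built from neighbouring normals like this (the weight choice here is one plausible option, not necessarily the one used in the final code):

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/kdtree/kdtree_flann.h>
    #include <vector>

    // Edge as in the segmentation sketch above.
    struct Edge { int a, b; float w; };

    std::vector<Edge>
    buildNormalGraph (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
                      const pcl::PointCloud<pcl::Normal>::ConstPtr& normals,
                      int k_neighbors = 8)
    {
      pcl::KdTreeFLANN<pcl::PointXYZ> tree;
      tree.setInputCloud (cloud);

      std::vector<Edge> edges;
      std::vector<int> idx (k_neighbors);
      std::vector<float> sqr_dist (k_neighbors);
      for (int i = 0; i < static_cast<int> (cloud->size ()); ++i)
      {
        tree.nearestKSearch (cloud->points[i], k_neighbors, idx, sqr_dist);
        const auto& ni = normals->points[i];
        for (int j : idx)
        {
          if (j <= i)  // skip self edges and duplicates
            continue;
          const auto& nj = normals->points[j];
          // Weight grows as the normals diverge (1 - cosine of the angle).
          const float dot = ni.normal_x * nj.normal_x
                          + ni.normal_y * nj.normal_y
                          + ni.normal_z * nj.normal_z;
          edges.push_back ({i, j, 1.f - dot});
        }
      }
      return edges;
    }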

First implementation of Code “Objectness measures” (Second post)
Wednesday, July 30, 2014
../../_images/gsoc147.png
  • Introduction

I began with the main part of the code, and probably the easiest to implement. In this part I wrote the code that computes these measures for each point cloud. The input is a point cloud (or a set of point clouds) and the output is a vector with the values of the resulting measures. To keep the code simple and reusable in other applications, this part is separated from the segmentation; this is also useful for anyone who wants to use a segmentation other than the one proposed in the paper.

  • “Objectness measures”

We have 5 different measures:

  • Compactness: This measure looks at how compact an object is. It searches for the minimum bounding sphere that contains the whole surface of the mesh of the point cloud.
  • Symmetry: This measure analyzes the reflective symmetry along the three principal axes, based on the eigenvalues of the scatter matrix.
  • Smoothness: This measure evaluates the quality of the points of the segments: if the points are spread uniformly around a segment it will score high; if the points are scattered it will get a low score.
  • Local Convexity: This measure determines the convexity of each polygon edge in the mesh of the point cloud, and scores each segment by the percentage of its edges that are convex.
  • Global Convexity: This measure is generated with a convex hull: the mean distance between the points and the convex hull is recorded to create this measure.

The Symmetry measure (and the per-axis extents used in the other posts) builds on the eigen-decomposition of the segment’s scatter matrix; a minimal sketch of that shared step is shown after this list.
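
This sketch uses PCL’s pcl::computeMeanAndCovarianceMatrix (from pcl/common/centroid.h) and Eigen; the function name is illustrative, and the full Symmetry measure in the paper additionally reflects the points across each principal plane and compares the result:

    #include <pcl/common/centroid.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <Eigen/Eigenvalues>

    // Standard deviation of the segment along each principal axis, from the
    // eigen-decomposition of its scatter (covariance) matrix.
    Eigen::Vector3f
    principalAxisLengths (const pcl::PointCloud<pcl::PointXYZ>& segment)
    {
      Eigen::Matrix3f covariance;
      Eigen::Vector4f centroid;
      // PCL helper: mean and normalized covariance in a single pass.
      pcl::computeMeanAndCovarianceMatrix (segment, covariance, centroid);

      Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> solver (covariance);
      // Eigenvalues are the variances along the principal axes, ascending.
      return solver.eigenvalues ().cwiseSqrt ();
    }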
  • “Future Work”

As future work, I need to implement the segmentation that will provide the data for measuring the objects, and test this code.

Object discovery in KINFU DATA (First post)
Sunday, July 20, 2014
../../_images/gsoc147.png
  • Introduction

The principal goal of this project is to implement the algorithm and app developed in:

-Karpathy, A., Miller, S., and Fei-Fei, L., “Object Discovery in 3D Scenes via Shape Analysis”.

This automatic object discovery will be useful for robotics and autonomous vehicles, since it will be able to find different classes of objects. This approach compares different parameters of an object to classify it into a class and to tell objects apart. The parameters that define an object are based on the “Objectness measures” part of the paper. That part is the core of the app and the most important piece, and probably a good place to start coding because it is quite easy to implement.