PCL Developers blog

All blog posts

Final report and code committed
Friday, March 01, 2013
../_images/nvcs.png

Over the last two weeks I’ve finished the NPP implementation of the Haar cascade detector; it is now fully ported to PCL. I’ve also integrated Alex Trevor’s organized multi-plane segmentation into the people API, so that users can do background subtraction based on the planes found in the image. This week I wrote the final report summarizing the work done as well as how to use the module.

Design considerations and a short example
Saturday, July 28, 2012
../_images/nvcs1.png

Underlying containers

The first important design choice for the mesh implementation is the data structure in which the individual mesh elements (vertices, half-edges, faces) are stored. The three most suitable containers are the vector, deque and list (arrays are only good for meshes with a known size). The list has the advantage that pointers to its elements are not invalidated when elements are inserted or erased. This makes it possible to store pointers (iterators) to the connected neighbors directly in each mesh element. No further steps are required if elements are removed from the mesh. The disadvantage is a slower traversal through the list, which is a very common operation, for example when the mesh is transformed or rendered.

The vector and deque provide very similar functionality while using a different memory management. For both it is not possible to directly store iterators in the mesh elements because these are invalidated when elements are inserted or erased in the middle of the container. Therefore it is necessary to replace the iterators with indices to the connected neighbors. Since indices are not dereferenceable, one has to call a method of the mesh to access the element at the given index.

Another problem is that if one element is removed in the middle of the container, all subsequent indices have to be shifted and all elements that store an index to a shifted element have to be updated accordingly. This is a very time consuming operation. Therefore it is better to mark the mesh elements as deleted instead of erasing them. When the user has finished all desired deletions, a clean-up method has to be called that adapts all indices in one run.
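
A minimal sketch of this mark-and-clean-up idea, using hypothetical types and names (the actual mesh implementation differs):

#include <cstddef>
#include <vector>

// Hypothetical mesh element: the real elements also store connectivity indices.
struct Element
{
  bool deleted;
};

std::vector<Element> elements;

// First pass of the clean-up: compute the mapping from old to new indices in one run.
// A second pass (not shown) compacts the vector and remaps every stored neighbor
// index through new_index.
std::vector<int> computeIndexMapping ()
{
  std::vector<int> new_index (elements.size (), -1); // -1 marks deleted elements
  int n = 0;
  for (std::size_t i = 0; i < elements.size (); ++i)
    if (!elements [i].deleted)
      new_index [i] = n++;
  return new_index;
}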

In experiments that I did for my diploma thesis I noticed that the list is much slower than the vector for my intended application. I have to do further experiments comparing the vector with the deque, so for now I will stick with the vector.

Topology & geometry

Although in a mesh the geometry is stored together with the topology, I want to keep them separated in the implementation. I understand the mesh class as a container itself (a quite complex one) which provides methods for accessing and changing the topology while the user is responsible for the geometry: The mesh class should make no assumptions about, for example, the existence of a point position, normal, color, or any other possible data because they are irrelevant for the topology. It is even possible to define a mesh without any geometry at all. This might not be very useful for an application, but it has the advantage that the definition of arbitrary user data is very easy:

// Define the mesh traits
struct MeshTraits
{
  typedef pcl::PointXYZ         VertexData;
  typedef pcl::geometry::NoData HalfEdgeData;
  typedef int                   EdgeData;
  typedef pcl::Normal           FaceData;

  typedef boost::false_type IsManifold;
};

// Define the mesh
typedef pcl::geometry::PolygonMesh <MeshTraits> Mesh;
Mesh mesh;

The example defines a non-manifold polygon mesh which stores a PointXYZ for the vertices, nothing for the half-edges, an integer for the edges and a normal for the faces. The data is stored in a pcl::PointCloud <T> and can be retrieved by the following methods:

Mesh::VertexDataCloud&   vertex_data_cloud    = mesh.getVertexDataCloud ();
Mesh::HalfEdgeDataCloud& half_edge_data_cloud = mesh.getHalfEdgeDataCloud (); // Note: Empty for the example above
Mesh::EdgeDataCloud&     edge_data_cloud      = mesh.getEdgeDataCloud ();
Mesh::FaceDataCloud&     face_data_cloud      = mesh.getFaceDataCloud ();

The topology is accessed through the mesh. Each method takes the respective index as input:

// Vertex connectivity
HalfEdgeIndex ind = mesh.getOutgoingHalfEdgeIndex (vertex_index);
HalfEdgeIndex ind = mesh.getIncomingHalfEdgeIndex (vertex_index);

// Half-edge connectivity
VertexIndex   ind = mesh.getTerminatingVertexIndex (half_edge_index);
VertexIndex   ind = mesh.getOriginatingVertexIndex (half_edge_index);
HalfEdgeIndex ind = mesh.getOppositeHalfEdgeIndex  (half_edge_index);
HalfEdgeIndex ind = mesh.getNextHalfEdgeIndex      (half_edge_index);
HalfEdgeIndex ind = mesh.getPrevHalfEdgeIndex      (half_edge_index);
FaceIndex     ind = mesh.getFaceIndex              (half_edge_index);
FaceIndex     ind = mesh.getOppositeFaceIndex      (half_edge_index);

// Face connectivity
HalfEdgeIndex ind = mesh.getInnerHalfEdgeIndex (face_index);
HalfEdgeIndex ind = mesh.getOuterHalfEdgeIndex (face_index);

Circulators

The mesh provides several circulators:

  • VertexAroundVertexCirculator (clockwise)
  • OutgoingHalfEdgeAroundVertexCirculator (clockwise)
  • IncomingHalfEdgeAroundVertexCirculator (clockwise)
  • FaceAroundVertexCirculator (clockwise)
  • VertexAroundFaceCirculator (counterclockwise)
  • InnerHalfEdgeAroundFaceCirculator (counterclockwise)
  • OuterHalfEdgeAroundFaceCirculator (counterclockwise)
  • FaceAroundFaceCirculator (counterclockwise)

Incrementing the circulators around the vertex circulates clockwise while incrementing the circulators around the face circulates counterclockwise when looking at the mesh from the outside. The reason is that these operations don’t need to access the previous half-edge, which might become an optional part of the mesh in the future. The circulators around the face can also be used to move along the boundary of the mesh.

Circulators differ from iterators in that they don’t have distinct begin and end positions. It is valid to circulate through a sequence endlessly. Usually one wants to access all elements only once. This can be achieved by the following procedure:

// NOTE: {...} stands for any of the circulators and (...) stands for the respective input

{...}Circulator circ     = mesh.get{...}Circulator (...);
{...}Circulator circ_end = circ;

do
{
  // do something
} while (++circ != circ_end);

Example

I added a short example to pcl-trunk/examples/geometry/example_half_edge_mesh.cpp. It defines a non-manifold mesh and creates a simple manifold topology, which is traversed in different ways. Then it deletes two faces, resulting in a non-manifold topology. If you change to a manifold mesh, further faces are deleted in order to keep the mesh manifold.

Please let me know if you find any bugs. And since the implementation of the mesh is still evolving I am also very happy about comments regarding the API.

Representation of manifold and non-manifold meshes
Saturday, July 14, 2012
../_images/nvcs1.png

In this blog post I will talk about the representation of a mesh and how it is affected by the manifold property. Probably the most convenient way for the user to generate a mesh is to first add the vertices and then connect them by faces while the edges are created automatically. This way the mesh class can ensure that it holds only valid data. For example, if the user tries to add the same face twice, then only the first insertion is executed. Unconnected (isolated) vertices can be removed afterwards without changing the rest of the topology. A face is specified by a sequential list of vertices which are connected in a closed loop.

Manifold mesh

In a manifold mesh faces may be added only between boundary half-edges. We can make use of the property that each boundary vertex in a manifold mesh has exactly one incoming and one outgoing boundary half-edge. If a vertex does not lie on the boundary, then none of its incoming and outgoing half-edges are on the boundary either. Therefore it is useful to ensure that the outgoing half-edge of a boundary vertex is a boundary half-edge. This way we can easily check if a face may be added without the need to circulate through all half-edges that are connected to each vertex in the new face. We can also check in constant time if an edge between two vertices is new or already contained in the mesh.

There are two possible problems that might be a concern for certain applications. The first problem is only relevant for a triangle mesh: If there are several separate components in the mesh, then it is not possible to join them by only one triangle without creating non-manifold vertices (illustrated in the image below): The dashed triangle may not be added because the encircled vertex would become non-manifold. This problem can be circumvented by adding triangles in pairs in such cases. If this is not desired then one should consider using a quad or polygon mesh or a mesh that can handle non-manifold vertices.

Combining two separated components

The second problem appears when faces are removed from the mesh: The deletion of a face can result in non-manifold vertices. For example, if face number 5 is deleted from the mesh in the next image (left), the encircled vertex becomes non-manifold. This problem can be solved by deleting the neighboring faces until the vertex becomes manifold again (right): Starting from an edge in the deleted face (number 5) we circulate around the vertex and remove all faces (numbers 4 and 3) until the boundary is reached. This, however, can result in further non-manifold vertices (encircled) which have to be removed iteratively until the whole mesh becomes manifold again. These operations are part of the implementation of the manifold mesh so the user does not have to worry about them. Again, if this behavior is not desired one should use a mesh that can handle non-manifold vertices.

Removing faces

Mesh with non-manifold vertices

There was a very nice description of how to handle non-manifold vertices online; however, the site is currently down (please let me know if you see it online again). Although my implementation of the mesh is different in several aspects (mainly due to a different API), the basic idea is the same.

In a mesh with non-manifold vertices it can no longer be guaranteed that every boundary vertex has exactly one incoming and outgoing boundary half-edge. However, it is still useful to store one of the boundary half-edges as the outgoing half-edge for boundary vertices. This way it is possible to check if a vertex is on the boundary or not, but it becomes necessary to circulate around all neighboring half-edges in order to check if two boundary vertices are already connected.

The problem with non-manifold vertices is that the current connectivity around them might be incompatible with the insertion of a new face. However, all possible consistent cases are topologically correct and the mesh does not know beforehand which faces are meant to be inserted next. Therefore it is necessary to make the half-edges adjacent before the insertion of a new face. The next image gives an example of how the half-edges around a non-manifold vertex can be reconnected (the connectivity between the half-edges is represented as a continuous line). The insertion of a new face between the half-edges 3-0 and 0-2 (image on the left), for example, can be done without any further intervention. However, for the insertion of a new face between the half-edges 1-0 and 0-8 it becomes necessary to reconnect the half-edges around vertex 0 in such a way that the half-edges 1-0 and 0-8 become adjacent, as shown in the image on the right. Although in this case the operation resulted in other non-adjacent half-edges, these can be corrected during the insertion of further faces.

Make adjacent with consistent orientation

The next image shows an invalid configuration that can occur in certain situations. For example, if we tried to insert a face between the edges 3-0 and 0-4, then the half-edges around vertex 0 would be reconnected in such a way that the closed loop of half-edges is split up into two loops. The problem is that vertex 0 stores only one outgoing half-edge, which can point into only one of the loops. The other loop is then “unknown” to the vertex because it can’t be reached anymore by circulating around the vertex. This configuration is not supported by the half-edge data structure and must be avoided. Luckily all of these operations can be hidden inside the mesh implementation so the user does not have to worry about them.

Non-representable non-manifold vertex.

Mesh with non-manifold edges

The linked page describes how the half-edge data structure can be modified in order to handle non-manifold edges and non-orientable meshes:

The basic idea is to introduce virtual triangles between non-manifold edges. Virtual triangles are not exposed to the user but they allow keeping the mesh manifold internally. They also allow modeling non-orientable surfaces.

My implementation currently does not support virtual triangles. I think that the implementation of a non-orientable mesh with non-manifold vertices and edges would become really messy if it is not done right, and I would like to test the mesh with non-manifold vertices first before continuing with more elaborate representations. Furthermore, it is also worth considering other mesh data structures which might be better suited for representing arbitrary meshes. If there is a need for it, I can have a more in-depth look into virtual triangles after my code sprint. But for now I want to keep things simple. In my next post I will give a short API overview of my current implementation and add everything to the trunk.

Half-edge data structure
Monday, July 02, 2012
../_images/nvcs1.png

The half-edge data structure is a popular representation for a mesh in which each edge is split up into two half-edges with opposite directions. The main topology information is stored in the half-edges. The vertices and faces are not explicitly connected to each other because they can be reached through the respective half-edges. If one wants to find, for example, all faces to which a vertex is connected, then it is necessary to go through all neighboring half-edges and refer to the faces through them. Compared to explicitly storing references from each vertex or face to all of their immediate neighbors, this has the advantage that the storage space per vertex, half-edge and face is constant while it is still possible to access the neighborhood without an exhaustive search through the whole mesh. Another advantage is that the orientation of the faces is represented in the surrounding half-edges.

Half-edge connectivity:

Half-edge

The image above illustrates the connectivity information related to a specific half-edge (red), which explicitly stores indices to

  • the opposite half-edge (also called pair or twin)
  • the next half-edge
  • the previous half-edge (this is actually not a requirement of the half-edge data structure but it is useful for the implementation)
  • the terminating vertex
  • the face it belongs to

Other elements in the mesh can be reached implicitly (gray)

  • originating vertex = opposite half-edge -> terminating vertex
  • opposite face = opposite half-edge -> face

By using a convention in the implementation, even the opposite half-edge can be accessed implicitly in constant time and its storage space can be saved: If half-edges are always added in pairs, then their respective indices come right after each other. Given a half-edge index, we can then derive the opposite index simply by checking if it is even or odd.
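
A minimal sketch of this even/odd convention (illustrative only, not necessarily the exact PCL code):

// Half-edges are stored in pairs: a half-edge with an even index i has its
// opposite at i + 1, and a half-edge with an odd index has its opposite at i - 1.
inline int getOppositeHalfEdgeIndex (const int index)
{
  return (index % 2 == 0) ? (index + 1) : (index - 1);
}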

Vertex connectivity:

Vertex

The image above illustrates the connectivity information related to a specific vertex (red), which explicitly stores an index to

  • one of its outgoing half-edges

The corresponding incoming half-edge can be reached implicitly (gray)

  • incoming half-edge = outgoing half-edge -> opposite half-edge

Although only one of the outgoing half-edges is stored the others can be reached by circulating around the vertex. Given any of the outgoing half-edges the next outgoing half-edge can be reached by

  • outgoing half-edge = outgoing half-edge -> previous half-edge -> opposite half-edge (counter-clockwise)
  • outgoing half-edge = outgoing half-edge -> opposite half-edge -> next half-edge (clockwise)

This procedure has to be continued until a full loop around the vertex has been completed. Similarly it is possible to use the incoming half-edge for the circulation

  • incoming half-edge = incoming half-edge -> opposite half-edge -> previous half-edge (counter-clockwise)
  • incoming half-edge = incoming half-edge -> next half-edge -> opposite half-edge (clockwise)

With a slight modification this allows us to access the vertices or faces as well (only shown in counter-clockwise order)

  1. vertex = outgoing half-edge -> terminating vertex
  2. outgoing half-edge = outgoing half-edge -> previous half-edge -> opposite half-edge

or

  1. face = outgoing half-edge -> face
  2. outgoing half-edge = outgoing half-edge -> previous half-edge -> opposite half-edge

This procedure is continued until a full loop is completed. Using these basic operations it is possible to find all neighbors around a vertex, which is also called the one-ring neighborhood.
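
As a sketch, the counter-clockwise one-ring traversal could look as follows. The accessor names mirror the index-based mesh interface shown in the post “Design considerations and a short example” and should be treated as assumptions here, as should the Mesh, VertexIndex and HalfEdgeIndex types:

#include <vector>

// Collect the one-ring neighborhood of a vertex by hopping
// previous half-edge -> opposite half-edge, starting at the stored outgoing half-edge.
std::vector<VertexIndex> getOneRing (const Mesh& mesh, const VertexIndex vertex_index)
{
  std::vector<VertexIndex> neighbors;
  const HalfEdgeIndex start = mesh.getOutgoingHalfEdgeIndex (vertex_index);
  HalfEdgeIndex       out   = start;
  do
  {
    neighbors.push_back (mesh.getTerminatingVertexIndex (out));            // 1. neighbor vertex
    out = mesh.getOppositeHalfEdgeIndex (mesh.getPrevHalfEdgeIndex (out)); // 2. next outgoing half-edge (ccw)
  } while (out != start);
  return neighbors;
}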

Face connectivity

Vertex

The image above illustrates the connectivity information related to a specific face (red), which explicitly stores an index to

  • one of the inner half-edges

The corresponding outer half-edge can be reached implicitly (gray)

  • outer half-edge = inner half-edge -> opposite half-edge

As for the vertices the other inner half-edges can be reached by circulating around the face

  • inner half-edge = inner half-edge -> next half-edge (counter-clockwise)
  • inner half-edge = inner half-edge -> previous half-edge (clockwise)

Each outer half-edge or vertex is referenced from its corresponding inner half-edge.

Boundaries

Boundaries can be represented by an invalid face index in the half-edge. It is important that both half-edges are kept in the mesh because otherwise it would no longer be possible to access all neighbors. A half-edge with an invalid face index is called a boundary half-edge. Accordingly, a vertex that is connected to a boundary half-edge is called a boundary vertex.

One can circulate around the boundary with

  • boundary half-edge = boundary half-edge -> next half-edge (clockwise)
  • boundary half-edge = boundary half-edge -> previous half-edge (counter-clockwise)

This is the same as circulating through the inner half-edges of a face with the only difference that the direction (clockwise, counter-clockwise) is reversed.

If the mesh has a boundary then one has to be very careful when accessing the faces through the half-edges in order to avoid dereferencing an invalid face index.
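
A small sketch of such a guard, assuming the index type provides an isValid () test and reusing the accessor names from above (assumptions, not the final interface):

// Only dereference the face if the half-edge actually has one.
bool tryGetFaceIndex (const Mesh& mesh, const HalfEdgeIndex half_edge_index, FaceIndex& face_index)
{
  face_index = mesh.getFaceIndex (half_edge_index);
  return face_index.isValid (); // false for boundary half-edges (no face on this side)
}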

Manifoldness

I will talk about manifoldness in my next blog post.

Mesh basics
Wednesday, June 27, 2012
../_images/nvcs1.png

It’s been a while since my last blog post. I have been moving into a new home and started a new job. I hope I can find the right balance between this and the code sprint. Before starting with the half-edge data structure I would like to introduce a few mesh basics, just to make the terminology clear and consistent. Here are two links that should provide more in-depth information:

A polygon mesh is a data structure where the geometry is stored along with the topology (connectivity information). The points or nodes inside a mesh are called vertices and hold the geometry. An edge is a connection between two vertices. Three or more edges that form a closed loop are called a face, e.g. a triangle, quadrilateral, pentagon, or a general polygon. The edges and faces define the topology. A mesh that consists only of triangles is called a triangle mesh, a mesh that consists only of quadrilaterals is called a quad mesh, and so on.

The mesh has no boundary if each edge is connected to exactly two faces. If any edge is connected to only one face then the mesh has a boundary, shown in the next image (taken from the Geometry Processing Algorithms lecture, Basics).

Mesh with boundary

If any edge is connected to more than two faces (left) or if two or more faces are connected by only one vertex (middle, right) then the mesh becomes non-manifold.

Non-manifold configurations

Another property to consider when working with meshes is the orientation of the faces. If the faces in the mesh have a consistent orientation (all clockwise or all counter-clockwise), then the mesh is orientable (left). A non-orientable configuration is shown on the right. An example of a non-orientable surface is the Möbius strip.

Orientability

XML interface for people
Wednesday, June 20, 2012
../_images/nvcs.png

I’ve added an XML interface to load/save person-specific configuration files. The generic XML file is also saved in trunk. This will allow people to tune the kinematic chain to themselves in order to improve the tracking results. This XML interface is currently already in the process of being extended to a v0.2 which will also include the directed acyclic graph (DAG) description of the kinematic chain tree, allowing users to reconfigure it and place the root at a different position. The XML interface is now part of the PersonAttribs class and can be used in the examples with the -XML tag. For the summer I’ll have a student helping me with the training process, as we now have access to the AWS cluster to enhance our training facilities. We will also be looking into training on GPU. If anybody has good knowledge about how to achieve this (RDF training in CUDA) and is willing to point this out to me or to help out, please send me an email!

Face detection in PCL
Tuesday, June 19, 2012
../_images/nvcs.png

I’ve added the FaceDetector class framework and committed it; in the background I’ve also started porting the NPP face detector implementation from Nvidia to PCL. However, as there is still some uncertainty about its license, I’m currently waiting for a reply from Nvidia before I commit it to trunk. This includes a GPU Viola-Jones implementation from Anton Obukhov, “Haar Classifiers for Object Detection with CUDA”, explained in the GPU Computing Gems book.

Gaussian kernel convolution
Sunday, June 17, 2012
../_images/nvcs.png

I’ve added a GPU Gaussian kernel convolution; this makes it possible to add a system model to the human in order to do sensor fusion on the pixel labels. The prediction step will make use of the Gaussian uncertainty. I’ve also added the Gaussian kernel generation function.
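
For illustration, here is a minimal CPU sketch of generating a normalized 1D Gaussian kernel; the function name and signature are assumptions, not the actual PCL GPU interface:

#include <cmath>
#include <vector>

// Build a normalized 1D Gaussian kernel of odd size for the given sigma.
std::vector<float> makeGaussianKernel (const int size, const float sigma)
{
  std::vector<float> kernel (size);
  const int half = size / 2;
  float     sum  = 0.f;
  for (int i = 0; i < size; ++i)
  {
    const float x = static_cast<float> (i - half);
    kernel [i] = std::exp (-(x * x) / (2.f * sigma * sigma));
    sum += kernel [i];
  }
  for (int i = 0; i < size; ++i)
    kernel [i] /= sum; // make the weights sum to one
  return kernel;
}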

Preview: Finger Tracking + Android 4.0 + Tegra 3 + Kinect + OpenNI + OpenCV
Monday, June 11, 2012
../_images/nvcs3.png

In the coming tutorial, we will show you how to implement the ‘well-known’ finger tracking algorithm on Kinect + Android + OpenNI + Tegra3 + OpenCV. The algorithm is indeed very simple and computationally efficient. We can achieve real-time performance easily on Tegra 3!

We will be using the range camera (depth map) to create a contour of our hand. Then, we will perform a simple polygonal approximation of the contour and extract the convex hull (the peaks) as our fingertips. All of these operations can happen in real-time on the Tegra 3 @ 25fps! That’s pretty impressive.

The requirements are as follows:

1. Tegra 3 or Tegra 2 Android device.
2. Microsoft Kinect or Asus Xtion (not tested)
3. NVIDIA NDK + OpenNI binary (see my previous tutorials)
4. OpenCV binary (will be provided below)
5. and 10 mins of your time :)

First of all, the method provided here is simplified for real-time usage, and we can get better results if we have a better way of extracting the hand location and filtering the depth map (see the Limitation section at the end). Right now, we are thresholding on a range, i.e., we assume that the hand of the person is positioned closest to the range camera.
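
A minimal OpenCV sketch of this pipeline (depth-range thresholding, contour extraction, polygonal approximation, convex hull); the thresholds and function layout are illustrative assumptions, not the exact code running on the Tegra:

#include <opencv2/opencv.hpp>
#include <vector>

// depth: 16-bit depth map (CV_16UC1) from OpenNI; returns fingertip candidates.
std::vector<cv::Point> findFingerTipCandidates (const cv::Mat& depth)
{
  // 1. Threshold on a depth range: assume the hand is the closest object (values in mm).
  cv::Mat mask;
  cv::inRange (depth, cv::Scalar (400), cv::Scalar (800), mask);

  // 2. Extract the largest external contour (the hand silhouette).
  std::vector<std::vector<cv::Point> > contours;
  cv::findContours (mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  if (contours.empty ())
    return std::vector<cv::Point> ();
  std::size_t largest = 0;
  for (std::size_t i = 1; i < contours.size (); ++i)
    if (cv::contourArea (contours [i]) > cv::contourArea (contours [largest]))
      largest = i;

  // 3. Simplify the contour and take its convex hull; the hull points (peaks)
  //    serve as fingertip candidates.
  std::vector<cv::Point> approx, hull;
  cv::approxPolyDP (contours [largest], approx, 10.0, true);
  cv::convexHull (approx, hull);
  return hull;
}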

Depth Segmentation and Contour Extraction
Bounding Boxes
Simple Gesture recognition - finger and grasp detection
Snapshot: Tegra 3 Tablet + Microsoft Kinect + My custom 12V battery
Hand Tracking with 3D PointCloud Data rendering in real-time on Tegra 3.

Source code:
PCL svn repository (in about two weeks)

What’s next?
Use the OpenNI hand tracking to create several interfaces. Right now we have the point cloud data visualization on Android controlled by the Kinect at 15-20 fps (with about 2 million points).

Label histograms finished and working great
Monday, June 11, 2012
../_images/nvcs.png

I’ve finished the probabilistic label histograms in the people library. These are now part of the RDF detector class and will be added to the future detector classes as well; this will allow for easy probabilistic merging of the different detectors in order to improve the tracking results. The downside is that calculating them takes a lot of time, so in the future I will look into doing them in NPP and reorganising them from an AoS to an SoA layout, with the SoA being image oriented.
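
To illustrate the layout change, here is a rough sketch of the two memory organisations (the struct names and label count are hypothetical):

#include <vector>

static const int NUM_LABELS = 32; // hypothetical number of body-part labels

// Array of Structures (AoS): one histogram object per pixel.
struct PixelHistogram
{
  float bins [NUM_LABELS];
};
typedef std::vector<PixelHistogram> HistogramsAoS; // size: width * height

// Structure of Arrays (SoA): one image-sized probability map per label,
// which maps better to coalesced, image-oriented GPU access.
struct HistogramsSoA
{
  std::vector<float> label_map [NUM_LABELS]; // each of size width * height
};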

Exams and stuff
Sunday, June 10, 2012
../_images/nvcs.png

It’s been a while since I wrote a blog post because I’m currently very occupied with TA duties and correcting exams. In the process we also started discussing how to redesign the PCL GPU sublibrary for the next releases of PCL. The goal is to remove the PCL CUDA sublibrary by PCL 2.x and keep only the PCL GPU sublibrary. I’m also thinking about a possible redesign of the DeviceMemory and DeviceArray structures that are currently being used. Feel free to contact me or Anatoly for pointers on this.

Hello NVCS
Saturday, June 09, 2012
../_images/nvcs1.png

Hi, this is my first blog post for NVCS so I would like to introduce myself shortly. My name is Martin Sälzle and I recently graduated from the Technische Universität München. I have been working with the Kinect for my diploma thesis on 3D in-hand scanning of small, texture-less objects. The work is based mainly on [Weise2009] with some personal flavors. I concentrated on the implementation without performing an active loop closure because I first wanted to get the registration pipeline as robust as possible. A bundle adjustment would surely help in the future. Here is the scanning setup (left) and some results (right):

Kinect in-hand scanner & scanned objects

For an evaluation I downloaded the model of the Stanford Bunny (left) and 3D printed it at www.shapeways.com (middle). I then scanned the real-world object again with the Kinect (right) and compared it to the original model.

Stanford Bunny

I registered the scanned model to the original model and colored the original model according to the scanning error (Euclidean distance between corresponding points). A histogram is shown on the right.

Evaluation

My first task is to integrate the code into PCL. I am currently implementing the half-edge data structure because I reached the limits of the face-vertex mesh I used in my thesis. In the next post I will talk about that in more detail. Due to licensing issues we can’t use CGAL or OpenMesh in PCL.

[Weise2009] Weise, Wismer, Leibe, Van Gool. In-hand scanning with online loop closure. In ICCV 2009, pp. 1630–1637.

Training pipeline
Thursday, May 31, 2012
../_images/nvcs.png

I finally found the time to start making the training pipeline code public. The Makefiles of the public version still need to be adapted, and the code will most likely see multiple changes and API breaks within the next weeks, but it can be found here: http://svn.pointclouds.org/people/. There is also a chance that this will move to trunk/people in the future once it is fully in the PCL namespace.

Training pipeline
Tuesday, May 29, 2012
../_images/nvcs.png

I had a great time presenting my work at ICRA, but a small bug was still present in the public version: from time to time the background was displayed as if it were the correct kinematic chain. This is now solved in the current trunk version; it happened because the result was displayed without a check that the correct tree had been built.

OpenNI + Android FAQ
Thursday, May 24, 2012
../_images/nvcs3.png

Recently, we have received several emails about the OpenNI and Android development. Here we will address some of the most frequently asked questions and some of the future directions about this project.

FAQ:

Q: What tablet have we used?

Ans: We’ve used the Tegra 2 and Tegra 3 development boards provided by NVIDIA (http://developer.nvidia.com/tegra-development-kits). I can safely assume that as long as your tablet is as capable as a Tegra 2, the OpenNI library should be able to grab the depth map and color images at 640x480 resolution at full speed. However, I have experienced some hiccups with the OpenNI drivers; in particular, the color image sometimes gets corrupted.

Q: What are the minimum OS requirements?

Ans: We’ve tested everything from Android 3.0 to Android 4.x. However, I have noticed better performance on Android 3.x; that’s something I’m currently looking into.

Q: How can I tell if my tablet works with the Kinect?

Ans: Try dmesg under ‘adb shell’. You should be able to see that the Microsoft Kinect is recognized as a USB hub with camera, motor, and audio devices. If you cannot see any USB devices attached, we need to look into that further.

Q: I’m having trouble getting the ‘multitouch’ sample code to compile.

Ans: We have noticed that with the NDK package from NVIDIA (http://developer.nvidia.com/tegra-resources), some dependencies have gone missing in the latest SDK. I’m currently working on this (basically rewriting part of the code to conform with the “Android Application Lifecycle in Practice: A Developer’s Guide”, http://developer.nvidia.com/sites/default/files/akamai/mobile/docs/android_lifecycle_app_note.pdf). Please check this blog again in the future for a more self-contained package.

Q: I’m getting errors when compiling the OpenNI library. What should I do?

Ans: We highly recommend using the binary (as provided in the last tutorial) the first time. I will provide a sample script for recompiling in the future.

Q: How do I push the .so libraries to the device?

Ans: Please read the push_lib.sh file in the package. We’ve provided a few comments on how to remount the /system directory for read-write.

Q: Who’s raymondlo84?

Ans: I’m a Ph.D. student studying ECE at UofT. I’m currently working with PCL and NVIDIA’s summer code program; feel free to let us know what you think about the Android project.

Thank you.

Quick Tutorial: A simple C++ OpenNIWrapper for using OpenNI on Android Devices
Monday, May 14, 2012
../_images/nvcs3.png
Important: In this tutorial, we assume that you have already installed the OpenNI shared libraries to your device as previously discussed. If you haven’t done so, please follow our previous post and adb push the binary libraries to the proper locations. Additionally, you are required to run
mount -o devmode=0666 -t usbfs none /proc/bus/usb

every time after you reboot your device. The example code provided below has been tested on the NVIDIA Tegra 3 dev board and Microsoft Kinect only. If you would like to use an Xtion or other range sensors, you may need to compile the drivers and make appropriate updates to the modules.xml file. For more information, see our previous post.

To get started, we will first introduce the simple C++ wrapper we have written for handling the OpenNI calls. In this example, we will only be handling the image buffers: the color image buffer (24-bit RGB) and the depth image buffer (16-bit, single channel). I believe we can also record audio with the Kinect (?), but we have not verified that yet. The relevant declarations are in our OpenniWrapper.h (see the download links below).

Sample Usage

To get started, you can download the header and cpp file here
http://openvidia.svn.sourceforge.net/viewvc/openvidia/tegra_kinect/jni/OpenniWrapper.cpp?view=log http://openvidia.svn.sourceforge.net/viewvc/openvidia/tegra_kinect/jni/OpenniWrapper.h?view=log

Our wrapper consists of three main function calls: start(), release(), and WaitAndUpdate(). By default, the OpenNI wrapper initializes the depth map and RGB image streams and runs in a separate thread (i.e., non-blocking) after start(). To obtain the depth map or RGB images, we simply call WaitAndUpdate() (a blocking call) and then provide pointers into which the depth and RGB images are copied. The OpenniWrapper does not provide any memory allocation; it is our responsibility to handle the malloc and free.

...
#include "OpenniWrapper.h"
...
OpenniWrapper *openni_driver;
...
int main()
{
  ...

  openni_driver = new OpenniWrapper();

  ...

  // initialize the driver
  if (!openni_driver->start()) {
    // something is wrong, see the log files
    return 1;
  }

  int width  = openni_driver->getWidth();
  int height = openni_driver->getHeight();

  // the wrapper does not allocate the buffers for us
  unsigned char  *rgb_buffer   = (unsigned char *)  malloc(width*height*3*sizeof(unsigned char));
  unsigned short *depth_buffer = (unsigned short *) malloc(width*height*sizeof(unsigned short));

  while (run_me) {
    openni_driver->WaitAndUpdate();        // blocking call
    openni_driver->getRGB(rgb_buffer);     // copy the latest color frame
    openni_driver->getDepth(depth_buffer); // copy the latest depth frame
    ...
    process_buffers(); // can be multithreaded
    ...
  }

  // release the resources
  openni_driver->release();
  free(rgb_buffer);
  free(depth_buffer);
}

To compile the source with the OpenNI libraries, we need to add the OpenNI header and shared library paths to the Android.mk file. In particular, I have added:
LOCAL_LDFLAGS += -Lopenni_kinect_include/libs

LOCAL_LDLIBS += -lOpenNI

LOCAL_C_INCLUDES += openni_kinect_include/include

You can find the openni_kinect_include/include directory in the SVN repository below. Or you can simply create that directory by copying the headers from the OpenNI source directly.

Performance
CPU usage and FPS:
PID PR CPU% S #THR VSS RSS PCY UID Name
12539 2 60% R 14 532164K 65048K fg app_39 com.nvidia.devtech.multi
I/Render Loop:(12539): Display loop 0.014692 (s) per frame -> over 60 fps for rendering; the capture loop is threaded.

OpenNI Binary (Kinect Only)

http://www.eyetap.org/~raymondlo84/multitouch/openni_binary.tgz

Sample Source Code

http://openvidia.svn.sourceforge.net/viewvc/openvidia/tegra_kinect/

Point cloud streaming to mobile devices with real-time visualization
Wednesday, May 02, 2012
../_images/nvcs2.png

The following video demonstrates point cloud streaming to mobile devices with real-time visualization. The point clouds are streamed from a desktop server to mobile devices over wi-fi and 4G networks. Point clouds are captured using the PCL OpenNI grabber with a Microsoft Kinect sensor. Visualization on the mobile devices is performed inside KiwiViewer using the VES and Kiwi mobile visualization framework.

The rendering frame rate is decoupled from the streaming rate. In the video, the app streams roughly ten clouds per second. In a follow-up blog post I’ll describe the implementation of the current code and list future development goals. In the meantime, you can reach me on the PCL developers mailing list or the VES mailing list. For more information about VES and Kiwi, the mobile visualization framework used in the demonstration, please see the VES wiki page.

point cloud streaming app on mobile phone

Porting OpenNI to Android 4.0 + Microsoft Kinect + Tegra 2 + Sample Code
Wednesday, April 25, 2012
../_images/nvcs3.png

In this tutorial, we will show you how to compile the OpenNI shared libraries and install them on your Android devices. Also, we will show you how to extract the depth map information from the Microsoft Kinect with the SensorKinect driver (note: the OpenNI framework does not come with any drivers! Instead it dynamically loads the modules at runtime!). Before we start, I assume that you have already installed the following packages. I have tested this setup on my Mac OSX 10.6.8, and I believe Linux users should not have any problems replicating these steps. Any Windows users? :)

System Requirements:

First, let’s get the OpenNI sources from the Git repository

cd ~/NVPACK/android-ndk-r7b/sources
cd OpenNI/Platform/Android/jni
ndk-build

If everything goes well, we will see the following files in this directory.

ls ~/NVPACK/android-ndk-r7b/sources/OpenNI/Platform/Android/libs/armeabi-v7a/
Sample-SimpleRead libOpenNI.so libnimRecorder.so niReg
Sample-SimpleSkeleton libnimCodecs.so libusb.so
libOpenNI.jni.so libnimMockNodes.so niLicense

Now, we will compile the SensorKinect Driver.

export NDK_MODULE_PATH=$HOME/NVPACK/android-ndk-r7b/sources/OpenNI/Platform/Android/jni
mkdir ~/NVPACK/openni
cd ~/NVPACK/openni
cd SensorKinect
git checkout faf4994fceba82e6fbd3dad16f79e4399be0c184
cd Platform/Android/jni
ndk-build

Again, if everything goes well, ndk-build will create another set of .so shared library files in this directory

ls ~/NVPACK/openni/SensorKinect/Platform/Android/libs/armeabi-v7a
libOpenNI.so libXnDDK.so libXnDeviceSensorV2.so libusb.so
libXnCore.so libXnDeviceFile.so libXnFormats.so

Finally, we are ready to push these libraries to the Android device. One problem is that the /system directory is read-only. Therefore, we have to remount this directory first:

adb shell
mount -o remount rw /system
mkdir /data/ni

Then, we push the packages to the device with these commands (these are our previously compiled .so files!)

adb push libOpenNI.so /system/lib
adb push libOpenNI.jni.so /system/lib
adb push libXnCore.so /system/lib
adb push libXnDDK.so /system/lib
adb push libXnDeviceFile.so /system/lib
adb push libXnDeviceSensorV2.so /system/lib
adb push libXnFormats.so /system/lib
adb push libusb.so /system/lib
adb push libnimCodecs.so /system/lib
adb push libnimRecorder.so /system/lib
adb push libnimMockNodes.so /system/lib

At runtime, the OpenNI framework will look for these shared libraries. To tell OpenNI which modules to load, we need to push the following .xml file to this directory. It is very important that all these files are put in the proper directories!

adb push data/in/modules.xml /data/ni

Here is the example modules.xml file we have used.

<Modules>
  <Module path="/system/lib/libnimMockNodes.so" />
  <Module path="/system/lib/libnimCodecs.so" />
  <Module path="/system/lib/libnimRecorder.so" />
  <Module path="/system/lib/libXnDeviceSensorV2.so" configDir="/data/ni/" />
  <Module path="/system/lib/libXnDeviceFile.so" configDir="/data/ni/" />
</Modules>

As we can see from modules.xml, the XnDeviceSensorV2 module will look for its configuration in the configDir (/data/ni/), so we need to adapt GlobalDefaultsKinect.ini and push it there:

vim ~/NVPACK/openni/SensorKinect/Data/GlobalDefaultsKinect.ini
#and we will need to set the flag UsbInterface to 1.
UsbInterface=1
#assume you are in that directory
adb push GlobalDefaultsKinect.ini /data/ni/

Lastly, we will copy the SamplesConfig.xml to the /data/ni/ directory as well. You can find this file in the sample code of OpenNI or in our svn repository.

adb push SamplesConfig.xml /data/ni/

That’s it! Now we have the latest OpenNI + Kinect driver compiled and installed on your Android device! In the upcoming tutorial, I will explain how we can use these drivers from the NDK with sample code, and more tricks are coming up (e.g., mount -o devmode=0666 -t usbfs none /proc/bus/usb)!

If you would like to check whether the modules are installed successfully, you can try running niReg on your Android device.

adb push niReg /system/bin

and run

root@android:/ # niReg -l (under adb shell)

and you should see something similar to...

. . .
/system/lib/libXnDeviceSensorV2.so (compiled with OpenNI 1.5.2.23):
Device: PrimeSense/SensorV2/5.1.0.41
Depth: PrimeSense/SensorV2/5.1.0.41
Image: PrimeSense/SensorV2/5.1.0.41
IR: PrimeSense/SensorV2/5.1.0.41
Audio: PrimeSense/SensorV2/5.1.0.41
. . .

Hirotaka Niisato (http://www.hirotakaster.com/) has also provided excellent tutorials on how to compile OpenNI on Android! I would like to thank him for providing some of the basic scripts for compiling the libraries! ;) However, I had trouble running the sample code he provided, so I have ported Sample-NiSimpleRead to run on Android instead. ;) Stay tuned!

Here is the complete script we have used: copy and paste these commands and that will get you started.

#adb shell
#mkdir /data/ni
#mount -o remount rw /system
#mount -o devmode=0666 -t usbfs none /proc/bus/usb

adb push system/lib/libOpenNI.so /system/lib
adb push system/lib/libOpenNI.jni.so /system/lib
adb push system/lib/libXnCore.so /system/lib
adb push system/lib/libXnDDK.so /system/lib
adb push system/lib/libXnDeviceFile.so /system/lib
adb push system/lib/libXnDeviceSensorV2.so /system/lib
adb push system/lib/libXnFormats.so /system/lib
adb push system/lib/libusb.so /system/lib
adb push system/lib/libnimCodecs.so /system/lib
adb push system/lib/libnimRecorder.so /system/lib
adb push system/lib/libnimMockNodes.so /system/lib
adb push modules.xml /data/ni/
adb push GlobalDefaultsKinect.ini /data/ni/
adb push SamplesConfig.xml /data/ni/

Data
Wednesday, April 25, 2012
../_images/nvcs.png

I’m recording a one-person test dataset today which also has Vicon measurement data in it. I’ll use the plug-in-gait model from Vicon to have something to compare the behavior of our algorithm against. As a first step this will just be a single person, in order to see how this data is best recorded and synchronised for a more extensive experiment later on. In the meantime, with the help of Anatoly’s CUDA experience, the peopleDetector now runs at a stable 6-8 fps, which can be considered adequate for online use. My hope is that bringing in the prior fusion will only help to speed up this process.

Roadmap
Sunday, April 22, 2012
../_images/nvcs.png
This is an overview of the things I will focus on in the coming week:
  • Add a probabilistic DeviceArray into the RDF CUDA code
  • Expand each tree vote into a more probabilistic 1/NR_TREES = 25% vote in this DeviceArray
  • Add a discretisation function (in a CUDA kernel) for this array, giving the final votes

This expansion allows the detector to do sensor fusion based on data that can be fetched from other detectors (like the face detector). A more theoretical explanation of this will follow.
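
As a rough illustration of the last step in the list above, here is a CPU sketch of the discretisation: per pixel, the accumulated probabilistic votes are reduced to the single most probable label (names and label count are hypothetical; the real version would be a CUDA kernel writing into a DeviceArray):

#include <vector>

static const int NR_LABELS = 32; // hypothetical number of labels

// prob holds NR_LABELS probabilities per pixel, accumulated as 1/NR_TREES per tree vote.
// Returns the most probable label for each pixel.
std::vector<int> discretiseVotes (const std::vector<float>& prob, const int n_pixels)
{
  std::vector<int> labels (n_pixels);
  for (int p = 0; p < n_pixels; ++p)
  {
    int   best      = 0;
    float best_prob = prob [p * NR_LABELS];
    for (int l = 1; l < NR_LABELS; ++l)
    {
      if (prob [p * NR_LABELS + l] > best_prob)
      {
        best_prob = prob [p * NR_LABELS + l];
        best      = l;
      }
    }
    labels [p] = best;
  }
  return labels;
}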

First blog entry
Saturday, April 21, 2012
../_images/nvcs.png

This is my first blog post. For the past two weeks I’ve been integrating the code for run-time RDF labeling into PCL together with Anatoly. The code is now arranged in the PeopleDetector class and all the CUDA code is converted to use the DeviceArray structures. For the Seeded Hue Segmentation (SHS) and Euclidean Labeled Clustering (ELEC) a custom function was implemented specific to our use case. For interested users: please keep on using the original versions in trunk/segmentation and don’t use the ones in gpu/people, as those are currently written specifically for this use case.