We are looking for talented contributors willing to develop open-source, efficient compression mechanisms for organized and unorganized 3D point cloud data in two separate code sprint projects. In both cases, the ultimate goal is to achieve as much compression as possible, but we will require an analysis of the compression and decompression speeds that can be obtained. A hard requirement is the ability to process a few million points per second on a standard laptop. In addition, we will analyze and compare lossy versus lossless compression techniques.
The data sources are varied, ranging from terrestrial to mobile and aerial point clouds. Besides XYZ coordinates, we expect the datasets to contain an additional intensity and/or color value per point. Each dataset will contain standard meta information, and in the case of lossy compression, certain error limits will need to be specified and satisfied.
The organized data format consists of a series of scan lines (the acquisition platform may be fixed-position terrestrial, mobile, aerial, etc.). Different lines may have different point spacings and numbers of points, and a given position (ray) may have multiple returns. The image below shows the very high density of the cloud, which makes it look like a 3D picture from any angle.
PCL-LGCS will run for 3 months during Q1 2013. Potential candidates should submit the following information to firstname.lastname@example.org:
- a brief resume
- a list of existing PCL contributions (if any)
- a list of projects they have contributed to in the past (with an emphasis on open source projects, please)
This project requires good C++ programming skills, knowledge of PCL internals and a basic understanding of laser sensors and compression algorithms.