3D deep learning finds crucial applications nowadays in many domains, including robotics, autonomous driving, virtual reality, and medical diagnosis. The 3D data required for training is collected using 3D sensors such as LiDAR (Light Detection And Ranging) and depth cameras (e.g., RGB-D cameras). As 3D sensors become more widespread, collecting data for training and capturing data during deployment becomes easier. However, supervised training mostly requires annotated data, and annotating 3D data is time-consuming, tedious, and needs skilled manpower.
3D Annotation Tools
The output of 3D sensors is mostly point clouds: unordered sets of points in 3D space. Annotating those point clouds with 3D bounding boxes is usually performed manually. In the recent past, some automatic labeling tools emerged to annotate data for autonomous driving. These tools exploit the fixed horizontal orientation of vehicles: they always stay on the road, parallel to its surface. Therefore, these automatic labeling tools project the 3D point cloud data onto 2D images, and the vehicle height supplies the third dimension for those 2D images.
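Such a projection can be sketched in a few lines of NumPy. The grid resolution and coordinate ranges below are illustrative assumptions, not values taken from any specific labeling tool:

```python
import numpy as np

def bev_height_map(points, x_range=(0, 40), y_range=(-20, 20), res=0.1):
    """Project a 3D point cloud of shape (N, 3) onto a 2D bird's-eye-view grid.

    Each grid cell keeps the maximum point height, so the vehicle height
    survives as the pixel value -- the 'third dimension' of the 2D image.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Keep only points inside the chosen ground rectangle
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]
    # Discretize the ground-plane coordinates into grid indices
    xi = ((x - x_range[0]) / res).astype(int)
    yi = ((y - y_range[0]) / res).astype(int)
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)
    # Record the maximum height per cell
    np.maximum.at(bev, (xi, yi), z)
    return bev

# Two points in the same cell, one point outside the range
bev = bev_height_map(np.array([[1.0, 0.0, 1.5],
                               [1.0, 0.05, 2.0],
                               [100.0, 0.0, 3.0]]))
print(bev.shape, bev.max())
```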
However, these automatic labeling tools cannot be applied to data from domains other than autonomous driving. They impose heavy restrictions on the data format and offer limited functionality. To this end, German researchers Christoph Sager from Technische Universität Dresden, Patrick Zschech from Friedrich-Alexander-Universität Erlangen-Nürnberg, and Niklas Kühl from Karlsruhe Institute of Technology have developed labelCloud, a domain-independent labeling tool for 3D object detection in 3D point clouds.
Advantages of labelCloud
labelCloud offers several advantages over competing tools such as 3D BAT, LATTE, and SAnE:
- labelCloud supports colorless point cloud formats such as *.bin, *.xyz, and *.xyzn.
- It supports colored point cloud formats such as *.ply, *.pcd, *.pts, and *.xyzrgb.
- It supports labeling in arbitrary orientations, with rotations around the x-, y-, and z-axes.
- The special advantage of labelCloud over its competitors is that it needs no pre-trained model or special configuration to perform annotation.
- It has minimal dependence on other libraries.
- It is designed for ease of use, quick and high-quality annotation, learnability, scalability, and adaptability to task-specific applications.
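The colorless formats above are simple layouts. A minimal NumPy-only reader for the *.xyz layout (one "x y z" triple per line) and a KITTI-style *.bin layout (float32 records of x, y, z, intensity) might look like the sketch below; in practice labelCloud delegates file loading to Open3D, and the *.bin record layout here is an assumption borrowed from the KITTI dataset:

```python
import numpy as np

def read_xyz(path):
    """Read a colorless *.xyz point cloud: one 'x y z' triple per line."""
    return np.loadtxt(path, dtype=np.float64).reshape(-1, 3)

def read_bin(path):
    """Read a KITTI-style *.bin cloud: float32 records of x, y, z, intensity."""
    raw = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
    return raw[:, :3]  # drop the intensity channel, keep coordinates

# Round-trip a tiny cloud through the *.xyz text format
with open("tiny_cloud.xyz", "w") as f:
    f.write("0 0 0\n1.5 2.5 3.5\n")
points = read_xyz("tiny_cloud.xyz")
print(points.shape)
```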
Architecture of labelCloud
labelCloud is written in Python with a modular design. It utilizes the Python libraries NumPy and Open3D for array calculations and point cloud processing, respectively. While labeling, labelCloud draws 3D bounding boxes over point clouds. Each bounding box is defined with 10 parameters in labelCloud: one for the object class and nine degrees of freedom – three for the object location (x, y, z), three for the dimensions (length, width, height), and three for the rotations (roll, pitch, yaw).
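This 10-parameter annotation can be modeled roughly as a dataclass; the class and field names below are illustrative, not labelCloud's actual code:

```python
from dataclasses import dataclass

@dataclass
class BBox3D:
    """One annotation: the object class plus nine degrees of freedom."""
    object_class: str   # parameter 1: the label
    x: float            # parameters 2-4: centroid location
    y: float
    z: float
    length: float       # parameters 5-7: dimensions
    width: float
    height: float
    roll: float         # parameters 8-10: rotations
    pitch: float
    yaw: float

# One annotated box, using the default dimensions from labelCloud's config
box = BBox3D("cart", 1.0, 2.0, 0.5, 0.75, 0.55, 0.15, 0.0, 0.0, 90.0)
print(box.object_class, box.length * box.width * box.height)
```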
labelCloud offers a powerful GUI (Graphical User Interface) for visualizing point clouds. It enables rotation, translation, selection, and other operations using mouse movements, clicks, and keyboard presses. It incorporates the OpenGL library for quick and efficient visualization, and further improves user interaction with text fields and buttons.
Three modules control the activities of the labelCloud tool:
- Point cloud manager
- Drawing manager
- Label manager
The point cloud manager takes care of importing point clouds from different formats and manipulating them. The drawing manager provides the different annotation options and modes. The label manager is responsible for exporting the annotated bounding boxes and their classes in the required format.
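The label manager's export step can be sketched as follows; the JSON schema below is an illustrative assumption for demonstration, not labelCloud's exact export format:

```python
import json

def export_labels(boxes, path):
    """Write each box's class and nine parameters to a JSON label file."""
    records = [
        {
            "class": b["class"],
            "centroid": b["centroid"],      # (x, y, z)
            "dimensions": b["dimensions"],  # (length, width, height)
            "rotations": b["rotations"],    # (roll, pitch, yaw)
        }
        for b in boxes
    ]
    with open(path, "w") as f:
        json.dump({"objects": records}, f, indent=2)

# Export a single annotated box (values are illustrative)
export_labels(
    [{"class": "cart", "centroid": [1.0, 2.0, 0.5],
      "dimensions": [0.75, 0.55, 0.15], "rotations": [0.0, 0.0, 90.0]}],
    "exemplary_labels.json",
)
```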
Major requirements are Python 3.6+, NumPy, OpenGL, and Open3D.
Clone the source code to the local or virtual machine using the following command.
!git clone https://github.com/ch-sa/labelCloud.git
Change directory to the downloaded source files for further processing.
%cd /content/labelCloud/
!ls
Install the dependencies by reading the requirements.txt file.
!pip install -r requirements.txt
Run labelCloud on sample point cloud data stored in the pointclouds directory. Users can opt for their own data in any supported format. An example point cloud, 'exemplary.ply', is provided in the directory.
When the tool is run, the following code sets up and opens the app's GUI:
```python
import sys
from PyQt5 import QtWidgets
from control.controller import Controller
from view.gui import GUI


def get_main_app():
    app = QtWidgets.QApplication(sys.argv)
    # Setup Model-View-Control structure
    control = Controller()
    view = GUI(control)
    # Install event filter to catch user interventions
    app.installEventFilter(view)
    # Start GUI
    view.show()
    return app, view


def run():
    app, _ = get_main_app()
    sys.exit(app.exec_())
```
Users can change the contents of the config.ini file based on their requirements. The default configuration is as follows:
```ini
[FILE]
; Source of point clouds
POINTCLOUD_FOLDER = pointclouds/
; Sink for label files
LABEL_FOLDER = labels/

[POINTCLOUD]
; Drawing size for points in point cloud
POINT_SIZE = 4
; Point color for colorless point clouds (r,g,b)
COLORLESS_COLOR = 0.9, 0.9, 0.9
; Colorize colorless point clouds by height value
COLORLESS_COLORIZE = True
STD_TRANSLATION = 0.03
STD_ZOOM = 0.0025

[LABEL]
LABEL_FORMAT = centroid_abs
OBJECT_CLASSES = cart, box
STD_OBJECT_CLASS = cart
Z_ROTATION_ONLY = True
EXPORT_PRECISION = 8
VIEWING_PRECISION = 3
MIN_BOUNDINGBOX_DIMENSION = 0.01
STD_BOUNDINGBOX_LENGTH = 0.75
STD_BOUNDINGBOX_WIDTH = 0.55
STD_BOUNDINGBOX_HEIGHT = 0.15
STD_TRANSLATION = 0.03
STD_ROTATION = 0.5
STD_SCALING = 0.03

[SETTINGS]
BACKGROUND_COLOR = 100, 100, 100
SHOW_FLOOR = True
SHOW_ORIENTATION = True
```
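Settings in this format can be read with Python's standard configparser module; a minimal sketch, with a fragment of the [LABEL] section embedded so the example runs without a config.ini on disk:

```python
import configparser

# A fragment of labelCloud's default configuration, embedded inline
CFG = """
[LABEL]
LABEL_FORMAT = centroid_abs
OBJECT_CLASSES = cart, box
STD_OBJECT_CLASS = cart
Z_ROTATION_ONLY = True
EXPORT_PRECISION = 8
"""

config = configparser.ConfigParser()
config.read_string(CFG)

# Typed accessors convert the raw strings to the types the app needs
classes = [c.strip() for c in config["LABEL"]["OBJECT_CLASSES"].split(",")]
z_only = config.getboolean("LABEL", "Z_ROTATION_ONLY")
precision = config.getint("LABEL", "EXPORT_PRECISION")
print(classes, z_only, precision)
```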
This article discussed labelCloud, a Python tool for annotating 3D point cloud data for object detection tasks. We have discussed its architecture, its advantages over competing tools, and its code implementation. The research paper is yet to be published at an upcoming conference. With its lightweight implementation, versatile format support, and flexibility, the tool should find wide usage among researchers and practitioners in the object detection domain.