
# autoware_lidar_transfusion

## Purpose

The autoware_lidar_transfusion package is used for 3D object detection based on lidar data (x, y, z, intensity).

## Inner-workings / Algorithms

The implementation is based on the TransFusion [1] work. It uses the TensorRT library for data processing and network inference.

We trained the models using https://github.com/open-mmlab/mmdetection3d.

## Inputs / Outputs

### Input

| Name                 | Type                            | Description       |
| -------------------- | ------------------------------- | ----------------- |
| `~/input/pointcloud` | `sensor_msgs::msg::PointCloud2` | Input pointcloud. |

### Output

| Name                                    | Type                                             | Description                 |
| --------------------------------------- | ------------------------------------------------ | --------------------------- |
| `~/output/objects`                      | `autoware_perception_msgs::msg::DetectedObjects` | Detected objects.           |
| `debug/cyclic_time_ms`                  | `tier4_debug_msgs::msg::Float64Stamped`          | Cyclic time (ms).           |
| `debug/pipeline_latency_ms`             | `tier4_debug_msgs::msg::Float64Stamped`          | Pipeline latency time (ms). |
| `debug/processing_time/preprocess_ms`   | `tier4_debug_msgs::msg::Float64Stamped`          | Preprocess time (ms).       |
| `debug/processing_time/inference_ms`    | `tier4_debug_msgs::msg::Float64Stamped`          | Inference time (ms).        |
| `debug/processing_time/postprocess_ms`  | `tier4_debug_msgs::msg::Float64Stamped`          | Postprocess time (ms).      |
| `debug/processing_time/total_ms`        | `tier4_debug_msgs::msg::Float64Stamped`          | Total processing time (ms). |
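As a minimal sketch of consuming the output topic (not part of this package), the following rclpy node logs the number of detections per message. The resolved topic name `/lidar_transfusion/output/objects` is a placeholder and depends on your launch remapping of `~/output/objects`; it assumes the Python bindings of `autoware_perception_msgs` are available in your workspace.

```python
# Sketch: count detections published by the TransFusion node.
# The topic name below is a placeholder; adjust it to your remapping of ~/output/objects.
import rclpy
from rclpy.node import Node
from autoware_perception_msgs.msg import DetectedObjects


class DetectionCounter(Node):
    def __init__(self):
        super().__init__("detection_counter")
        self.create_subscription(
            DetectedObjects, "/lidar_transfusion/output/objects", self.on_objects, 10
        )

    def on_objects(self, msg: DetectedObjects) -> None:
        # Each DetectedObject carries classification, kinematics and shape.
        self.get_logger().info(f"received {len(msg.objects)} detected objects")


def main():
    rclpy.init()
    rclpy.spin(DetectionCounter())


if __name__ == "__main__":
    main()
```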

## Parameters

### TransFusion node

| Name                          | Type    | Description                                                                                              | Default                                | Range            |
| ----------------------------- | ------- | -------------------------------------------------------------------------------------------------------- | -------------------------------------- | ---------------- |
| trt_precision                 | string  | Precision of the TensorRT engine.                                                                          | fp16                                   | ['fp16', 'fp32'] |
| cloud_capacity                | integer | Capacity of the point cloud buffer (should be set to at least the maximum theoretical number of points).  | 2000000                                | ≥1               |
| onnx_path                     | string  | Path to the ONNX model file.                                                                               | $(var model_path)/transfusion.onnx     | N/A              |
| engine_path                   | string  | Path to the TensorRT engine file.                                                                          | $(var model_path)/transfusion.engine   | N/A              |
| densification_num_past_frames | integer | Number of past frames to be considered as the same input frame.                                           | 1                                      | ≥0               |
| densification_world_frame_id  | string  | Name of the frame id with respect to which the world coordinate system is defined.                        | map                                    | N/A              |
| circle_nms_dist_threshold     | float   | Distance threshold between detections in NMS.                                                              | 0.5                                    | ≥0.0             |
| iou_nms_target_class_names    | array   | Array of class names to be targeted in NMS.                                                                | ['CAR']                                | N/A              |
| iou_nms_search_distance_2d    | float   | Maximum distance to search for the nearest objects.                                                        | 10.0                                   | ≥0.0             |
| iou_nms_threshold             | float   | Threshold of NMS using the IoU score.                                                                      | 0.1                                    | ≥0.0, ≤1.0       |
| yaw_norm_thresholds           | array   | Per-class thresholds on the norm of the predicted direction vector; objects with a norm below the class threshold are ignored. | [0.3, 0.3, 0.3, 0.3, 0.0]              | N/A              |
| score_threshold               | float   | Confidence score threshold; objects with a score below this threshold are ignored.                        | 0.2                                    | ≥0.0             |
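To illustrate how score_threshold and yaw_norm_thresholds interact, here is a simplified sketch of the proposal filtering they control, assuming each raw proposal carries a confidence score, a class label index (in the class_names order below), and a predicted (cos, sin) yaw direction vector. This is only an illustration; the package's actual post-processing is implemented in C++.

```python
import math

# Simplified sketch: a proposal is kept only if its confidence score and the norm of
# its predicted yaw direction vector (cos, sin) are above the corresponding thresholds.
def keep_proposal(score, label, yaw_cos, yaw_sin,
                  score_threshold=0.2,
                  yaw_norm_thresholds=(0.3, 0.3, 0.3, 0.3, 0.0)):
    yaw_norm = math.hypot(yaw_cos, yaw_sin)
    return score >= score_threshold and yaw_norm >= yaw_norm_thresholds[label]


# A pedestrian proposal (label 4) with a weak direction estimate is kept because its
# class threshold is 0.0, while a car proposal (label 0) with the same norm is rejected.
print(keep_proposal(0.5, 4, 0.1, 0.1))  # True
print(keep_proposal(0.5, 0, 0.1, 0.1))  # False
```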

### TransFusion model

| Name              | Type    | Description                                                      | Default                                          | Range |
| ----------------- | ------- | ---------------------------------------------------------------- | ------------------------------------------------ | ----- |
| class_names       | array   | Array of class names to be predicted.                            | ['CAR', 'TRUCK', 'BUS', 'BICYCLE', 'PEDESTRIAN'] | N/A   |
| voxels_num        | array   | Maximum number of voxels [min, opt, max].                        | [5000, 30000, 60000]                             | N/A   |
| point_cloud_range | array   | Detection range [x_min, y_min, z_min, x_max, y_max, z_max].      | [-76.8, -76.8, -3.0, 76.8, 76.8, 5.0]            | N/A   |
| voxel_size        | array   | Voxel size [x, y, z].                                            | [0.3, 0.3, 8.0]                                  | N/A   |
| num_proposals     | integer | Number of proposals at the TransHead.                            | 500                                              | ≥1    |
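The relationship between point_cloud_range, voxel_size, and the resulting voxel grid can be checked with a small calculation. The sketch below only reproduces the arithmetic from the default values above; it is not code from this package.

```python
# Sketch: derive the voxel grid dimensions from the default parameters above.
point_cloud_range = [-76.8, -76.8, -3.0, 76.8, 76.8, 5.0]  # [x_min, y_min, z_min, x_max, y_max, z_max]
voxel_size = [0.3, 0.3, 8.0]                                # [x, y, z]

grid = [
    round((point_cloud_range[i + 3] - point_cloud_range[i]) / voxel_size[i])
    for i in range(3)
]
print(grid)  # [512, 512, 1] -> a 512 x 512 BEV grid with a single voxel along z
```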

### Detection class remapper

| Name                           | Type  | Description                                                                                                 | Default | Range |
| ------------------------------ | ----- | ----------------------------------------------------------------------------------------------------------- | ------- | ----- |
| allow_remapping_by_area_matrix | array | Whether to allow remapping of classes. The order of the 8x8 matrix classes comes from the ObjectClassification msg. | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] | N/A |
| min_area_matrix                | array | Minimum area for a specific class to consider class remapping.                                               | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.1, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] | N/A |
| max_area_matrix                | array | Maximum area for a specific class to consider class remapping.                                               | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 36.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 999.999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] | N/A |
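As a sketch of how the three matrices can be read together (an illustration under the assumption that remapping is driven purely by the object's footprint area, not the package's actual implementation): each matrix is 8x8, flattened row-major and indexed as [original_class, remapped_class] in the ObjectClassification label order.

```python
# Sketch: an object of class `from_label` is remapped to `to_label` when remapping
# between the two classes is allowed and its footprint area falls inside
# [min_area, max_area]. Matrices are 8x8, flattened row-major.
NUM_CLASSES = 8

def remap_label(from_label, area, allow, min_area, max_area):
    for to_label in range(NUM_CLASSES):
        idx = from_label * NUM_CLASSES + to_label
        if allow[idx] and min_area[idx] <= area <= max_area[idx]:
            return to_label
    return from_label  # no rule matched: keep the original class
```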

### The `build_only` option

The autoware_lidar_transfusion node has a `build_only` option to build the TensorRT engine file from the ONNX file. Although it is preferred to move all ROS parameters to the `.param.yaml` file in Autoware Universe, the `build_only` option has not been moved to the `.param.yaml` file for now, because it may be used as a flag to execute the build as a pre-task. You can execute it with the following command:

ros2 launch autoware_lidar_transfusion lidar_transfusion.launch.xml build_only:=true

### The `log_level` option

The default logging severity level for autoware_lidar_transfusion is `info`. For debugging purposes, the developer may decrease the severity level using the `log_level` launch parameter:

ros2 launch autoware_lidar_transfusion lidar_transfusion.launch.xml log_level:=debug

## Assumptions / Known limits

This library operates on raw cloud data (bytes). It is assumed that the input pointcloud message has the following format:

[
  sensor_msgs.msg.PointField(name='x', offset=0, datatype=7, count=1),
  sensor_msgs.msg.PointField(name='y', offset=4, datatype=7, count=1),
  sensor_msgs.msg.PointField(name='z', offset=8, datatype=7, count=1),
  sensor_msgs.msg.PointField(name='intensity', offset=12, datatype=2, count=1)
]

The input may contain other fields as well; the format shown is the required minimum. For debugging purposes, you can validate your pointcloud topic using this simple command:

ros2 topic echo <input_topic> --field fields
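Alternatively, the expected field layout can be checked programmatically. The sketch below (not part of this package) reads at most one message from an input topic and verifies the minimal x/y/z/intensity layout; the topic name is a placeholder.

```python
# Sketch: verify that a PointCloud2 topic provides the minimal field layout expected
# by autoware_lidar_transfusion. The topic name is a placeholder; adjust it as needed.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2, PointField

EXPECTED = {
    "x": (0, PointField.FLOAT32),
    "y": (4, PointField.FLOAT32),
    "z": (8, PointField.FLOAT32),
    "intensity": (12, PointField.UINT8),
}


class FieldChecker(Node):
    def __init__(self):
        super().__init__("field_checker")
        self.create_subscription(
            PointCloud2, "/sensing/lidar/concatenated/pointcloud", self.on_cloud, 1
        )

    def on_cloud(self, msg: PointCloud2) -> None:
        fields = {f.name: (f.offset, f.datatype) for f in msg.fields}
        ok = all(fields.get(name) == spec for name, spec in EXPECTED.items())
        self.get_logger().info("layout OK" if ok else f"unexpected layout: {fields}")


def main():
    rclpy.init()
    node = FieldChecker()
    rclpy.spin_once(node, timeout_sec=5.0)  # process at most one message
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```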

## Trained Models

You can download the ONNX format of the trained models by clicking on the links below.

The model was trained on TIER IV's internal database (~11k lidar frames) for 50 epochs.

## Changelog

## (Optional) Error detection and handling

## (Optional) Performance characterization

## References/External links

[1] Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu and Chiew-Lan Tai. "TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers." arXiv preprint arXiv:2203.11496 (2022).

[2] https://github.com/wep21/CUDA-TransFusion

[3] https://github.com/open-mmlab/mmdetection3d

[4] https://github.com/open-mmlab/OpenPCDet

[5] https://www.nuscenes.org/nuscenes

## (Optional) Future extensions / Unimplemented parts