
Open AD Kit: containerized workloads for Autoware

Open AD Kit offers two types of Docker images to let you get started with Autoware quickly: devel and runtime.

  1. The devel image enables you to develop Autoware without setting up the local development environment.
  2. The runtime image contains only runtime executables and enables you to try out Autoware quickly.

Info

Before proceeding, confirm and agree with the NVIDIA Deep Learning Container license. By pulling and using the Autoware Open AD Kit images, you accept the terms and conditions of the license.

Prerequisites

  • Docker
  • NVIDIA Container Toolkit (preferred)
  • NVIDIA CUDA 12 compatible GPU Driver (preferred)
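
If you want to sanity-check these prerequisites before running the setup script, the following commands should all succeed (the NVIDIA ones only apply to GPU setups; nvidia-ctk ships with the NVIDIA Container Toolkit):

docker --version      # Docker is installed and on PATH
nvidia-ctk --version  # NVIDIA Container Toolkit is installed (GPU setups only)
nvidia-smi            # NVIDIA GPU driver is loaded (GPU setups only)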

The setup script will install all required dependencies:

./setup-dev-env.sh -y docker

To install without NVIDIA GPU support:

./setup-dev-env.sh -y --no-nvidia docker

Info

GPU acceleration is required for some features such as object detection and traffic light detection/classification. For details on how to enable these features without a GPU, refer to the Running Autoware without CUDA documentation.

Usage

Runtime setup

You can use run.sh to run the Autoware runtime container with the map data:

./docker/run.sh --map-path path_to_map_data

For more launch options, you can append a custom launch command instead of using the default ros2 launch autoware_launch autoware.launch.xml:

./docker/run.sh --map-path path_to_map_data ros2 launch autoware_launch autoware.launch.xml map_path:=/autoware_map vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit

Info

You can use --no-nvidia to run without NVIDIA GPU support, and --headless to run without a display (i.e. no RViz visualization).
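
For example, to run the runtime container on a machine without an NVIDIA GPU or display, the two options can be combined with the runtime command:

./docker/run.sh --map-path path_to_map_data --no-nvidia --headless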

Run the Autoware tutorials

Inside the container, you can run the Autoware tutorials by following these links:

  • Planning Simulation
  • Rosbag Replay Simulation

Development setup

You can use run.sh with the --devel option to start the Autoware development container:

./docker/run.sh --devel

Info

By default, the current directory is mounted into the container as the workspace. You can change the workspace path with --workspace path_to_workspace. For development environments without NVIDIA GPU support, use --no-nvidia.
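
For example, to start a development container for a workspace checked out elsewhere, on a machine without an NVIDIA GPU, the options above can be combined:

./docker/run.sh --devel --workspace path_to_workspace --no-nvidia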

How to set up a workspace

  1. Create the src directory and clone repositories into it.

    mkdir src
    vcs import src < autoware.repos
    
  2. Update dependent ROS packages.

    Autoware's dependencies may change after the Docker image was created. In that case, run the following commands to update them.

    sudo apt update
    rosdep update
    rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO
    
  3. Build the workspace.

    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
    

    If there is any build issue, refer to Troubleshooting.
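
After a successful build, the workspace typically needs to be sourced before launching Autoware (standard colcon/ROS 2 workflow; run it from the workspace root inside the container):

source install/setup.bash  # make the freshly built packages available in this shell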

To update the workspace:

cd autoware
git pull
vcs import src < autoware.repos
vcs pull src
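
After pulling the latest sources, the dependency and build steps from the previous section usually need to be repeated so that the workspace stays consistent:

rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release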

Using VS Code remote containers for development

Using Visual Studio Code with the Remote - Containers extension, you can easily develop Autoware in a containerized environment.

Install Visual Studio Code's Remote - Containers extension, then reopen the workspace in the container by selecting Remote-Containers: Reopen in Container from the Command Palette (F1).
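
If you prefer the command line, the extension can also be installed with the code CLI (assuming VS Code's code command is on your PATH):

code --install-extension ms-vscode-remote.remote-containers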

You can choose either the Autoware image (without CUDA support) or the Autoware-cuda image (with CUDA support) for development.

Building Docker images from scratch

If you want to build these images locally for development purposes, run the following command:

cd autoware/
./docker/build.sh

To build without CUDA, use the --no-cuda option:

./docker/build.sh --no-cuda

To build only the development image, use the --devel-only option:

./docker/build.sh --devel-only

To specify the platform, use the --platform option:

./docker/build.sh --platform linux/amd64
./docker/build.sh --platform linux/arm64
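
The build options can be combined; for example, the following (an assumed combination of the flags above, not a documented one) builds only the development image without CUDA for an arm64 target:

./docker/build.sh --devel-only --no-cuda --platform linux/arm64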

Using Docker images other than latest

There are also images versioned based on the date or release tag.
Use them when you need a fixed version of the image.

The list of versions can be found here.
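
As a minimal sketch, assuming the images are published to the GitHub Container Registry under ghcr.io/autowarefoundation/autoware, pulling a pinned version would look like this:

docker pull ghcr.io/autowarefoundation/autoware:<version>  # <version> is a placeholder; use an actual date or release tag from the registry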