Docker installation#
Autoware publishes prebuilt multi-arch (amd64, arm64) Docker images on GHCR. There are runtime images for trying things out, devel images for building locally, and CUDA variants for GPU workloads.
The full image catalog — twelve images organized as a build graph — is documented in the canonical Docker reference next to the Dockerfiles in the autoware repository. Bookmark these sections, since they track the implementation:
- docker/README.md → Image Graph: visual diagram of how the images depend on each other.
- docker/README.md → Images: table describing each image and when to use it.
- docker/README.md → Pull from GHCR: tag pattern (`<stage>-<ros_distro>[-<date>|-<version>]`) and `docker pull` examples for every variant.
The two images most users will care about:
- `ghcr.io/autowarefoundation/autoware:universe-cuda-jazzy`: full Autoware runtime with NVIDIA CUDA, cuDNN, and TensorRT bundled.
- `ghcr.io/autowarefoundation/autoware:universe-jazzy`: full Autoware runtime, no GPU.

Replace `jazzy` with `humble` for ROS 2 Humble.
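As a quick sketch, the two images above can be pulled like this (the image names come from the list above; the distro variable is just plain shell):

```shell
# Pick the ROS 2 distribution: jazzy or humble
ROS_DISTRO=jazzy

# CUDA runtime image (pulling it implies accepting the
# NVIDIA Deep Learning Container license)
CUDA_IMAGE="ghcr.io/autowarefoundation/autoware:universe-cuda-${ROS_DISTRO}"
# CPU-only runtime image
CPU_IMAGE="ghcr.io/autowarefoundation/autoware:universe-${ROS_DISTRO}"

echo "CUDA image:     ${CUDA_IMAGE}"
echo "CPU-only image: ${CPU_IMAGE}"

# Uncomment to actually pull (multi-GB download):
# docker pull "${CUDA_IMAGE}"
```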
For the broader containerized-deployment story (deployment patterns, integrations, edge use cases), see Open AD Kit.
Info
Before proceeding, review and agree to the NVIDIA Deep Learning Container license. By pulling and using Autoware's CUDA images, you accept the terms and conditions of the license.
Prerequisites#
- Docker
- NVIDIA Container Toolkit (preferred)
- NVIDIA CUDA 12 compatible GPU Driver (preferred)
- Clone `autowarefoundation/autoware` and move to the directory.

    ```shell
    git clone https://github.com/autowarefoundation/autoware.git
    cd autoware
    ```
- Install Ansible and run the Docker setup playbook:

    ```shell
    bash ansible/scripts/install-ansible.sh
    ansible-galaxy collection install -f -r ansible-galaxy-requirements.yaml
    ansible-playbook autoware.dev_env.install_docker
    ```

    To install without NVIDIA GPU support:

    ```shell
    ansible-playbook autoware.dev_env.install_docker --skip-tags nvidia
    ```

    To download only the artifacts:

    ```shell
    ansible-playbook autoware.dev_env.install_dev_env --tags artifacts
    ```
Info
GPU acceleration is required for some features such as object detection and traffic light detection/classification. For details on how to enable these features without a GPU, refer to Running Autoware without CUDA.
Quick Start#
Launching the runtime container#
The runtime image runs `ros2 launch autoware_launch autoware.launch.xml` on container start. The canonical `docker run` invocation, with map and data volumes, X11 forwarding, CUDA passthrough, and a flag-by-flag rationale table, is in docker/README.md → Usage. The same section covers the no-GPU variant (drop the NVIDIA-related flags) and how to override the default CycloneDDS config when you need ROS 2 nodes to reach across hosts.
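As an illustrative sketch only (the canonical, fully annotated command lives in docker/README.md → Usage; the host paths below are assumptions, not requirements), a GPU-enabled launch might look like:

```shell
# Sketch of the runtime invocation; see docker/README.md -> Usage for the
# authoritative command. Host paths below are illustrative assumptions.
IMAGE="ghcr.io/autowarefoundation/autoware:universe-cuda-jazzy"
MAP_DIR="${HOME}/autoware_map"     # map files on the host (assumed location)
DATA_DIR="${HOME}/autoware_data"   # artifacts such as ML models (assumed)

# Only launch when docker exists and we are on an interactive terminal
if command -v docker >/dev/null 2>&1 && [ -t 0 ]; then
  docker run --rm -it --gpus all \
    -e DISPLAY="${DISPLAY}" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "${MAP_DIR}:/autoware_map:ro" \
    -v "${DATA_DIR}:/root/autoware_data" \
    "${IMAGE}"   # drop --gpus all for the CPU-only universe-jazzy image
fi
```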
Pre-configured demo scenarios#
Rather than crafting your own docker run command, the docker/examples/demos/ folder ships ready-to-run Compose stacks. Each demo has its own README with prerequisites and run commands:
- planning-simulator: Planning simulator with the sample map, vehicle, and sensor kit. Three rendering paths via Compose overlays: software rendering by default, `docker-compose.dri.yaml` for Intel/AMD/Nouveau hosts, `docker-compose.nvidia.yaml` for the NVIDIA proprietary driver.
- awsim: Bridges Autoware to the AWSIM Unity simulator over `network_mode: host` and launches `e2e_simulator.launch.xml`. Requires an NVIDIA GPU plus the Container Toolkit.
- scenario-simulator: Runs a `scenario_simulator_v2` scenario against a live Autoware planning stack as two services that share a generated CycloneDDS config.
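As a sketch, bringing up one of these demos follows the usual Compose flow (the compose file name and overlay invocation below are assumptions; each demo's own README is authoritative):

```shell
DEMO=docker/examples/demos/planning-simulator

# Run the demo stack (file names are assumptions; check the demo's README)
if command -v docker >/dev/null 2>&1 && [ -t 0 ]; then
  docker compose -f "${DEMO}/docker-compose.yaml" up
  # For hardware rendering, layer an overlay on top, e.g.:
  # docker compose -f "${DEMO}/docker-compose.yaml" \
  #                -f "${DEMO}/docker-compose.dri.yaml" up
fi
```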
Running Autoware tutorials#
Inside the container, run the Autoware tutorials by following these links:
Deployment#
Open AD Kit provides several deployment options for Autoware, so that you can easily deploy it across different platforms and scenarios. Refer to the Open AD Kit documentation for more details.
Development#
For developing against Autoware (rather than just running it), the docker/examples/basic/ folder ships three Compose files — each a "drop me into a shell" container built on the universe-devel-* images, with ~/autoware_data (containing maps/ and ml_models/) and the autoware source tree mounted. Pick the flavor that matches your host:
| Host GPU / driver | Compose file |
|---|---|
| NVIDIA + proprietary driver | dev-nvidia.compose.yaml (recommended) |
| NVIDIA + Nouveau open driver | dev-dri.compose.yaml |
| Intel / AMD | dev-dri.compose.yaml |
| No GPU / headless | dev-cpu.compose.yaml (software rendering) |
From the autoware repo root:
```shell
xhost +local:docker
HOST_UID=$(id -u) HOST_GID=$(id -g) \
  docker compose -f docker/examples/basic/dev-nvidia.compose.yaml run --rm autoware
```
docker/examples/basic/README.md goes deeper: how to verify you actually got hardware acceleration (`glxinfo -B`), why `dev-dri` silently falls back to software rendering on the NVIDIA proprietary driver, and how to attach a second terminal to a running dev container.
How to set up a workspace#
- Create the `src` directory and clone repositories into it.

    ```shell
    mkdir -p src
    vcs import src < repositories/autoware.repos
    ```

    If you are an active developer, you may also want to pull the nightly repositories, which contain the latest updates:

    ```shell
    vcs import src < repositories/autoware-nightly.repos
    ```

    ⚠️ Note: The nightly repositories are unstable and may contain bugs. Use them with caution.

    Optionally, you may also download the extra repositories that contain drivers for specific hardware, but they are not necessary for building and running Autoware:

    ```shell
    vcs import src < repositories/extra-packages.repos
    ```

    ⚠️ You might need to install the dependencies of the extra packages manually.

    ➡️ Check the readme of the extra packages for more information.
- Update dependent ROS packages.

    The dependencies of Autoware may have changed after the Docker image was created. In that case, run the following commands to update the dependencies:

    ```shell
    # Make sure all ros-$ROS_DISTRO-* packages are upgraded to their latest version
    sudo apt update && sudo apt upgrade
    rosdep update
    rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO
    ```
- Build the workspace.

    ```shell
    colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release
    ```

    If there is any build issue, refer to Troubleshooting.
Update the workspace#
```shell
cd autoware
git pull
vcs import src < repositories/autoware.repos
# If you are using nightly repositories, also run the following command:
vcs import src < repositories/autoware-nightly.repos
vcs pull src
# Make sure all ros-$ROS_DISTRO-* packages are upgraded to their latest version
sudo apt update && sudo apt upgrade
rosdep update
rosdep install -y --from-paths src --ignore-src --rosdistro $ROS_DISTRO
```
It might be the case that dependencies imported via `vcs import` have been moved or removed. `vcs2l` does not currently handle those cases, so if builds fail after `vcs import`, cleaning and re-importing all dependencies may be necessary:
```shell
rm -rf src/*
vcs import src < repositories/autoware.repos
# If you are using nightly repositories, import them as well.
vcs import src < repositories/autoware-nightly.repos
```
Using VS Code remote containers for development#
Using Visual Studio Code with the Remote - Containers extension, you can easily develop Autoware in a containerized environment.

Install Visual Studio Code's Remote - Containers extension, then reopen the workspace in the container by selecting Remote-Containers: Reopen in Container from the Command Palette (F1).

You can choose the `universe-devel-jazzy` or `universe-devel-cuda-jazzy` image to develop without or with CUDA support, respectively.
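If you prefer the command line, the extension can also be installed with the `code` CLI (this assumes `code` is on your PATH; the marketplace id below is the extension's published identifier):

```shell
# Install the Remote - Containers extension via the VS Code CLI
EXTENSION=ms-vscode-remote.remote-containers
if command -v code >/dev/null 2>&1; then
  code --install-extension "${EXTENSION}"
fi
```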
Building Docker images from scratch#
The build pipeline uses docker buildx bake driven by docker/docker-bake.hcl. Building any target beyond base requires the autoware source repositories checked out under src/:
```shell
cd autoware/
vcs import src < repositories/autoware.repos
docker buildx bake -f docker/docker-bake.hcl
```
That builds the default targets (`universe` and `universe-cuda`); dependencies in the image graph are resolved automatically. To target a specific stage (e.g. `core-devel`, `base-cuda-runtime`), pass it as an argument; to build for ROS 2 Humble, prefix the command with `ROS_DISTRO=humble`. See docker/README.md → Build locally for the full target list and multi-arch flow.
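For example, building a single stage for Humble might look like this (a sketch; the target name is taken from the stage list above):

```shell
TARGET=core-devel   # any stage name from the image graph

# Build only that stage, for ROS 2 Humble
if command -v docker >/dev/null 2>&1 && [ -t 0 ]; then
  ROS_DISTRO=humble docker buildx bake -f docker/docker-bake.hcl "${TARGET}"
fi
```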
Pinned date- and release-tagged versions of every image are published on GHCR; use them when you need a fixed version of the image.
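Following the tag pattern `<stage>-<ros_distro>[-<date>|-<version>]`, a pinned pull might look like this (the date below is a hypothetical placeholder; look up real tags on GHCR before pulling):

```shell
# Hypothetical date tag; substitute a real tag from GHCR
DATE_TAG=20250101
PINNED_IMAGE="ghcr.io/autowarefoundation/autoware:universe-jazzy-${DATE_TAG}"

if command -v docker >/dev/null 2>&1 && [ -t 0 ]; then
  docker pull "${PINNED_IMAGE}"
fi
```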