Using Docker#
Oumi provides pre-built Docker images that include all necessary dependencies for training, evaluation, and inference. Using Docker eliminates installation complexity and ensures consistent environments across different systems.
Available Images#
Oumi Docker images are published to the GitHub Container Registry at: ghcr.io/oumi-ai/oumi
Platform Support#
Oumi Docker images support multiple architectures:
| Platform | Architecture | GPU Support | PyTorch Version | Image Size |
|---|---|---|---|---|
| AMD64 | x86_64 (Intel/AMD) | CUDA 12.8 | PyTorch 2.8.0 with CUDA | ~14GB |
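To confirm which architectures a published tag actually provides, you can query the registry manifest directly. This is standard Docker tooling rather than anything Oumi-specific; on older Docker versions the manifest command may require enabling experimental CLI features, and docker buildx imagetools inspect is an equivalent alternative:
# List the architectures published for the latest tag
docker manifest inspect ghcr.io/oumi-ai/oumi:latest | grep architecture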
Supported Operating Systems:
✅ Linux (AMD64/ARM64) - Native containers
✅ macOS (Intel/Apple Silicon) - via Docker Desktop
✅ Windows (Intel/AMD) - via Docker Desktop + WSL2
Note
Apple Silicon Users: If you need GPU-compatible images for development or testing, use the --platform linux/amd64 flag when pulling or running images:
docker pull --platform linux/amd64 ghcr.io/oumi-ai/oumi:latest
docker run --platform linux/amd64 -it --rm ghcr.io/oumi-ai/oumi:latest bash
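To double-check which architecture a locally pulled image was built for, you can inspect it with standard Docker tooling:
# Prints amd64 or arm64 for the local copy of the image
docker image inspect --format '{{.Architecture}}' ghcr.io/oumi-ai/oumi:latest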
Quick Start#
Pull the Image#
docker pull ghcr.io/oumi-ai/oumi:latest
Verify Installation#
docker run --rm ghcr.io/oumi-ai/oumi:latest oumi --help
Interactive Shell#
Launch an interactive container to explore Oumi:
docker run -it --rm ghcr.io/oumi-ai/oumi:latest bash
Once inside, you can run any Oumi command:
oumi env # Check environment info
oumi --help # View available commands
Using NVIDIA GPUs#
docker run --gpus all -it --rm ghcr.io/oumi-ai/oumi:latest bash
The --gpus all flag makes all available GPUs accessible to the container.
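If you want to expose only a subset of GPUs (for example, to keep other devices free for another job), Docker's standard device syntax works as well; note the nested quoting, which Docker requires when listing specific devices:
# Expose only GPUs 0 and 1 to the container
docker run --gpus '"device=0,1"' -it --rm ghcr.io/oumi-ai/oumi:latest bash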
Verify GPU Access#
Inside the container, verify GPU access:
nvidia-smi
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
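For a slightly more detailed check, you can also confirm how many devices PyTorch sees and what they are; these are plain PyTorch calls, nothing Oumi-specific:
# Count and name the visible CUDA devices
python -c "import torch; print(torch.cuda.device_count()); print([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())])"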
Working with Data and Models#
To persist data, models, and outputs, mount volumes from your host machine into the container.
docker run -it --rm \
-v $(pwd)/data:/oumi_workdir/data \
-v $(pwd)/outputs:/oumi_workdir/outputs \
-v ~/.cache/huggingface:/home/oumi/.cache/huggingface \
ghcr.io/oumi-ai/oumi:latest bash
This command:
Mounts ./data from your host to /oumi_workdir/data in the container
Mounts ./outputs from your host to /oumi_workdir/outputs in the container
The HuggingFace cache mount (~/.cache/huggingface) prevents re-downloading models each time you run a container
Any changes to these directories persist after the container exits
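With those mounts in place you can also run Oumi commands non-interactively. A minimal sketch, assuming you have placed a training config at ./data/train.yaml (a placeholder path) and that your config writes its outputs under /oumi_workdir/outputs:
# Run a training job against a mounted config (drop --gpus all on CPU-only machines)
docker run --gpus all --rm \
  -v $(pwd)/data:/oumi_workdir/data \
  -v $(pwd)/outputs:/oumi_workdir/outputs \
  -v ~/.cache/huggingface:/home/oumi/.cache/huggingface \
  ghcr.io/oumi-ai/oumi:latest \
  oumi train -c /oumi_workdir/data/train.yaml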
Building Custom Images#
If you need custom dependencies or configurations, you can build your own image:
1. Clone the Oumi repository:

git clone https://github.com/oumi-ai/oumi.git
cd oumi

2. Modify the Dockerfile as needed

3. Build the image:

docker build -t my-oumi:latest .

4. Use your custom image:

docker run -it --rm --gpus all my-oumi:latest bash
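As an alternative to editing the repository Dockerfile, you can layer extra dependencies on top of the published image. A minimal sketch, where some-extra-package is a placeholder and pip is assumed to be available in the base image:
# Write a small Dockerfile that extends the official image
cat > Dockerfile.custom <<'EOF'
FROM ghcr.io/oumi-ai/oumi:latest
RUN pip install --no-cache-dir some-extra-package
EOF

# Build and tag the customized image
docker build -f Dockerfile.custom -t my-oumi:latest .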
Environment Variables#
You can pass environment variables to configure Oumi behavior:
docker run -it --rm \
--gpus all \
-e WANDB_API_KEY=your_wandb_key \
-e HF_TOKEN=your_hf_token \
-e OUMI_LOG_LEVEL=DEBUG \
ghcr.io/oumi-ai/oumi:latest bash
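To keep API keys out of your shell history, you can also load them from a file with Docker's --env-file option:
# .env contains one KEY=value pair per line, e.g. WANDB_API_KEY=... and HF_TOKEN=...
docker run -it --rm --gpus all --env-file .env ghcr.io/oumi-ai/oumi:latest bash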
Next Steps#
See the quickstart guide for training examples
Learn about training configuration
Explore evaluation and inference guides
Check out remote training for cloud deployments