
fuzzball run: Immediate access to Fuzzball compute

April 1, 2026

Table of contents

Run commands, scripts, or interactive shells from one flag
Request exactly the CPU, memory, and GPUs you need
Three ways to get data into the job without a custom image
Use any container image with composable flags
Skip the submit-and-wait loop with --dry-run
From exploration to production workflow without context-switching

Contributors

Jonathon Anderson


Before Fuzzball v3.2, running nvidia-smi on a GPU (graphics processing unit) node required 16 lines of YAML and five separate CLI commands. Fuzzball workflows handle multi-stage pipelines with dependencies, resource requirements, volumes, and scheduling constraints in a declarative YAML file, but not everything you need to do warrants a workflow file. Sometimes you want to run a single command on a GPU node. Sometimes you want an interactive shell in a specific container environment to test something. And sometimes you're iterating on a script and just need to run it again with different parameters.

Before v3.2, every interaction with Fuzzball compute went through the workflow machinery. Even running nvidia-smi on a GPU node meant writing a Fuzzfile:

version: v1
jobs:
  run:
    image:
      uri: docker://nvidia/cuda:12.0-base
    resource:
      cpu:
        cores: 1
      memory:
        size: 1GiB
      devices:
        "nvidia.com/gpu": 1
    policy:
      timeout:
        execute: 8h
    command: ["/usr/bin/nvidia-smi"]

Then validating, submitting, polling for status, pulling logs, and cleaning up:

fuzzball workflow validate nvidia-smi.yaml
fuzzball workflow start nvidia-smi.yaml -n my-gpu-test
fuzzball workflow describe <id> -w
fuzzball workflow log <id> run -f
fuzzball workflow stop <id>

fuzzball run collapses all of that into one line:

$ fuzzball run --device nvidia.com/gpu=1 --image docker://nvidia/cuda:12.0-base -- /usr/bin/nvidia-smi

It generates the workflow YAML, submits it, streams the output, and cleans up when the job finishes: one command, same result.

Run commands, scripts, or interactive shells from one flag

fuzzball run works in three ways depending on what you give it.

Interactive shell. With no arguments, it drops you into a shell on a Fuzzball-managed node:

$ fuzzball run
Workflow "7fa8bd3b-5ca6-4671-ab22-ad8e8ab96f08" started.
Waiting for job to start...
(fuzzball:/)$

The default image is Alpine, but you can specify anything. If you want a bash session in a Rocky Linux environment:

$ fuzzball run --image docker://rockylinux:9 --tty -- /bin/bash

When you exit the shell, the workflow stops and resources are released.

Direct command execution. Arguments after -- become the command:

$ fuzzball run --image docker://python:3.11-slim --cores 8 --file ./train.py:train.py -- /usr/local/bin/python train.py arg1 arg2

Output streams to your terminal in real time, and the workflow cleans up when the command exits.

Script execution. The --script flag copies a local script into the job and runs it:

$ fuzzball run --cores 8 --device nvidia.com/gpu=1 --script ./build.sh

If the script has a shebang line, it's respected, so a #!/usr/bin/env python3 script runs under Python, not sh. If there's no shebang, /bin/sh is used as the default.
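As a purely local illustration of that dispatch (demo.py is a hypothetical script, and this sketch runs on your own machine, not through Fuzzball), a shebang routes execution to the named interpreter:

```shell
# Hypothetical script; the shebang names python3, so executing the
# file runs it under Python rather than /bin/sh.
cat > demo.py <<'EOF'
#!/usr/bin/env python3
import sys
print("running under python", sys.version_info.major)
EOF
chmod +x demo.py
./demo.py
```

Passing the same file to --script would apply the same interpreter selection inside the job.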

Request exactly the CPU, memory, and GPUs you need

By default, fuzzball run allocates 1 CPU core and 1 GiB (gibibyte) of memory. Override with --cores and --mem:

$ fuzzball run --image docker://gcc:14 --cores 16 --mem 32GiB -- /usr/bin/make -j16

Device requests use --device with a <device-id>=<count> format. Request a GPU:

$ fuzzball run --device nvidia.com/gpu=1 --image docker://nvidia/cuda:12.0-base -- /usr/bin/nvidia-smi

Request multiple devices; the flag is repeatable:

$ fuzzball run --device nvidia.com/gpu=2 --device amd.com/gpu=1 --script ./multi-device-test

For federated Fuzzball deployments, --cluster-id routes the job to a specific cluster.
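For example, to route the same GPU check to a particular cluster (the cluster ID here is a placeholder):

$ fuzzball run --cluster-id <cluster-id> --device nvidia.com/gpu=1 --image docker://nvidia/cuda:12.0-base -- /usr/bin/nvidia-smi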

Three ways to get data into the job without a custom image

File injection, volumes, and stdin each solve a different problem.

File injection copies a local file into the job at a specified path. This is the fastest way to get a script or config file into an environment without building a custom image:

$ fuzzball run --image docker://python:3.11-slim \
    --file ./train.py:train.py \
    --file ./config.json:/etc/config.json \
    -- /usr/local/bin/python train.py

The files are read locally and embedded inline in the generated workflow; no external storage needed.

Volumes mount persistent Fuzzball storage into the job. If your training data lives in a Fuzzball volume, mount it directly:

$ fuzzball run --image docker://python:3.11-slim --volume user/datasets:/data --file ./train.py:train.py --cores 8 -- /usr/local/bin/python train.py --data-dir /data

Or mount a workspace and get an interactive session with your code and data available:

$ fuzzball run --volume user/nfs/code:/workspace --env HOME=/workspace --tty

Already managing data in Fuzzball volumes? See the Fuzzball documentation for volume configuration and access patterns.

Stdin piping is detected automatically. If you pipe data into fuzzball run, it's injected into the job and piped to the command:

$ cat data.txt | fuzzball run -- /usr/bin/wc -l
$ echo "hello world" | fuzzball run -- /usr/bin/tr '[:lower:]' '[:upper:]'

Use any container image with composable flags

Set environment variables with --env:

$ fuzzball run --image docker://python:3.11-slim --env WORKERS=4 --env CUDA_VISIBLE_DEVICES=0 --file ./train.py:train.py -- /usr/local/bin/python train.py

The --image flag accepts any container image URI. The default is docker://docker.io/alpine:latest, but in practice you'll often specify the image you need:

$ fuzzball run --image docker://nvcr.io/nvidia/pytorch:24.01-py3 \
    --device nvidia.com/gpu=1 \
    --file ./model.py:model.py \
    -- /usr/bin/python model.py

These flags compose naturally. A realistic iterative development session might look like: mount your data volume, inject the script you're working on, request a GPU, and run it, all in one command. When it finishes or you kill it, everything cleans up.
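Reusing the image, volume, and script names from the examples above, such a session might be a single invocation:

$ fuzzball run --image docker://nvcr.io/nvidia/pytorch:24.01-py3 \
    --device nvidia.com/gpu=1 \
    --cores 8 \
    --volume user/datasets:/data \
    --file ./train.py:train.py \
    -- /usr/bin/python train.py --data-dir /data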

Skip the submit-and-wait loop with --dry-run

If you want to see the workflow YAML that fuzzball run would generate without actually submitting it, use --dry-run:

$ fuzzball run --dry-run --image docker://python:3.11-slim --cores 8 --device nvidia.com/gpu=1 --file ./train.py:train.py -- /usr/local/bin/python train.py

This prints the generated Fuzzfile to stdout. It's the same workflow you'd have written by hand; fuzzball run builds it from flags. Use --dry-run to understand exactly what's being submitted, or as a starting point when a one-liner graduates to a proper workflow file: run it with --dry-run, save the output, and customize from there.
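For example, to capture that starting point in a file (the output filename is illustrative), redirect stdout:

$ fuzzball run --dry-run --image docker://python:3.11-slim --cores 8 --file ./train.py:train.py -- /usr/local/bin/python train.py > train-workflow.yaml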

From exploration to production workflow without context-switching

Where fuzzball run has changed how I work with Fuzzball the most is workflow development. Building a new workflow usually involves a lot of "does this image have the right libraries," "can the job see this volume," "what does the GPU topology look like on these nodes" — questions that are faster to answer interactively than by writing a workflow, submitting it, waiting, and reading logs.

With fuzzball run, the loop is tight. Get a shell in the image you're planning to use:

$ fuzzball run --image docker://nvcr.io/nvidia/pytorch:24.01-py3 --device nvidia.com/gpu=1 --tty -- /bin/bash

Poke around. Check what's installed, verify the CUDA version, test your mount paths. When you're satisfied, inject your script and run it:

$ fuzzball run --image docker://nvcr.io/nvidia/pytorch:24.01-py3 \
    --device nvidia.com/gpu=1 \
    --file ./train.py:train.py \
    --volume user/datasets:/data \
    -- /usr/bin/python train.py --data-dir /data --epochs 1

If it works, add --dry-run to get the generated workflow YAML, save it, and build your full workflow from there. The path from "exploring" to "production workflow" doesn't require context-switching between interactive testing and YAML authoring until you're ready.

fuzzball run is available in Fuzzball v3.2 and later.

