
How to run interactive HPC workloads alongside batch jobs in a single workflow

January 13, 2026

Table of contents

What are service endpoints in HPC workflow orchestration?
Interactive access as part of workflow orchestration
Combine batch and interactive workloads in one workflow
Internal service coordination for complex HPC workflows
Real-time simulation monitoring and visualization
Portable, reproducible workflows with interactive components
Who benefits from interactive HPC workflows?
Getting started with Fuzzball service endpoints
Summary: Fuzzball service endpoints at a glance

Contributors

Jonathon Anderson


Fuzzball service endpoints bring interactive computing to HPC workflow orchestration—run Jupyter notebooks, visualize simulations in real time, and coordinate services as part of your computational workflows.


Running a Jupyter notebook on an HPC cluster has gotten easier over the years. Web portals like Open OnDemand give researchers browser-based access to interactive sessions without manual SSH tunneling. But there's still a gap: interactive computing and batch computing remain separate activities. Your notebook session is one thing; your scheduled workflow is another.

What if your interactive session could be part of the same workflow as your batch jobs? What if a database, an API server, and a visualization tool could all run together, coordinated automatically, in one portable workflow definition?

That's what Fuzzball service endpoints do. Fuzzball operates at the orchestration layer—managing how computations, containers, and data flows coordinate—and service endpoints extend that orchestration to include long-running services and interactive access as first-class workflow components.

What are service endpoints in HPC workflow orchestration?

Traditional HPC workflow orchestration handles batch jobs: step one runs, produces output, step two picks up that output and runs. Service endpoints add a different kind of component—long-running processes that stay up while (or even after) other jobs execute and can communicate with those jobs or with users outside the workflow.

Internal services run alongside your compute jobs. A database, an API server, a parallel compute environment—whatever your workflow needs. Fuzzball handles the lifecycle: brings services up before dependent jobs start, keeps them running as long as needed, tears them down when they're no longer required.

External services give users interactive access to running computations. Launch a Jupyter notebook that connects to your workflow's data. Open a virtual desktop to visualize results as they compute. Access a debugging environment while your simulation runs.

Both modes work within a single workflow definition. The whole thing is portable—run it on-prem, run it in the cloud, run it anywhere Fuzzball runs.
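
To make that concrete, here's a minimal sketch of a definition that pairs a batch simulation with an interactive notebook endpoint. The field names (services, mode, mounts, and so on) are illustrative assumptions, not the actual Fuzzball workflow syntax; the Fuzzball documentation has the real schema.

```yaml
# Illustrative sketch only -- hypothetical field names, not the real Fuzzball schema.
# A batch job and an interactive service share one data volume in a single definition.
volumes:
  scratch: {}

jobs:
  simulate:
    image: docker://example/solver:latest
    command: ["solver", "--output", "/scratch/results"]
    mounts: [scratch]

services:
  notebook:
    image: docker://jupyter/scipy-notebook:latest
    mode: external          # interactive: surfaced as a clickable endpoint in the web UI
    port: 8888
    mounts: [scratch]       # the notebook sees the simulation's intermediate results
```

An internal service (a database, an API server) would be declared the same way; a later sketch shows one.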

Interactive access as part of workflow orchestration

The difference between a web portal and workflow orchestration matters here.

An HPC virtual desktop portal lets you launch an interactive session on cluster resources. That's valuable—it removes friction for users who need GUI access or notebook environments. But the session exists independently. It's not coordinated with your batch workflows. If you want your notebook to access intermediate results from a running simulation, you're managing that coordination yourself.

With Fuzzball service endpoints, interactive access becomes part of the workflow definition itself. Your Jupyter notebook isn't a separate session—it's a service endpoint in the same workflow as your batch jobs. It has access to the same data, the same environment, the same lifecycle management.

When the workflow runs, you click a button in the Fuzzball web interface and connect to your notebook. The orchestration layer handles the networking and access. When your workflow completes, the service endpoint shuts down with everything else.

Combine batch and interactive workloads in one workflow

Here's where service endpoints get interesting for HPC workflow orchestration.

Say you're running a week-long simulation. Traditional approach: submit the job, wait seven days, look at results, discover your parameters were off, submit again.

With service endpoints, you can add a visualization service that lets you inspect the simulation while it runs. Check in on day two. If things look wrong, you catch it early instead of burning five more days of compute.

Or consider a machine learning pipeline. Training is batch work—you want it scheduled efficiently across available resources. Inference is a service—you want it running and accessible. Most platforms handle one or the other well. With service endpoints, you can define both in the same workflow: train (or fine-tune) the model, then serve it, all coordinated automatically.
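
As a rough sketch of that pattern (again with illustrative field names rather than the actual Fuzzball syntax), one definition could chain a GPU training job into a long-running inference service:

```yaml
# Illustrative sketch only -- hypothetical field names, not the real Fuzzball schema.
volumes:
  models: {}                          # checkpoint handoff between training and serving

jobs:
  fine-tune:
    image: docker://example/trainer:latest
    command: ["python", "train.py", "--output", "/models/checkpoint"]
    resources:
      gpus: 4                         # batch step, scheduled like any other job
    mounts: [models]

services:
  inference:
    image: docker://example/server:latest
    command: ["python", "serve.py", "--model", "/models/checkpoint"]
    mode: external                    # reachable by users once it's running
    port: 8000
    requires: [fine-tune]             # the service starts only after training completes
    mounts: [models]
```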

National laboratory teams use this pattern for parallel computing environments. A cluster of nodes runs as a service, exposing an API. Other workflow steps—or researchers connecting interactively—call into that API. The whole architecture lives in one workflow definition.
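
Purely as an illustration of that pattern, here's a sketch with a Dask scheduler standing in for the parallel compute environment. The field names, the convention of reaching a service by its name, and the run_tasks.py client script are assumptions for the example, not documented Fuzzball behavior:

```yaml
# Illustrative sketch only -- hypothetical field names, not the real Fuzzball schema.
services:
  dask-scheduler:
    image: docker://daskdev/dask:latest
    command: ["dask-scheduler", "--port", "8786"]
    mode: internal                    # other steps (or interactive users) call its API
    # worker services omitted for brevity

jobs:
  submit-work:
    image: docker://daskdev/dask:latest
    command: ["python", "run_tasks.py", "--scheduler", "tcp://dask-scheduler:8786"]
    requires: [dask-scheduler]        # the scheduler is up before this step runs
```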

Join us January 28 at 11 AM PT for a live demo of Fuzzball service endpoints, a new capability that unifies training and inference in single, portable workflows.

Internal service coordination for complex HPC workflows

When your workflow needs services talking to each other, manual coordination gets painful fast.

Without service endpoints: make sure the database is running somewhere accessible outside the cluster, configure jobs to find it, handle authentication, manage the lifecycle yourself, hope everything stays in sync when jobs retry or fail.

With service endpoints: define the database as part of your workflow. Fuzzball knows it needs to be running before dependent jobs start. Fuzzball handles the networking so jobs can find each other. Fuzzball tears it down when no longer needed.
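
A rough sketch of that second approach, again with illustrative field names and a service-name-as-hostname convention that are assumptions rather than documented Fuzzball syntax:

```yaml
# Illustrative sketch only -- hypothetical field names, not the real Fuzzball schema.
services:
  metadata-db:
    image: docker://postgres:16
    mode: internal                    # lives only as long as the workflow needs it

jobs:
  load-results:
    image: docker://example/etl:latest
    # The job reaches the database by the service's name; the orchestrator handles the networking.
    command: ["python", "load.py", "--db", "postgresql://metadata-db:5432/results"]
    requires: [metadata-db]           # guarantees the database is up before this job starts
```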

Your workflow definition captures the whole computation—the batch jobs, the services, the dependencies between them—in one portable artifact. No external infrastructure to manage. No manual coordination scripts.

Real-time simulation monitoring and visualization

Researchers running simulations increasingly want to see results as they develop, not just after completion.

Climate researchers running ensemble simulations can connect visualization services to watch how different initial conditions play out across model runs.

Drug discovery teams can monitor screening workflows and prioritize promising candidates before the full batch completes.

Financial analysts running Monte Carlo simulations can watch confidence intervals tighten and decide when results are converged enough, rather than waiting for arbitrary completion thresholds.

In all these cases, service endpoints let you add interactive visibility to existing batch workflows. You're not choosing between batch efficiency and interactive access—you get both.

Portable, reproducible workflows with interactive components

Everything about Fuzzball workflows—including service endpoints—stays portable and reproducible.

Develop a workflow with interactive Jupyter access in your test environment. Deploy the same workflow to production. The service endpoints work identically. No reconfiguration.

For research reproducibility, this matters. When you publish results, other researchers can run your exact workflow, including whatever interactive tools you used during development. The methodology is reproducible, not just the final computation.

Who benefits from interactive HPC workflows?

Service endpoints expand who can effectively use HPC resources.

Data scientists who work primarily in notebooks often structure their work around interactive exploration. Service endpoints let them keep that pattern while accessing cluster-scale compute as part of larger workflows.

Visualization researchers who need real-time rendering can now get that capability as part of a larger computational workflow, not as a separate system to coordinate.

Domain scientists who want to explore results as they emerge—rather than waiting for completion—can do so without managing separate interactive sessions.

HPC administrators benefit too. Instead of treating batch workflows and interactive access as separate concerns, they can offer a unified orchestration layer. Workflows that need both just work.

Getting started with Fuzzball service endpoints

Service endpoints are available in Fuzzball 3.1. If you're already running Fuzzball workflows, you can add services to existing definitions. If you're evaluating HPC workflow orchestration platforms, this capability shows what's possible when batch and interactive computing aren't treated as separate concerns.

The traditional split between batch HPC and interactive computing made sense when the two were genuinely different systems. Modern research doesn't fit that model. Service endpoints let you work the way you actually think about problems—not the way legacy infrastructure forced you to.


Want to see service endpoints in action? Contact CIQ for a demo, or explore the Fuzzball documentation to learn how interactive HPC workflows can fit your environment.


Summary: Fuzzball service endpoints at a glance

Capability | What it does
Interactive services | Launch notebooks and desktops as part of workflows
Internal services | Run databases, APIs, and compute servers with automatic lifecycle management
Real-time monitoring | Visualize and interact with running simulations
Batch + interactive | Combine scheduled compute jobs with interactive services in one workflow
Portable workflows | Same workflow definition runs on-prem or cloud
