
Running Jupyter notebooks on HPC clusters without SSH tunneling

January 27, 2026

Table of contents

Why Jupyter on HPC is harder than it should be
Option 1: Web portals like Open OnDemand
Option 2: JupyterHub on the cluster
Option 3: Workflow-integrated notebooks with Fuzzball
What to consider when choosing an approach
Fuzzball Service Endpoints: more than notebook access


Every HPC center has the same support ticket: "How do I run Jupyter on the cluster?"

The answer usually involves SSH tunneling, port forwarding, and a series of steps that look something like this:

  1. SSH to the login node
  2. Request an interactive session through Slurm or PBS
  3. Note which compute node you landed on
  4. Start Jupyter with specific IP and port flags
  5. Open a new terminal on your local machine
  6. Set up an SSH tunnel to the compute node through the login node
  7. Open your browser and navigate to localhost with the right port and token

If your connection drops, start over. If you forget which node you're on, start over. If the port is already in use, pick a new one and start over.
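In shell terms, the ritual looks roughly like this. Hostnames, ports, resource flags, and the Slurm commands are placeholders; your site's scheduler and naming will differ:

```shell
# Steps 1-4: on the cluster
ssh user@login.cluster.example.edu
srun --pty --time=02:00:00 --mem=8G bash    # interactive session via Slurm
hostname                                     # note the compute node, e.g. node042
jupyter lab --no-browser --ip=0.0.0.0 --port=8888

# Steps 5-6: in a second terminal on your laptop,
# forward a local port to the compute node through the login node
ssh -N -L 8888:node042:8888 user@login.cluster.example.edu

# Step 7: open http://localhost:8888/?token=... in your browser,
# using the token Jupyter printed on the compute node
```

Every placeholder above is a potential failure point: the node name changes between sessions, the port may collide with someone else's tunnel, and the tunnel dies with your laptop's network connection.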

There are better approaches.

Why Jupyter on HPC is harder than it should be

The core problem is architectural. Jupyter expects to run on the machine where your browser is. HPC clusters expect you to submit jobs and wait for results. These two models don't naturally fit together.

Compute nodes typically don't have public IP addresses. They sit behind login nodes and aren't directly reachable from your laptop. So you need to create a tunnel: a path from your browser through the login node to the compute node where Jupyter is actually running.

SSH tunneling works, but it's fragile, and it requires users to understand networking concepts that have nothing to do with their research. It also creates a support burden: every new user needs to learn the ritual, and experienced users still hit edge cases.

Option 1: Web portals like Open OnDemand

Many HPC centers have deployed Open OnDemand or similar web portals. These handle the tunneling complexity behind the scenes, giving users a browser-based interface to launch interactive sessions.

You log into the portal, select "Jupyter" from a menu, specify resources, and wait for your session to start. When it's ready, you click a button and get a notebook in your browser. No SSH commands, no port forwarding.

This approach works well for standalone interactive sessions. The limitation is that the session exists independently of your batch workflows. If you want your notebook to access data from a running simulation, or if you want to launch a notebook as part of a larger computational pipeline, you're back to manual coordination.

Option 2: JupyterHub on the cluster

Some centers run JupyterHub, which provides multi-user notebook access with authentication and resource management. Users log in through a web interface, and JupyterHub spawns notebook servers on their behalf.

This centralizes the complexity. Administrators configure the spawner once, and users get a simpler experience. But like web portals, JupyterHub sessions are standalone. They're not integrated with workflow orchestration.
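As a sketch of what that centralization looks like: with the batchspawner package, a few lines of `jupyterhub_config.py` tell JupyterHub to launch each user's notebook server as a Slurm job. The partition name and resource values below are illustrative, not defaults:

```shell
# Generate a minimal jupyterhub_config.py that spawns notebook servers
# as Slurm jobs via batchspawner (partition/memory/runtime are illustrative)
cat > jupyterhub_config.py <<'EOF'
c = get_config()  # provided by JupyterHub at startup
c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'
c.SlurmSpawner.req_partition = 'interactive'
c.SlurmSpawner.req_memory = '8G'
c.SlurmSpawner.req_runtime = '2:00:00'
EOF
echo "wrote jupyterhub_config.py"
```

An administrator then starts the hub with `jupyterhub -f jupyterhub_config.py`, and users never see the scheduler at all: they log in through the browser and JupyterHub submits the batch job on their behalf.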

Option 3: Workflow-integrated notebooks with Fuzzball

A different approach treats notebook access as part of workflow orchestration rather than as a separate system.

In this model, you define a workflow that includes a Jupyter service. When the workflow runs, the orchestration platform launches Jupyter as one component among others. You connect through a web interface (no SSH tunnel required) and the notebook has access to the same data and environment as the rest of your workflow.

This matters when your notebook isn't standalone. If you're exploring intermediate results from a simulation, debugging a multi-step pipeline, or doing interactive analysis that feeds into subsequent batch processing, having the notebook inside the workflow simplifies everything.

Fuzzball Service Endpoints work this way. You add a service endpoint to your workflow definition, specifying that you want Jupyter (or any other interactive service) available. When the workflow runs, Fuzzball handles the networking and access. You click a button in the web UI and connect directly, with no tunneling or port management.

Want to see Fuzzball in action? Check out our webinar Sovereign AI and interactive HPC: unifying training, inference, and exploration in one workflow.

What to consider when choosing an approach

Who are your users?

If most users want standalone notebook sessions for exploratory work, a web portal or JupyterHub might be sufficient. If users need notebooks connected to larger computational workflows, look for orchestration-level solutions.

How important is workflow integration?

For researchers who run simulations and then analyze results in notebooks, having both in the same workflow eliminates manual data movement and coordination. The notebook sees the simulation output directly.

What's your support capacity?

SSH tunneling is technically functional but generates support tickets. Portals and orchestration platforms move that complexity away from end users. The question is whether you manage it centrally (portal/JupyterHub) or eliminate it through architecture (workflow-integrated services).

Do you need reproducibility?

When the notebook environment is defined as part of a workflow, it becomes reproducible. Other researchers can run the same workflow and get the same notebook setup. This matters for published research and collaborative projects.

Does your data need to stay on your infrastructure?

For organizations with data sovereignty requirements in regulated industries, government, or competitive R&D, workflow-integrated notebooks keep everything local. Your notebooks, data, and compute run where you control them, not routed through external services.

Fuzzball Service Endpoints: more than notebook access

The trend is toward offloading infrastructure complexity from researchers. They want to run a notebook on powerful hardware without learning networking. Every layer of abstraction that achieves this represents progress.

Workflow-integrated notebooks are the newest approach. Instead of treating interactive access as separate from batch computing, Fuzzball Service Endpoints unify both in a single orchestration model. Define your workflow once (batch jobs, interactive services, data dependencies) and Fuzzball handles the execution and connectivity.

This extends beyond Jupyter. Service endpoints support any interactive service: virtual desktops for visualization, debugging environments connected to running simulations, or custom web interfaces for domain-specific tools. The same mechanism that eliminates SSH tunneling for notebooks also enables real-time inspection of long-running jobs, interactive parameter adjustment mid-simulation, and browser-based access to results as they're generated.

For organizations with data sovereignty requirements, everything stays on infrastructure you control. No external services routing your data. No cloud dependencies for basic interactive access. The workflow runs where you need it—on-prem, in your cloud, or hybrid—with the same notebook connectivity regardless of location.

The infrastructure should support the research, not the other way around.

Looking for Jupyter access without SSH headaches? Fuzzball Service Endpoints let you add notebooks to workflows and connect through a browser. Learn how it works or request a demo.

