
Ten years of Apptainer/Singularity: A look back at the big bang of HPC containers

April 14, 2026

Table of contents

The problem that refused to stay quiet
The big bang of HPC containers
An act of stewardship
What ten years reveals
The question Apptainer made it possible to ask
One long view

Contributors

Gregory Kurtzer, CEO and Founder, CIQ


Previously published in HPC Wire, April 14, 2026.

This year marks the ten-year anniversary of Singularity, the project that became Apptainer, and it is worth pausing to recognize what that decade actually meant.

I have been thinking a lot lately about origins—the moment a problem becomes so pervasive that its solution instantly transforms an entire ecosystem. A notable example is how rapidly and pervasively containers were adopted within research and academia.

Back in the mid 2010s, scientists were beginning to embrace containers (Docker) for their work, but high-performance computing (HPC) systems were built on a fundamentally incompatible architecture, leaving researchers without the portability they desperately needed. Recognizing that a solution built specifically for the rigorous constraints of HPC was essential, I set out to create one.

The response was immediate and universal. This wasn't gradual uptake: an urgently needed solution spread, within months, from zero to virtually the entire ecosystem of national labs and supercomputing centers. That is how containers came to HPC, and how everything that followed became possible.

Before we get to Apptainer, we have to go back to Singularity.

The problem that refused to stay quiet

High-performance computing had a portability problem. Researchers spent weeks configuring software environments locally that could not be reproduced on an HPC resource, let alone moved portably and reproducibly between systems. Experiments could not be replicated. Collaboration stopped at the software boundary instead of the scientific one.

Docker made containers accessible by providing well-understood interfaces along with build and mobility APIs and standards, but Docker was not built for HPC. Privileged daemons, root-owned runtimes, a security model designed for enterprise workloads: none of it belonged on a shared research system, where a single misconfiguration affects thousands of users. HPC administrators faced a binary choice: open the door to Docker and accept the risk of every user effectively having root and circumventing the resource manager, or keep it closed and leave researchers to fight the same environment problems indefinitely.

I was working at Lawrence Berkeley National Laboratory in 2015 when I decided to see if I could prototype a solution. I built Singularity to solve the actual problem: define a software environment once, run it anywhere, without privilege escalation, without a daemon, and without asking a system administrator to trade security for portability.
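That model can be seen in a few commands. The sketch below uses the modern apptainer CLI (the original singularity CLI works the same way) and assumes Apptainer is installed and a remote HPC login host, here called cluster, is reachable over SSH:

```shell
# Pull a Docker Hub image and convert it into a single SIF file.
# No daemon is running, and no root privileges are required.
apptainer pull ubuntu.sif docker://ubuntu:22.04

# Execute a command inside the container. You remain your own user,
# with your normal UID and home directory, so nothing on a shared
# system is exposed to privilege escalation.
apptainer exec ubuntu.sif cat /etc/os-release

# Portability is just file transfer: copy the one .sif file to an
# HPC system ("cluster" is a placeholder host) and run it unchanged.
scp ubuntu.sif cluster:~/
ssh cluster 'apptainer exec ubuntu.sif cat /etc/os-release'
```

The single-file SIF image is the key design choice: the entire environment is one artifact that can be checksummed, archived alongside a paper, and executed anywhere Apptainer runs.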

The solution mattered to everybody.

The big bang of HPC containers

What happened next was not gradual adoption; it was immediate, global uptake. Researchers and system administrators at national laboratories, universities, and supercomputing centers found Singularity and understood immediately the pain point it solved for them. Within months, Singularity ran on some of the most powerful systems in the world: national labs, TOP500 clusters, academic HPC centers serving thousands of researchers across every major scientific discipline.

The impact was concrete. Weeks spent on environment configuration became hours. Multi-institution collaborations that had stalled over software reproducibility found a path forward. Bioinformatics pipelines, molecular dynamics simulations, climate models, particle physics workflows, genomics analysis: all of it ran portably, reproducibly, securely, and at scale.

Singularity was not a container tool adapted for HPC. It was the first container tool built for HPC. That is why it spread the way it did.

An act of stewardship

As Singularity grew, I made a decision to offer the project to the Linux Foundation, giving it a permanent home where it would always be governed by the community it served. It was accepted and renamed Apptainer.

The name change confused some people. It should not have. Renaming the project was an act of love for what it had become. I wanted Apptainer to outlast any single company, contributor, or business decision (which is why Rocky Linux is also not owned by a company, not even mine!). The Linux Foundation provided exactly what the project needed. Apptainer 1.0 shipped in 2022: mature, stable, community-governed, and built to last.

More recently, Apptainer joined the Linux Foundation's High Performance Software Foundation (HPSF), a broader effort to sustain the open source software stack that scientific computing depends on. The foundation under it keeps getting stronger.

What ten years reveals

Apptainer proved something the HPC community knew but could not always articulate. Scientific computing has requirements that legacy infrastructure was not designed to meet. Portability, security, and reproducibility were just the beginning. It was the vantage point needed to see what the future of high-performance computing looks like, and the urgency of getting there.

The question Apptainer made it possible to ask

Containers solved portability and reproducibility, but this was not the whole problem.

As HPC workloads grew more complex and AI entered the picture, a new set of requirements emerged, along with new types of orchestration and meta-orchestration for large-scale heterogeneous compute environments: scheduling and management across distributed infrastructure, lowering the barrier to entry for users, and running resource-consuming services (inference endpoints, Jupyter Notebooks) alongside compute and MPI jobs, with data treated as a tier-1 resource.

This is the problem we built Fuzzball to solve.

Fuzzball carries forward every core commitment Apptainer established: security by design, performance at scale, no compromise between the two. Where Apptainer made it possible to define an environment and run it anywhere, Fuzzball makes it possible to define an entire workflow and run it everywhere: on-premises, at national computing facilities, in the cloud, or across all of them simultaneously.

The researchers who built their science on Singularity understand exactly why this matters as do the system administrators that support them. The next advance in scientific and AI computing is not faster hardware alone. It is a modern computing architecture that provides users with the ability to orchestrate that hardware intelligently, at scale, without surrendering control over data, environment, or security.

Just as 42 is the answer to the ultimate question of life, the universe, and everything, Fuzzball is the answer to the question that a decade of Apptainer made it possible to ask.

One long view

I have spent decades working on software that researchers depend on to do science that matters. Apptainer runs on the fastest machines in the world. The science it enabled spans every major scientific discipline. The governance model I chose for it means it will continue to run for the next decade and the one after that.

What started as a solution to a portability problem became the architecture for reproducible science. What started as a single tool became a community. What started as one answer is now the foundation for the next generation of innovation.

That is a good decade's work. The next one has already started.

Gregory Kurtzer is the CEO and Founder of CIQ and the original creator of Singularity and Apptainer. Join the Apptainer community at https://github.com/apptainer/apptainer

