Security Features of Apptainer vs. Rootless Podman: Part 3

Dave Godlove | October 23, 2023

This is the final blog post in a 3-part series that compares and contrasts the security features of Apptainer and Rootless Podman. Part 3 focuses on signed containers, encrypted containers, and miscellaneous topics.

At this point in our container security miniseries, we’ve provided some background on Apptainer and Podman, and we’ve done a pretty deep dive into User Namespaces and their implicit and explicit use by these container platforms. Now it’s time for us to cover a few more security-related features and miscellaneous topics.

Cryptographically signing and verifying containers

In the container universe, we often run containers created by other people from public registries like Docker Hub, and we typically (almost always?) build our own containers using someone else’s container as a starting point.

So how do you know that the software in these containers is in good shape? Do you know and trust the author(s) of the containers that you are using? And how do you know that the containers you are using have not been altered in some way (either accidentally or maliciously) since the original author(s) pushed them up to Docker Hub (or whatever registry you are using)? Furthermore, how do you know that the copy you download matches the container on the remote registry? In other words, how do you know that there are no errors in the copy, and that no "man-in-the-middle" attack has inserted malicious code into your previously trusted container during the download?

All of these concerns fall under the heading of software supply chain. One way you can help secure your software supply chain is to use a sign/verify workflow. The basic idea is this. Container authors cryptographically sign their containers using private key material. In addition to the private key material, the signing process relies on the container contents, so the container can’t be altered without voiding the signature. When you consume a container, you use public key material to verify the signature, thereby validating 1) the identity of the container author(s) and 2) the integrity of the container itself.
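The generic mechanics can be sketched with openssl. This is purely an illustration of the sign/verify idea, not how Apptainer actually does it (Apptainer uses PGP key material and the SIF format):

```shell
# Illustration only: the generic sign/verify idea using openssl.
printf 'pretend container contents\n' > image.bin

# Author: generate a keypair and sign the contents with the PRIVATE key.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem
openssl dgst -sha256 -sign priv.pem -out image.sig image.bin

# Consumer: verify with the PUBLIC key. Prints "Verified OK".
openssl dgst -sha256 -verify pub.pem -signature image.sig image.bin

# Because the signature covers the contents, any alteration voids it.
printf 'malicious payload\n' >> image.bin
openssl dgst -sha256 -verify pub.pem -signature image.sig image.bin || echo "signature voided"
```

Note how verification accomplishes both goals at once: the public key ties the signature to a specific author, and the digest ties it to specific contents.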

As we go through the details below, it is important to note that anybody can sign a container. So it is not enough to simply check whether a container is signed or not. You need to manually compare the fingerprint to that obtained from the author to verify it was signed by an author that you know and trust.

Apptainer: containers are signed using SIF and verified via a key store

The signing and verification workflow in Apptainer looks like this:

  • A container author generates a new keypair, builds or obtains a container, and uses the sign command to add a cryptographic signature to the SIF file.

  • The container author (optionally) pushes the public key material to a public key store. Based on the email address that they added to the key, they will receive a message from the key store asking them to verify that they pushed the key. Completing this step makes the key material publicly downloadable so anyone can use it to verify a container.

  • The container author (optionally) pushes the container to some registry (using the ORAS protocol if the registry is OCI-based).

  • If the author pushed the public key material to the open key store, a downstream user who obtains the container can use the verify command to check that the SIF file signature is valid and to display the fingerprint showing who signed the container.

  • The final user can compare to a fingerprint obtained from the container author (out of band) to make sure that the container is not only signed, but signed by the expected author.
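That final comparison boils down to a simple string check (the values below are illustrative placeholders for whatever the author actually publishes):

```shell
# Compare the fingerprint reported by `apptainer verify` against the one
# the author published out of band (both values here are illustrative).
published="B7761495F83E6BF7686CA5F0C1A7D02200787921"   # e.g., from the author's web page
reported="B7761495F83E6BF7686CA5F0C1A7D02200787921"    # from `apptainer verify` output
if [ "$published" = "$reported" ]; then
    echo "fingerprint matches"
else
    echo "FINGERPRINT MISMATCH: do not trust this container"
fi
```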

The first several steps of this workflow are illustrated below:

[demouser@demobox ~]$ apptainer key newpair
Enter your name (e.g., John Doe) : Demo User
Enter your email address (e.g., :
Enter optional comment (e.g., development keys) : this is a pretend key
Enter a passphrase : 
Retype your passphrase : 
Generating Entity and OpenPGP Key Pair... done

[demouser@demobox ~]$ apptainer key list
Public key listing (/home/demouser/.apptainer/keys/pgp-public):

0) U: Demo User (this is a pretend key) <>
   C: 2023-09-14 11:57:29 -0600 MDT
   F: 9C130BDDF2A72D535A3D228794E474706996FA82
   L: 4096

[demouser@demobox ~]$ apptainer sign demo.sif 
INFO:    Signing image with PGP key material
Enter key passphrase : 
INFO:    Signature created and applied to image 'demo.sif'

[demouser@demobox ~]$ apptainer key push 9C130BDDF2A72D535A3D228794E474706996FA82
INFO:    Key server response: Upload successful. This is a new key, a welcome email has been sent.
public key `9C130BDDF2A72D535A3D228794E474706996FA82' pushed to server successfully

So here our demouser created a new key pair, used the list command to show that the key exists, signed a container with the new key, and pushed the public key material to the key store. At this point, the key store has sent an email to the address associated with the key (which of course is not a real email address). In a real situation, the user would check their email and follow the steps to verify that they pushed the key to the key store.

The remaining steps can also be demonstrated by our demouser with the help of a container that I signed and pushed to Docker Hub some time ago.

[demouser@demobox ~]$ apptainer pull signed-rocky.sif oras://
INFO:    Downloading oras image

[demouser@demobox ~]$ ls -lh signed-rocky.sif 
-rwxr-xr-x. 1 demouser demouser 103M Sep 14 12:23 signed-rocky.sif

[demouser@demobox ~]$ apptainer verify signed-rocky.sif 
INFO:    Verifying image with PGP key material
[REMOTE]  Signing entity: David Godlove (production key) <>
[REMOTE]  Fingerprint: B7761495F83E6BF7686CA5F0C1A7D02200787921
Objects verified:
1   |1       |NONE    |Def.FILE
2   |1       |NONE    |JSON.Generic
3   |1       |NONE    |FS
INFO:    Verified signature(s) from image 'signed-rocky.sif'

The demouser does not have the public key material saved locally, but that’s OK because Apptainer can get it automatically from the default key store. Note that the fingerprint is B7761495F83E6BF7686CA5F0C1A7D02200787921. Now demouser has to 1) go look at Docker Hub where I’ve posted the fingerprint and verify that they match up, and 2) use their best judgment to decide whether or not they trust this squirrelly David Godlove character. 😈

It’s also possible to bake this verification workflow directly into your container-building process by using the Fingerprints keyword in the definition file. Observe:

[demouser@demobox ~]$ cat build-from-signed.def 
Bootstrap: oras
Fingerprints: B7761495F83E6BF7686CA5F0C1A7D02200787921

%post
    echo "only get here if the fingerprint matches"

[demouser@demobox ~]$ apptainer build build-from-signed.sif build-from-signed.def 
WARNING: 'nodev' mount option set on /tmp, it could be a source of failure during build process
INFO:    Starting build...
INFO:    Using cached SIF image
INFO:    Checking bootstrap image verifies with fingerprint(s): [B7761495F83E6BF7686CA5F0C1A7D02200787921]
INFO:    Running post scriptlet
+ echo 'only get here if the fingerprint matches'
only get here if the fingerprint matches
INFO:    Creating SIF file...
INFO:    Build complete: build-from-signed.sif

Of course, if the fingerprint doesn’t match, Apptainer won’t build the container. Watch what happens when I change the last several characters of the fingerprint to x and try to rebuild.

[demouser@demobox ~]$ apptainer build build-from-signed.sif build-from-signed.def 
Build target 'build-from-signed.sif' already exists and will be deleted during the build process. Do you want to continue? [N/y] y
WARNING: 'nodev' mount option set on /tmp, it could be a source of failure during build process
INFO:    Starting build...
INFO:    Using cached SIF image
INFO:    Checking bootstrap image verifies with fingerprint(s): [B7761495F83E6BF7686CA5F0C1A7D022xxxxxxxx]
FATAL:   While performing build: conveyor failed to get: while checking fingerprint: image not signed by required entities

Note that the keyword Fingerprints in the def file above is plural. This is not a mistake. A single SIF file can have a bunch of different signatures associated with it and you can check one, several, or all of them using this workflow.

The examples above verify on a per-container basis. You can also verify containers at the system level using an Execution Control List (ECL). I won’t replicate the documentation here, but the basic idea is that you stick a list of fingerprints in a configuration file and configure Apptainer so that it will reject any container that does not have signatures with all of the fingerprints. As discussed in the previous post, this feature cannot be used in conjunction with the User Namespace because User Namespaces allow unprivileged users to install their own version of Apptainer that can defeat the global ECL configuration.

Podman: signatures are stored and checked using registries

The basic idea and the benefits of signing and verifying containers with Podman are the same as those detailed above. In a nutshell, it can help you secure your software supply chain. But the workflow and underlying implementation are quite different from Apptainer's. There are a few different tutorials out there for Podman signing and verifying.

First, the tooling is different. The basic Apptainer installation provides all the necessary bits and pieces for you to sign and verify containers. Podman relies on some external tooling to get this job done. Depending on whether you are using the GPG workflow or the sigstore workflow (Podman >= v4.2), you may need to set up a separate server (a "lookaside" server) to manage signatures. Or you may need to install tooling like skopeo (which is part of the container-tools module) to create keys.

Second, the signature storing is different. In the case of Apptainer, we just stick one or more signatures into the SIF file itself, and wherever the container goes, the signature(s) go(es) with it. But Podman uses the OCI container specification, which states that a container consists of a bundle of tarballs along with a manifest that are used to create the container at runtime. How do you store a signature in this model? To solve this problem, Podman uses either a separate lookaside server to store GPG keys, or the OCI registry itself to store sigstore signatures. Once it’s configured, the signing process is part of the push to a registry.

Third, public key material is not kept in a public key store. Instead, the user must ensure that the public key material for any signed container is saved locally on the machine where the container will run.

Fourth, you must configure your target system to check for signatures on containers before it runs them. So you can't really check signatures on a per-container basis like you can with Apptainer; everything is configured at the system level for all containers. You also can't configure your system to check for multiple signatures (or at least I could not do so using the sigstore method). So it seems like Podman signature checking implements pretty much the same functionality as the Apptainer ECL, confined to a single fingerprint.

Here is a quick demo of signing and verifying a container using the sigstore method. You can get more info at the links above.

[demouser@demobox ~]$ skopeo generate-sigstore-key --output-prefix myKey
Passphrase for key myKey.private: 
Key written to "myKey.private" and ""

[demouser@demobox ~]$ ll myKey.*
-rw-------. 1 demouser demouser 649 Sep 15 12:10 myKey.private
-rw-r--r--. 1 demouser demouser 178 Sep 15 12:10

[demouser@demobox ~]$ tail -n 3 /etc/containers/registries.d/default.yaml 
        use-sigstore-attachments: true
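For reference, a minimal registries.d stanza enabling sigstore attachments for our local registry might look like the following. This is a sketch; the registry name and file layout depend on your setup:

```yaml
# /etc/containers/registries.d/default.yaml (fragment)
docker:
  localhost:5000:
    use-sigstore-attachments: true
```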

[demouser@demobox ~]$ podman tag alpine localhost:5000/alpine

[demouser@demobox ~]$ podman push --sign-by-sigstore-private-key=./myKey.private \
    --tls-verify=false localhost:5000/alpine
Key Passphrase: 
Getting image source signatures
Copying blob 4693057ce236 done  
Copying config 7e01a0d0a1 done  
Writing manifest to image destination
Creating signature: Signing image using a sigstore signature
Storing signatures

Note the --sign-by-sigstore-private-key option that is part of the push command. Using the sigstore workflow we can just store the signature in the OCI registry instead of setting up a separate lookaside server.

Now to use the signature, we need to provide some additional configuration on the system.

[demouser@demobox ~]$ grep -A 5 localhost ~/.config/containers/policy.json 
            "localhost:5000": [
            "type": "sigstoreSigned",
            "keyPath": "/home/demouser/"

Now we can remove the alpine image and re-pull it.

[demouser@demobox ~]$ podman rmi localhost:5000/alpine:latest 
Untagged: localhost:5000/alpine:latest
Deleted: 7e01a0d0a1dcd9e539f8e9bbd80106d59efbdf97293b3d38f5d7a34501526cdb

[demouser@demobox ~]$ podman pull --tls-verify=false localhost:5000/alpine
Trying to pull localhost:5000/alpine:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob 97d7b294855e skipped: already exists  
Copying config 7e01a0d0a1 done  
Writing manifest to image destination
Storing signatures

Note that if we change the path to the public key to /home/demouser/bogus in the policy.json and try to remove the container and download it again, we get the following:

[demouser@demobox ~]$ podman rmi localhost:5000/alpine:latest 
Untagged: localhost:5000/alpine:latest
Deleted: 7e01a0d0a1dcd9e539f8e9bbd80106d59efbdf97293b3d38f5d7a34501526cdb

[demouser@demobox ~]$ podman pull --tls-verify=false localhost:5000/alpine
Trying to pull localhost:5000/alpine:latest...
Error: Source image rejected: open /home/demouser/bogus: no such file or directory

And, of course, if we generate another key, update the policy.json, and try again, it fails as expected.

[demouser@demobox ~]$ podman pull --tls-verify=false localhost:5000/alpine
Trying to pull localhost:5000/alpine:latest...
Error: Source image rejected: cryptographic signature verification failed: invalid signature when validating ASN.1 encoded signature

Encrypted containers

Issues around the software supply chain can be complicated and nuanced, and the benefits of signing and verifying containers may be difficult for new users to grasp. In contrast, the benefits of encrypting your containers are pretty straightforward. Once you encrypt a container, it is no longer possible for others to see its contents without providing the appropriate secret. This means you can build a container that has secret tokens, passwords, proprietary software or data, etc., and you can store the container publicly and move/run it without worrying.

It should be noted that container encryption does not completely solve the issue of keeping data secret within containers. The container is ultimately decrypted (on disk using Podman, or in memory using Apptainer) when it is run, and other users (especially those with admin rights) will be able to see the contents at some point during the container lifecycle. This will be covered in more detail in the sections below.

Sometimes people confuse container encryption with the cryptographic signing and verification procedure detailed in the previous section. I find that it is useful to remember the following distinction: Cryptographic signing is intended to be carried out by a single entity and verification is intended to be done by anyone. So you sign the container with the private half of the key and verification happens with the public material. Container encryption proceeds in the opposite direction. You encrypt the container with the public key material, but only one user should be able to access the container contents via the private key material.
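Again using openssl purely to illustrate the direction of key use (this is not the mechanism Apptainer or Podman actually employs):

```shell
# Signing: PRIVATE key signs, PUBLIC key verifies.
# Encryption runs the other way: PUBLIC key encrypts, PRIVATE key decrypts.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem

printf 'secret token\n' > secret.txt

# Anyone holding the public key can produce the ciphertext...
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in secret.txt -out secret.enc

# ...but only the private key holder can recover the plaintext.
openssl pkeyutl -decrypt -inkey priv.pem -in secret.enc   # prints: secret token
```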

Apptainer: the SIF file itself is encrypted

As with container signing, Apptainer takes advantage of the SIF file format to implement encryption. Once you have generated public and private key material in PEM file format, you can use the public key material to encrypt the image by passing the --pem-path option to the build command. Because Apptainer allows you to build containers directly from other containers without needing a definition file, you can simply use the following syntax to encrypt an unencrypted container.

[demouser@demobox ~]$ apptainer build --pem-path directory/rsa_pub.pem \
    encrypted.sif unencrypted.sif

Of course, you can encrypt containers as you build them from definition files or by building them from URIs (like docker:// and oras://) too.

When you are ready to run your encrypted container, the --pem-path option is reused with the shell, run, exec, or instance command in combination with the private key material to decrypt the container file system into memory in a new Mount Namespace.

Because Apptainer uses the SIF file format to encrypt containers, the entire operation is pretty simple. You are just encrypting a file. You can push and pull the encrypted container to an OCI registry like Docker Hub using the ORAS protocol. This is just uploading/downloading an encrypted file from a cloud storage location. The container is never decrypted on disk, so it can be stored on a file system in shared space. However, a decrypted copy of the file system does exist in a new Mount Namespace in memory while containerized processes are running. This makes it possible for a user with admin privileges to view the contents of your encrypted container while it is running. It’s not really possible to prevent this type of privileged access without specialized hardware.

Podman: images are encrypted leveraging the registry (and decrypted on disk)

Similar to the example of signing and verifying containers cited above, Podman relies on an OCI registry to encrypt containers. The basic idea is that you create a public and private key (stored in PEM files) and use the public key to encrypt the container as part of the push command via the --encryption-key argument. Then you can use the --decryption-key option with the pull command when you want to retrieve the container. There is a good blog post/tutorial on encrypting containers with Podman here.

The downside of this workflow is that your containers are encrypted in the registry and in transit, but they are decrypted on your machine. This differs from the Apptainer workflow in which containers are only decrypted within a new mount namespace in memory. The practical result is that unprivileged users on a multi-tenant system may be able to obtain data from an encrypted container if they can access the layers in the disk cache.

Podman has a command group called secret that can be used to store sensitive material. The gist of this command is that it creates an image file containing your secret(s). The file is encrypted on disk and then decrypted and used as an overlay on your container at runtime. This workflow moves one step in the direction of the Apptainer solution, since it encrypts a file. But the encrypted file is separate from the container itself and is meant to be created and stored on your local machine. It is not pushed to an OCI registry with the normal push command.


Apptainer doesn’t provide unprivileged network interfaces

In the Cloud Native space, where OCI containers are used extensively, networking is crucial. But in HPC, networking is rarely a concern, and the host network is usually sufficient. Because of its emphasis on HPC, Apptainer lacks unprivileged support for network operations (even when running in suid mode). You can enter a new Network Namespace without any special privilege in Apptainer simply by passing the --net flag. But all this really does is break your container's network. To do anything useful, you must specify interfaces to bring up inside the container with the --network option in combination with configuration files in the Apptainer installation tree. Under the hood, Apptainer uses the Container Network Interface (CNI) package to set up network interfaces, perform port-mapping, etc. You can learn more about networking with Apptainer here.

Because of Podman's emphasis on networking in containers, rootless Podman uses slirp4netns to provide unprivileged network support. slirp4netns allows unprivileged users to utilize most (but not all) of the same networking features through the Network Namespace that they would be able to use with privileges, albeit at some cost in performance.

Podman has (User Namespace related) trouble with network file systems

The Podman documentation states that rootless Podman:

Does not work on NFS or parallel filesystem homedirs (e.g. GPFS)

This is evidently because NFS already has to juggle file permissions between the server and client side and the User Namespace UID mapping is too confusing. This is not an issue for Apptainer because it has the ability to map the UID on the host to the same UID within the container without relying on the User Namespace.

Interaction with SELinux

In doing some of the research for this article, I had trouble using Podman on a system where SELinux was enabled. In particular, I was unable to mount a volume from the host into the container in rootless mode and then create a new file within the shared volume. After a few unsuccessful online searches and a hint from Cedric Clerget, I found that Podman confines container processes to a specific SELinux context that prevents them from writing to the host file system. This pithy blog post describes the issue and the solution. To summarize, the :z and :Z options will set the SELinux context on mounted volumes to allow any container and one specific container write access respectively.

By default, Apptainer does not set any special SELinux context for containerized processes, but you can control that with the --security="selinux:context" option/argument pair. For instance:

$ apptainer shell --security="selinux:system_u:system_r:container_t:s0" container.sif 

Apptainer> touch foo 
touch: cannot touch 'foo': Permission denied

So this is another instance where Podman and Apptainer have the same security-related feature, but opposite defaults.


Summary

Here is a quick summary of our compare-and-contrast effort across this 3-part security miniseries.

  • History: Apptainer provided a mechanism for unprivileged users to run containers before the widespread adoption of User Namespaces; Podman was created as a daemonless and rootless drop-in replacement for Docker.

  • Implicit use of User Namespace: Apptainer can provide unprivileged access to containers with or without User Namespaces; Podman requires the User Namespace for unprivileged access to containers.

  • Explicit use of User Namespace: Apptainer duplicates the user's UID/GIDs from the host via config files and can leverage the User Namespace to spoof root or another UID; Podman uses the User Namespace to spoof root in the container by default and can map the user's UID/GIDs into the container through the User Namespace on demand.

  • Unprivileged installation: Apptainer's is supported by the community through a convenience script; Podman's is theoretically possible but unsupported.

  • Signed containers: Apptainer saves signatures in the SIF file, and containers can be verified on an individual or system-wide basis using any number of signatures; Podman manages signatures via a separate server or OCI registry, and containers are verified on a system-wide basis using a single signature.

  • Encrypted containers: Apptainer encrypts the SIF file itself, so containers are encrypted in registries, in transit, and on disk; Podman's encryption depends upon an OCI registry, and file system contents are encrypted in the registry and in transit but decrypted on disk.

  • Unprivileged networking: Apptainer doesn't provide it (all network operations require privilege); Podman implements it through slirp4netns.

  • NFS and parallel file systems: Apptainer can use them without problems; Podman can run into issues if the home directory is served from NFS or a parallel file system because of problems with UID mapping.

  • SELinux: Apptainer defaults to no specific SELinux context; Podman defaults to an SELinux context that prevents mounted volumes from being written to, though the volume context can be changed with the :z/:Z options.
