CIQ

Singularity Is Now Apptainer! What Does This Mean for You?

February 17, 2022

Webinar Synopsis:

Speakers:

  • Zane Hamilton, Vice President of Sales Engineering, CIQ

  • Forrest Burt, High Performance Computing Systems Engineer, CIQ

  • Ian Kaneshiro, Software Engineer, CIQ


Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.

Full Webinar Transcript:

Zane Hamilton:

Good morning, good afternoon, good evening, wherever you are. Welcome to the CIQ webcast. For those of you who have been with us before, welcome back; for those of you that are new, we appreciate you coming. We appreciate your time. Go ahead and like and subscribe so you can stay up to date with what we are working on and what we are doing. My name is Zane Hamilton. I am a director of solutions architecture here at CIQ, and over the last few weeks we have been doing several of these. If you did not see last week's session with Greg Kurtzer talking about Warewulf, check it out; great talk. This week we are going to be talking about containerization, and about Apptainer in particular. I have a couple of colleagues with me, Forrest and Ian. Welcome Forrest, welcome Ian. Forrest, you have been on here before. Most of these people know you, but go ahead and introduce yourself.

Forrest Burt:

Hi everyone. My name is Forrest Burt. I am a High Performance Computing systems engineer here at CIQ. I have been a user of Singularity since about version 3.0.3 or so, when I used it to deploy containerized workloads across some High Performance Computing architecture at a previous institution. I am very excited to be here discussing Apptainer and to have seen the transition from Singularity to Apptainer. Thank you for having me, Zane.

Zane Hamilton:

Thank you. Ian, you are new to the channel. Welcome. We appreciate your time. Tell us about yourself.

Ian Kaneshiro:

It is great to be here. My name is Ian Kaneshiro. I am a software engineer here at CIQ and a maintainer of Apptainer. I first got started with Apptainer, in its previous form as Singularity, back in 2018, when the project was undergoing a rewrite from version 2 to version 3. I was involved with the initial version 3 release and have been with the project ever since.

Zane Hamilton:

Thank you. I think we should probably level set a little bit and define what we mean when we are talking about containers. How is Singularity / Apptainer different? How is it the same? Just a little bit of a level set, if you don't mind.

Apptainer vs Container [01:55]

Ian Kaneshiro:

We can start off with what a container is. From my perspective, the way that users of Apptainer use containers is really as a way of packaging an application, including all of the application's dependencies, in a way that is portable across systems. If you build a container on one system, you should be able to run it on another system. All of the shared libraries and things like that, which an application might depend on, are actually packaged within the container image and carried around with it to whatever machine it is being run on. I would say the primary use case that users of Apptainer reach for containers for is High Performance Computing workloads. The requirements of these environments are different from what you would typically see in environments that run Docker; here you need hard separation of users by POSIX permissions. That means you cannot have a root-owned daemon spawning containers that any user can connect to and control. Apptainer allows users to spawn containers as their own user, and all the processes spawned by Apptainer stay as that user and are unable to escalate privilege at all.
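As a rough sketch of what that packaging looks like in practice, an Apptainer definition file names a base image and the steps that install the application and its dependencies into the image. The base image, packages, and runscript below are purely illustrative examples:

```text
Bootstrap: docker
From: ubuntu:20.04

%post
    # Everything installed in this section is baked into the container
    # image, so the application carries its dependencies with it.
    apt-get update && apt-get install -y python3 python3-numpy

%runscript
    # Default action when the container is invoked with "apptainer run"
    exec python3 "$@"
```

A file like this would typically be built into a single SIF image with something like ‘apptainer build myapp.sif myapp.def’ (adding --fakeroot when building as an unprivileged user), and the resulting file can then be copied to another Linux system and run there.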

Zane Hamilton:

Staying in that same vein, why would someone choose Apptainer for their HPC environment? I mean, you gave some pretty good examples. Is there any other reason? Dive into that a little bit.

Why Apptainer? [03:19]

Ian Kaneshiro:

One of the unique things about Apptainer is the image format that it uses. Most of the container ecosystem, and actually all of it to my knowledge, uses container images in the form of a root filesystem directory. The image itself is stored as a tar archive and then extracted into a directory, then used directly from there. For Apptainer, we actually have a SquashFS file system inside of our SIF images, and we mount that directly using a loop device. This leads to some beneficial performance on shared storage systems, because it minimizes the metadata lookups on those types of file systems, which is very useful in HPC environments that tend to use those kinds of storage.

Zane Hamilton:

Excellent. Go ahead, Forrest.

Forrest Burt:

I was just going to say, Apptainer also has a big focus on integration over isolation. There is a big focus in High Performance Computing environments on being able to enable different kinds of specialized pieces of hardware and software, to be used by applications that are inside of a container. Rather than try to isolate the container from the host, Apptainer actually works to integrate the capabilities of the host within the container. You can do things like run MPI stacks out of it, be able to interface with GPUs, and things like that. Common aspects and pieces of different technology found in HPC environments can be pretty easily integrated into an Apptainer container and run across that architecture. That is another kind of big pull of Apptainer to HPC.

Zane Hamilton:

Excellent. One of the things that I really wanted to talk about with you guys today is that transition from Singularity to Apptainer. I know I say Singularity a lot of times now; I catch myself saying it pretty much all the time. It is new to me. I just want to understand a little bit, what does that migration look like when you are moving from Singularity to Apptainer?

Singularity to Apptainer [05:34]

Ian Kaneshiro:

The migration from Singularity to Apptainer has two aspects to it. There is what the system administrator needs to do in order to install Apptainer and retain the type of configuration that the container system had in its previous installation with Singularity. Then there is the user aspect, which is really just about migrating user configuration that was used by Singularity to be exposed to Apptainer for basically continuity. I think Forrest has a demo for those two aspects that he can go into.

Zane Hamilton:

Is there anything specific that you have to do? Do you have to rebuild that, from your definition file as you are moving from Singularity to Apptainer? Or can you just run the containers themselves?

Ian Kaneshiro:

For the migration itself, we are talking about the actual host machine that the Apptainer program, or formerly the Singularity program, is installed on in order to run containers. Zane's question is about users using containers that have already been built with Singularity with a new Apptainer installation. The answer to that is: there is no rebuilding or changes needed. Apptainer will run all of your Singularity-built images in the same manner as Singularity would run them. One of the core goals of this rename of the application was to minimize the impact on users. Using the same container format, and ensuring that old containers work with Apptainer just as containers built with Apptainer work with Singularity, was very important to make sure that you don't have disruptions to how the community uses containers.

Zane Hamilton:

So there is backward compatibility as well.

Ian Kaneshiro:

If your system is using Apptainer and your colleague has Singularity installed on their cluster, because they have not upgraded yet, if you build a container with Apptainer, you are going to be able to share that with them and they will be able to verify your results and do things like that.

Zane Hamilton:

Excellent. Forrest, I think you are going to show something. I know we have a question out there about other use cases. Let's let Forrest get through this and then we will dive into that.

Forrest Burt:

I have a real quick demo to go over the components of that migration. 

I am sitting here on a VM. This is basically just running Rocky Linux. What I am going to show you here is the system and the user side of this migration. This Rocky machine has Singularity installed on it. You can see, I can do ‘singularity --version’ and you will see we have version 3.8.3 on here. In just a moment, I will go ahead and run Apptainer, and what we are going to see happen here is, first off, there will be a notice printed saying something to the effect of: there is still a Singularity configuration directory found out there, and it does not appear as though the sysadmin in charge of this server has migrated those configuration options from Singularity to Apptainer. Then you will also see a message come up that shows some info about some of my user Singularity configurations being moved over to Apptainer, and you will see a couple of messages which describe exactly what is being moved there.

I will go ahead and run Apptainer for the first time. This is, as I alluded to, the first time I have run Apptainer on this virtual machine. You will only see one of these migration messages once: the user side one. The system side one will continue to appear until that migration is done. I will show you that in a moment. But we will go ahead and run Apptainer for the first time. We will see that up here we have a few messages, namely this ‘/usr/local/etc/singularity exists, migration to apptainer by system administrator is not complete.’ As I mentioned, that is the system side of the migration, notifying you that it is not complete. We also have these detected Singularity user configuration directory messages, about a public and private PGP key ring. Those are about those keys being migrated.

If I go to my per-user Singularity directory, which I have here in my user home, and I do this, you can see that I have two existing PGP keys sitting in there. Now if I look at ‘.apptainer,’ which did not exist before I ran this command, you can see we have ‘ls .apptainer/keys/’, and we can see that those keys have been migrated over. That is how your user configurations will get moved over to Apptainer when you run it. When you want to do the sysadmin side of it, we will navigate to ‘/usr/local/etc.’ We will just take a look at what is in here. You can see we have Singularity and Apptainer.

If we take a look at what is in Singularity, you can see we have the system Singularity configuration files in here. These are things like singularity.conf, the library lists for GPU support, that kind of thing. If we want to go ahead and migrate our configuration over, that is just as simple as doing ‘sudo cp singularity/singularity.conf’ and then we will land that at ‘apptainer/apptainer.conf.’ As I mentioned, if you have custom Singularity configuration files you have built up over time, we can do this to migrate them. Once we have moved over everything we need to move over from that /usr/local/etc/singularity, we can go ahead and, I will just take the nuclear option here, we will do ‘sudo rm -rf singularity’ to get rid of that directory entirely. That will get rid of it. We will go back here. Then when I run Apptainer, you will see that we do not get that sysadmin migration message anymore. You will see up here at the top, we do not get that per-user message because that has already been moved over. Then, because we just moved the system stuff over and got rid of those Singularity files, we do not get any more messages saying that the migration is not finished. The transition overall from Singularity to Apptainer is pretty simple. It is basically just moving over your configurations, which you need to do on the system side, but it is pretty simple overall; it is a pretty seamless transition.
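The sysadmin side of that migration boils down to a few file operations. The snippet below rehearses them against a scratch directory standing in for the real /usr/local/etc, so it is safe to run anywhere; the file contents are placeholders, and on a real system you would use sudo and the actual paths from the demo above:

```shell
# Scratch directory standing in for /usr/local/etc on a real system.
ETC=$(mktemp -d)
mkdir -p "$ETC/singularity" "$ETC/apptainer"
echo "# example main config"      > "$ETC/singularity/singularity.conf"
echo "# example GPU library list" > "$ETC/singularity/nvliblist.conf"

# The main config file is renamed singularity.conf -> apptainer.conf;
# every other file keeps its original name.
cp "$ETC/singularity/singularity.conf" "$ETC/apptainer/apptainer.conf"
cp "$ETC/singularity/nvliblist.conf"   "$ETC/apptainer/"

# Once everything has been copied over, removing the old directory is
# what stops the "migration ... is not complete" notice.
rm -rf "$ETC/singularity"
ls "$ETC/apptainer"
```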

Zane Hamilton:

There are not really any lingering issues when you are transitioning from Singularity to Apptainer that you have noticed or run across, are there?

Forrest Burt:

No, not really. It works pretty seamlessly. If you are working with RPMs, you may have to do some uninstallation of a previously installed Singularity RPM, because Apptainer will want to install over that. You might have to do that, but in general, the instructions are out there on GitHub for how to install Apptainer, and installing it from source is pretty simple, or using the RPM, if you want to go that way. One of the big points of the transition and the engineering work that has gone into it thus far has been to ensure that backwards compatibility is pretty seamless, and thus far that has pretty much gone well. There is not too much else you have to look at there. Ian, do you have anything you would want to add to that, or is it pretty much seamless at this point?

Ian Kaneshiro:

No. You covered it. If there are any issues, we would love to hear about them. I think in the description there is a link to Apptainer on GitHub, so you can post an issue if, during your migration, you come across problems that need to be addressed.

Zane Hamilton:

Very good. One of the things that we keep running across, or I keep having brought up in my discussions, and a lot of that is outside of just HPC but kind of in the enterprise spaces, is: what else can be done with Singularity? What other use cases outside of HPC are we seeing?

Singularity Use Cases [13:21]

Ian Kaneshiro:

I would say any use case where you need the ability to access a lot of resources on the host. Most container systems' goal is to give you a view of your container as if it were a VM, isolated from the host and its own thing. The container process basically doesn't have any real concept of anything else on that machine other than what was exposed to it very intentionally, whereas Apptainer has the concept of integration over isolation. By default, we do things like expose host file systems and host devices into the container to make them accessible, because the core use case is really about getting the most out of the hardware that you are running your container on. Outside of HPC, I am not sure which way to go with that question because, when I think of HPC, I usually think academic, but there are a lot of enterprise workloads that are moving towards more of an HPC style. Some enterprises have always been in the HPC arena, doing real-world modeling in order to run simulations of the world around them. Whether that is structural analysis or computational fluid dynamics, those types of workloads have always been important outside of the academic community in order to do engineering work.

I would also say Apptainer has some interesting features related to signing and encryption. If your use case requires things like being able to guarantee that the container you are using was created by the holder of a PGP key, you can do that with Apptainer very easily. What Forrest showed in his demo was moving a public and private key ring. That can be used with Apptainer to check whether a container image, a SIF image, which natively supports these types of signatures, has been signed by a particular individual, and whether it has been signed by an individual that you might require in order to run on your system. We have the concept of an execution control list related to signatures, so an administrator could say, “I only want these identities to be able to build containers that run on my system,” and then that will be enforced by the runtime itself. There are also features around container encryption. We encrypt the container file system and all of the application data encapsulated within that container. That allows you to have encryption of the container image not only in transit, but also at rest on the node. It is only decrypted within the memory of the node when it is actually run for execution.

Zane Hamilton:

Very nice. I think there was another question that just popped up about migration: any other files/directories need to be migrated or changed, Forrest, as you are going through that, or just that Singularity directory?

Forrest Burt:

I am going to pass that off to Ian. Those are the ones that I am aware you have to do. Ian, are you aware of anything else that you need to do just moving those system files and making sure that your per user configuration is over?

Ian Kaneshiro:

No. What I will do is clarify a little bit on the system administrator side of things. The one main change is in the Apptainer configuration directory, which I think is called SysConfigDir in our build system and in our RPM packaging. The actual main configuration file was previously called singularity.conf, and now it is called apptainer.conf, so you will need to do a file rename. All the other files are named identically to how they were for Singularity installations, so you can actually just move those over directly without a rename.

Zane Hamilton:

It seems like this whole thing is set up to be very easy to maintain. Maintaining an Apptainer environment seems fairly straightforward. Am I off?

Ian Kaneshiro:

That is one of the goals, not only of this rename, but of the project itself: to make the workload of the administrators administering these clusters easier. If there are sharp edges or things that are problematic for administrators, we want to hear about them in order to help alleviate that.

Zane Hamilton:

But usually whenever I think of something being simple, I also think of limitations that come along with it. Are there limitations with what I can put into an Apptainer container, or the number of applications I can put in it? What are the limitations?

Limitations of Apptainer Containers [17:47]

Forrest Burt:

I would say that the limitation is probably more practicality than anything. There is not really a limit to what you can put inside of that file system. It is pretty extensible how you define what you want the container to do on build, and you can basically install things into it with more or less the same procedures you would use on the host. There are caveats related to how containers technically work, but, for the most part, you can do basically anything that you would do on the host. I personally have never met an application I could not containerize with Apptainer. Like I said, it comes down to what is going to be practical, what is going to be best moving forward to manage that container: whether it has a lot of different software dependencies, things that are going to have to be updated and maintained, and so on. It basically comes down to practicality and how much maintenance you want to do on that container depending on what exactly you have put in it, but there is no real technical limit to what you can put inside of one.

Ian Kaneshiro:

To continue on that thread, one of the limitations I would point to is not actually with what you can put into a container, but with how you can run the container. Apptainer is a Linux container platform, which means you need applications that are able to interact with a Linux kernel for system calls. You could not take a Windows application or a macOS application, put it inside of Apptainer, and expect it to run, just as you cannot install those on a Linux host and expect them to run natively. You could try using something like Wine to do emulation, but for the most part, users of HPC, which is kind of our core crowd, are already using Linux, so it's a very natural fit.

Zane Hamilton:

Sure. So making that statement really means it is very kernel dependent.

Ian Kaneshiro:

It is kernel dependent in terms of the kernel flavor, but with respect to kernel versions within Linux, it is actually very portable. We recommend having a kernel of 3.18 or higher, which is a relatively old kernel, and the kernel itself as a project has done a great job of being very stable with its system call interface, which is the reason why Linux containers are able to exist in the form that they do.

Zane Hamilton:

We have a question about environment variables. Can you see those environment variables after your container in Apptainer has been executed?

Environment Variables [20:05]

Ian Kaneshiro:

Yes. Apptainer will automatically forward environment variables. Apptainer is usually spawned from a user shell or from a batch script, and once Apptainer is called, it will take the environment that currently exists where it was called and propagate that to the container environment. You can also explicitly set environment variables that you only want to be exposed within the container by using a particular prefix, APPTAINERENV_, followed by the environment variable name and the value. In general, and in the standard use case, Apptainer will automatically forward environment variables to your container. You can use flags if you would like to prevent environment forwarding in order to stop that from happening.
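A small sketch of that prefix convention is below. The container name is hypothetical, so the actual apptainer invocations are shown as comments; only the variable handling on the host side runs here:

```shell
# A variable with the APPTAINERENV_ prefix is set only inside the
# container, with the prefix stripped off:
export APPTAINERENV_DATASET=/scratch/run42

# Inside the container this would appear as DATASET=/scratch/run42,
# e.g. (image name is a made-up example):
#   apptainer exec mycontainer.sif sh -c 'echo "$DATASET"'
# And to suppress the normal forwarding of the host environment:
#   apptainer exec --cleanenv mycontainer.sif env

echo "$APPTAINERENV_DATASET"
```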

Zane Hamilton:

This is an interesting question. I have not ever really thought about this before, but will there be an Apptainer hub, kind of like Docker Hub or Singularity Hub?

Ian Kaneshiro:

I think that is a good question, and it really comes down to what the needs of Apptainer and of SIFs are in order to be stored and used by the user base. The nice thing about the image format being a single file is that you can use any form of storage that is able to handle blob storage, essentially. Singularity Hub is in read-only archive mode right now, so you can still pull images and run them. But if you need a place to push SIF images, you can use blob storage systems like S3, and you can also use OCI registries that support the OCI artifact specification, which is essentially a way of using OCI registries as arbitrary blob storage, which fits nicely with current infrastructure. If a registry supports that API, then you can store both your Docker images and your Singularity images in the same registry, side by side.

Zane Hamilton:

All right. So I know, Forrest, you talked about MPI applications a little bit earlier when we started looking at running Apptainer and dealing with file systems - how does that work? Can you use external file systems or will it handle external file systems?

Apptainer and File Systems [22:25]

Forrest Burt:

Most definitely. This ties in a little bit with what we were saying about the integration versus isolation approach. Apptainer is basically sitting on your server as a single runtime. There is no daemon-based system, or anything like that, that images have to be built against. It is just the single Apptainer runtime, which runs single-file SIF containers. When you are actually using Apptainer on a server, you can treat your Apptainer containers just the same as you would any other file, and run them with that runtime. It also gets into integration over isolation: Apptainer, as we have said, is built to interact with the external file system with regard to host devices, host capabilities, host software stacks, things like that. There is the ability to bind files in from that host file system if you need to, and those can appear at whatever location in the container you would like them to be. Apptainer in general does not have a problem dealing with external file systems; it is really meant to integrate with them, so that is one of its primary points.

Ian Kaneshiro:

I will just add onto that to say that by default Apptainer mounts the user's home directory into the container. For usual work with HPC resources, that user home directory is typically on a network storage system, and there is also usually a /share mount. Apptainer will mount your home directory, so you will have all of the user configs that you would normally have on that node inside of the container. It will usually just feel like you have swapped your user space operating system when you use an Apptainer container.

Forrest Burt:

I have something that can show off being the same user inside and outside of the container, if we would like to do that real quick.

Ian Kaneshiro:

Yes.

Zane Hamilton:

I think this leads into the next topic I wanted to get into, which is multi-tenancy. How does this work in a multi-tenant HPC environment?

Forrest Burt:

Let me show you what a typical flow would look like for getting a container from some type of registry out there onto your server and then being able to use it with Apptainer. I will just go ahead and show you. We will do something really simple here with the Ubuntu container. We will do ‘ubuntu:focal’ and we will just go ahead and pull that.

Ian Kaneshiro:

While he is pulling that, I will just say that what is happening right now is he is actually pulling a Docker container from Docker Hub, the default Ubuntu image, or I guess the focal tag, and in the background, we are quickly building that into a SIF and caching it within his home directory in order to use it as a SIF for future commands. You can use any OCI image as a source for an Apptainer container.

Forrest Burt:

One big thing to note when we are doing this is that there is essentially no concept of layers inside of an Apptainer container. While Docker containers and containers from other runtimes are built in a layered format, Apptainer essentially takes all those layers and squashes them together into that single SquashFS container file system. There is no concept of layers in it, and that decreases your attack surface for cybersecurity and other purposes, which is useful. You can see that this has gone ahead and finished. We have created a SIF; by doing ‘ls’ you can see the focal image over here. I will go ahead and, really quickly, just cat this text file here, so you can see what is in it.

If I go ahead and do ‘apptainer shell,’ and then I put in this container here to get a shell into this container, you will see that I get an initial error because I am sitting on a VM and this happens. Really quickly, I will do ‘cat /etc/os-release.’ You can see that I am here on Rocky. If I do a ‘whoami,’ you can see that I am ‘test,’ just a test user on this VM. If I go ahead and do ‘apptainer shell,’ you will see I am now inside of this container and I have a shell into it. If I do ‘cat /etc/os-release,’ you can see, we are now on Ubuntu. If I do an ‘ls,’ you can see that I can still access all of my standard files here. I can cat that right there.

If I do ‘whoami,’ you will see I am still ‘test’ here. I can even do things like, for example, echo ‘this is more test text’ and append that to this file that I have here. Then I can even do this. You will see that those are both visible inside of the container here. This file, right here. You will see we have managed to append to that successfully, and we have created this other file successfully. If I exit out of the container, you can see that both of those files are also present, and I can do this to see the file that I created just then; we only expect one line. Then, checking the other one, you can see we have both the original line and the one that I added there. The point being here, as we discussed, in the multi-tenant environment of an HPC cluster, the standard way that Apptainer runs is to make the user inside of the container the same user as outside of the container and preserve all those access controls.

Once again, just to show you, if I go into this container and I try to ‘cd /root,’ I get permission denied. If I try to do this, you can see, once again, permission denied. I remain the same user with my same user's permissions inside of the container. There is no elevation or anything to worry about there. In the multi-tenant environment of an HPC cluster, that is how Apptainer works to preserve access controls and make it a viable option for people to deploy their workloads.

Ian Kaneshiro:

One thing I will add there is that the processes spawned by Apptainer have a process flag set, called PR_SET_NO_NEW_PRIVS, which basically means that that process and any descendant processes have no ability to gain any additional privilege on the system beyond what they currently have. Typically, this actually makes running any kind of application inside of an Apptainer container safer than if you ran it just as your user on that system, because sometimes you might have capabilities assigned or the ability to use setuid programs and things like that. This just blocks all escalation.

Forrest Burt:

The recent pkexec vulnerability, for example, when tested in Apptainer, and I believe we have a video out there of trying to exploit it inside of an Apptainer container, is just shut down natively because of its security model. There is a real-world example of how Apptainer increases the security on your system beyond what is available normally.

Zane Hamilton:

One of the questions that came in, and I am not sure I understand the question fully, but it is asking about build remote. Will there be the ability to build remote?

Build Remote? [29:55]

Ian Kaneshiro:

That was essentially an API that is not widely adopted within the community, not to the point where we could make it an open standard. It is solely for a commercial organization's cloud platform, and as a part of our migration to the Linux Foundation, we needed to remove proprietary APIs from our application, so we had to remove that one in order to be compliant. I would say that most of the benefits of remote build are actually something you can replicate with what we call our fakeroot feature. On standard installations of Apptainer, when you build, you can use the --fakeroot flag. What this will do is allow you to run your build as if you were the root user inside of the build environment, while you are still an unprivileged user on the system. It is still safe to do, because the primary use case of the remote builder was container definition builds, which required you to be root in order for things like package managers to work properly, since they usually do a straight user ID check to make sure that it is zero. Fakeroot allows these types of builds to proceed.

Zane Hamilton:

Very good. Thank you. Forrest, I know you and I talked about this one about before, but does Apptainer support or have support for GPUs?

Does Apptainer have support for GPUs? [31:26]

Forrest Burt:

Absolutely. GPUs are one of the pieces of hardware that Apptainer is built to support, because those are a massive High Performance Computing use case. At the moment there is native support inside of Apptainer for both NVIDIA and AMD GPUs. When we were looking at those system configuration files earlier, you may have noticed an ‘nvliblist’ and a ‘rocmliblist’ file. Those are maintained lists of what the container needs in order to be able to support GPUs inside of it: the libraries that have to be bound into the container when you want to run a GPU application. Apptainer has a specific flag, --nv, that brings, for example, those NVIDIA libraries into the container to make them available for applications that need them. Apptainer absolutely supports GPUs. That is a huge High Performance Computing use case, and it even has native support for the two major GPU manufacturers out there.

Zane Hamilton:

That is great. So Ian, I have another question for you since I have you here. Is there anything specific that I have to do to Apptainer to get that involved with my HPC job scheduler?

Job Scheduler and Apptainer [32:34]

Ian Kaneshiro:

Oh, for using it with a batch scheduler, like PBS or those types of things? Not really. You just take whatever application you would normally run within your batch system and add `apptainer exec`, with the container image you want to run, inside of your batch scripts. That will go ahead and spawn those containers on the fly, just as you would spawn any normal application, when your batch scheduler runs that script on a node.
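As a minimal sketch, assuming a Slurm cluster and hypothetical image and application names, the batch script changes by only one prefix:

```shell
#!/bin/bash
#SBATCH --job-name=containerized-app
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Where the script would normally run the application directly
# (e.g. ./my_app input.dat), wrap it in the container instead:
apptainer exec my_image.sif ./my_app input.dat
```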

Zane Hamilton:

Very nice. Thank you. Now that Singularity is Apptainer, is there anything else that is going to change? Anything to look forward to?

Changes and Features [33:22]

Ian Kaneshiro:

Sure. I would say the main new feature is currently experimental and is going to be in v1 of Apptainer; it is in the release candidates that are out right now as well. That is the ability to checkpoint container instances using a project called DMTCP, which allows transparent checkpointing of dynamically linked applications. This will give us the ability to do things like run a container instance on one node, and if for some reason that container job needs to be migrated, because maybe that node has an issue and there is concern it might fail partway through its job, you could checkpoint that container instance and then start it on another node, assuming they both have the same shared storage system and your home directory is available on both of those nodes.
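A rough sketch of that workflow, assuming the subcommand and flag names from the experimental feature in the v1 release candidates (these may change between versions; the image, instance, and checkpoint names are placeholders):

```shell
# Create named checkpoint storage, then start an instance with DMTCP enabled:
apptainer checkpoint create my-checkpoint
apptainer instance start --dmtcp-launch my-checkpoint myimage.sif myinstance

# Checkpoint the running instance and stop it on this node...
apptainer checkpoint instance myinstance
apptainer instance stop myinstance

# ...then resume from the checkpoint, e.g. on another node with the same shared storage:
apptainer instance start --dmtcp-restart my-checkpoint myimage.sif myinstance
```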

Zane Hamilton:

Very exciting.

Forrest Burt:

Broadly, with the future of Apptainer, there is a roadmap out there that describes some of the features we are looking at. I know, for example, I am really excited to see increased ability to use Apptainer with some of the more specialized High Performance Computing hardware and software that is coming out right now. Apptainer has a really exciting future ahead of it with regards to what we are doing with High Performance Computing technology, and I am very excited to see where that goes and how some of the new features we have been talking about on the roadmap get implemented.

Zane Hamilton:

One of the questions I have for you two is, why did Singularity get renamed Apptainer and then moved into the Linux Foundation? What did that achieve?

Why Rename Singularity to Apptainer? [35:08]

Ian Kaneshiro:

The rename is actually a direct consequence of the decision to move into the Linux Foundation. When the opportunity arose to move into the Linux Foundation, we reached out to the community and said, “Hey, if you want to be involved with the future of this project, please get back to us,” because we wanted to be able to talk about this kind of move. Not in a public forum; it was not super secret, but we wanted it to stay out of the public eye so as not to generate hype at that point. We wanted those involved within the community and invested in the future of the project to be part of the decision-making process of that move. So we went ahead and sent out a document, I think it was, to articulate what would be necessary in order to join the Linux Foundation.

One of those things was a rename of the project, because the Singularity name itself comes from pop culture and is impossible to trademark. I am sure that makes sense to most people. We needed a name that the Linux Foundation could hold a trademark for in order to protect the project. We asked for suggestions for future names and what people thought of the move, and when we got all the feedback, it was clear that moving to the Linux Foundation was the right move. Then we had to do the hardest thing programmers can do: name something. We sent out a poll to those community members and Apptainer won. That is how we ended up with Apptainer within the Linux Foundation. As for the reasons for moving into the Linux Foundation, a lot of the benefits are really about ensuring the project stays within the open source community and cannot be controlled by a commercial entity.

The Linux Foundation does a great job of ensuring projects stay open source and that any interested contributors or parties can be a part of those communities. That is really important to the Apptainer community. It also gives us the ability to cross-pollinate with some of the projects under the Linux Foundation umbrella within the HPC scope. We would love to be closer with projects like OpenHPC, as well as with the general container ecosystem, which is primarily cloud-dominated. Organizations like the CNCF have a lot of interesting initiatives and projects, and we would love to have those as potential integrations.

Zane Hamilton:

Excellent. You have mentioned the community and people being involved in that. How do people get involved in the Apptainer community?

How to get Involved [37:46]

Ian Kaneshiro:

I think in the description there is a link to our website, the GitHub repository, and our Slack. The most accessible way for most people is to just join Slack and say hi, and say what you are interested in. If you have any questions about the project, you are always welcome to ask them there. If you are running into problems with the Apptainer program itself, you can either ask those questions within the Slack channel or post issues within our GitHub repository. That is where we do all of our issue tracking and make sure things get fixed, so it is the better location for filing an issue itself.

Zane Hamilton:

Excellent. I do not know if we have any more questions out there; if you have a question, post it. I will see if we can answer it real quick while I have these guys' time. And if we don't, I just want to thank you for joining. Thanks, Ian. Thanks, Forrest, for spending some time with us. Forrest, do you have anything else you wanted to talk about?

Forrest Burt:

Just that with that increased push for containers in the cloud, and that discussion of containers and job schedulers: not to imply too much, but HPC is traditionally a field that containers have struggled to move into, for the reasons we have discussed here, the security model and the specific needs of the environment. I just want to really quickly point out that with this new revolution in HPC that is going on with the movement to containers, something that we are really big on here at CIQ is mass orchestration of those containers in the HPC environment. To that end, we have something very big we are working on in that regard, and we are very excited to bring it to the broader market. Keep an eye out for that.

Zane Hamilton:

We just had one pop in, asking about new features and updates in Apptainer over Singularity. I think we touched on that a little bit with being able to basically suspend a container and move it if you needed to. Is there anything else, Ian, that you can think of?

Ian Kaneshiro:

No, I would just say that one of the things to look forward to with Apptainer is that functionality overall is going to improve. If you are using one of the older versions of Singularity and have not migrated yet, all active development and bug fixes for the project are going into Apptainer, so if you run into any issues, Apptainer is more likely to contain the fix for that bug.

Zane Hamilton:

Excellent. Well, guys, if we don't have any more questions, we will wrap this one up. Appreciate you coming, looking forward to next time, I think probably two weeks from now. We will have another topic for you. Ian, appreciate the time, Forrest as always. It is good to see you two and thanks for joining. Do not forget to like and subscribe. Appreciate it.