CIQ’s HPC Stack and How to Best Utilize It
This webinar will cover the CIQ stack for traditional HPC, made up of Rocky Linux, Apptainer, and Warewulf. We'll be discussing the integration of Apptainer and Warewulf into Fuzzball (HPC 2.0) and how they both run on Rocky Linux. We love hearing from you! Our experts will be answering questions live throughout the webinar.
Webinar Synopsis:
Speakers:
-
Zane Hamilton, Vice President of Sales Engineering, CIQ
-
David Godlove, Solutions Architect, CIQ
Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.
Full Webinar Transcript:
Zane Hamilton:
Good morning, good afternoon, good evening. Welcome back to another CIQ webinar. Today we're going to be talking about the CIQ HPC Stack and best utilization for it. And today we have Dave Godlove on with us.
Dave Godlove:
Hey, everybody.
Zane Hamilton:
Excellent. It's all working, Dave.
Dave Godlove:
Great.
Zane Hamilton:
So Dave, I think we've spent a lot of time over the last couple of months talking a lot about the individual projects and tools that CIQ works on and that we support and that we're building, but I don't know if we've ever actually tied them all together and told that story. So if you would help me go through this and let's talk about from the base up, what are we doing at CIQ from a foundational level up through an innovation level, and let's talk through our products as a whole.
CIQ's HPC Stack [00:52]
Dave Godlove:
To reverse what you said and start off at the top level: the way that I see it, we're basically rebuilding high performance computing from the ground up at CIQ. We are just sort of reinventing the way that high performance computing is done. Right now, I think HPC is a little bit different at every site, and part of the reason it's a little bit different at every site is because it's this collection of tools that people sort of pick and choose, grab this and that, and put together their own solution. Which is good in a way, but also it's piecemeal. And so what we're doing at CIQ is we're saying we can take all these pieces, we can take the entire stack from the bare metal all the way up to the applications that the scientists are running, and we can put it all together in a coherent whole. And so that's what I really see that CIQ is doing.
Zane Hamilton:
So if we're going to go that direction, I don't care which direction we go, but I was saying at the bottom, starting with Rocky Linux and where we've come from with the end of life of CentOS and it becoming CentOS Stream. Now we have Rocky Linux, and Rocky Linux 9 is now out. So that becomes the foundation of it. And I think everybody's pretty familiar with Rocky, and we've told that story a lot. But if you go up a level from that, from the CIQ perspective, we start talking about Warewulf. I know we've spent some time on it, but tell me a little bit about what Warewulf is and what problem it solves.
What is Warewulf? [02:23]
Dave Godlove:
Yeah, so Warewulf is obviously a provisioning system, which has been around for a fair amount of time at this point, almost 20 years now, I guess. But at the same time, even though it's been around and it's a mature product, it's also really very new, because Warewulf 4 is a totally new take and a new rewrite of the old project, and it's breathing some new life into the project as well. Most people are probably familiar with it as a provisioner: a way that you can get your cluster up and going and make sure not only that it's provisioned properly to begin with, but that it continues to get appropriate updates out to all the nodes of the cluster.
But one of the things that is really great about Warewulf is the way in which it's tying together not just traditional on-prem, but also cloud resources, right? So you can provision not just your local on-prem HPC cluster with Warewulf; you can also use it intelligently, as we do (to fast forward a little bit) with Fuzzball, to provision your cloud resources. And so we're taking what would traditionally have been two disparate ideas, these two different ways of doing things, and we're wrapping our arms around both of 'em and pulling them back together and making them work in both instances.
Zane Hamilton:
That's great. I know the stateful versus stateless comes up quite a bit, Dave, and I know Warewulf 4 is a stateless provisioning system. Why is that something that is important or what benefit does that give a user to have it be stateless?
Stateless [04:30]
Dave Godlove:
Yeah, I mean, I think that one of the things that it gives you is simplicity of your hardware and simplicity of just the way in which you provision. I myself have not spent a whole lot of time provisioning nodes. I mean, I work on things at the opposite end of the spectrum. I'm more of a scientist sitting on top of the stack and trying to figure out things that I can do to break it and make the admins upset. So I do know that it's a topic that a lot of people really feel strongly about. But yeah, I know that if Greg were here, he would probably just sort of laugh and say that stateless is definitely the way to go. There's no need to worry. Stateless is a really great way to provision the nodes, and he'd have a hundred different reasons why. But I don't know the reasons off the top of my head.
Zane Hamilton:
Absolutely. And I think one of the things that we're running across quite a bit lately is people being concerned about being able to write things to local disk. And it's certainly something that you can still do, even in a stateless environment. You can still use local disk, you can still have those logs get written out so that if something happens, you can still go get 'em. And I think that's one of the things we're running across and having to do a little bit of that education on: just because it's stateless doesn't mean that you can't still use a local disk. So we hear that a lot, and there are a lot of ways that we can talk about that and discuss that. But one of the things that I know you're passionate about and you spent a lot of time with is Apptainer, and that's the next step up in the stack here. We talked about the ability to have it as a single file, it's easier to pass around, and it maintains the security aspects of it, but how does that really work in an HPC environment? Why does that make it a really solid and different way of doing things?
Security [06:24]
Dave Godlove:
From the security standpoint, we've talked a lot about the SIF file format and what that enables from a security standpoint: how it allows you to vet the sources of your container builds, or just of the containers that you run, and how it allows you to run in an untrusted environment where you don't necessarily want other users peeking in on what you're doing. But I think we used to talk about this a lot, and we don't talk about it quite so much anymore because it's well understood: the security of the container runtime itself is what's really key and what's really important to an HPC software stack.
I mean, we have to remember that when Greg first started the project that eventually became Apptainer, Singularity, there was really no other container runtime that was designed to be run in a multi-tenant environment in the same way in which Singularity, and then Apptainer, was designed to be run. And because of that, there was no way really to install other container runtimes within the multi-tenant environments of HPC. So I've talked a little bit about it in the past, but the whole idea of being the same user inside and also outside the container was really a very novel idea, and still is a pretty different idea. And not only does that set up the basics for you to be able to provide security using the same kinds of tools that you would use on bare metal within that environment, but it also has all these cool side effects. It allows you to interact with the files on the host system seamlessly and to write files out to the host system without having any kinds of weird permission or ownership issues. It allows you to see all the things on the system that you would normally be able to see and to interact with all of them. So it's really a hand-in-glove fit for HPC; it was really designed by HPC experts for HPC experts.
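As a concrete sketch of the same-user model Dave describes, assuming Apptainer is installed and a SIF image is on hand (the image name here is just a placeholder):

```shell
# On the host, check who you are:
id -un                                   # e.g. jdoe

# Inside the container you are the same user -- no root daemon,
# no uid remapping:
apptainer exec mycontainer.sif id -un    # same user, e.g. jdoe

# Files written from inside the container land on the host with
# your own ownership, no permission fixups needed:
apptainer exec mycontainer.sif touch "$HOME/results.txt"
ls -l "$HOME/results.txt"                # owned by you, not root
```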
Zane Hamilton:
I think a really important point in this is when you start talking about multi-tenancy: most of the HPC systems, the people that we talk to, they are doing multi-tenant. And again, whenever you have a container runtime environment where everything is running as root, obviously you have some serious possible concerns there, and that's what makes Apptainer very different. But I think when you start laying everything out from a building-a-container standpoint, and whenever you start importing those things back in, if I want to share that container out, the process is a little bit different. And then if I am in one of the other regular container environments, I gotta go build that thing layer by layer every time I pull it anywhere. With Apptainer, it's just a file. So I'm really just shipping around a file. It makes it very simple, and that way I can pull it into a bunch of nodes at once; it's a lot faster. I don't have to rebuild it on every node. Correct me if I'm wrong.
File Management [09:47]
Dave Godlove:
And I think we should start thinking more about that. So that's something that I've been running into recently and I've been scratching my head about and thinking about. If you're using OCI images, which are great for the things that they are great for, you have to have some sort of management layer, right? You have to have some registry, you have to have some manifest, to be able to take all these layers and put them together. And so because of that, people are just really, really used to the idea of having a registry, having a place where they push the containers, and then interacting with that registry through this layer when they want to know what kinds of containers they have on the system and everything.
Now, obviously you don't have to do that with SIF images because they're just files. So whatever file management scheme you can come up with to manage any other type of files, you can use the same file management scheme with SIF. But the thing that I've been scratching my head about recently is that a lot of people are still locked into that idea of needing this management layer. And so because of that, for instance, there's a really popular tool that Singularity and Apptainer have been able to leverage for quite some time, which is called ORAS. And ORAS is basically a way to take OCI images, and actually to take arbitrary data, and store it as first-class data within an OCI, Docker-style registry.
And so some time ago Apptainer started to leverage that in order to be able to store SIF images on Docker or OCI registries, and to be able to pull those down and use the same interface to interact with those images. Well, we're using that quite a bit, and that's great. But you don't necessarily have to set up any kind of registry to store your images anymore. And that's a weird idea, I think, to a lot of people. And I think a lot of people will be like, well, wait a minute, how are you going to store your images outside of a registry? And obviously you get a lot of benefits with a container registry, things like being able to tag your images, so I wouldn't want to suggest that everybody just goes and gets an S3 bucket, dumps all their images onto it, and starts downloading them. But I think that we as a community could probably think a little bit more creatively about how to utilize that feature of SIF.
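For reference, the ORAS workflow Dave mentions looks roughly like this in practice; the registry path is a placeholder:

```shell
# Push a SIF image to an OCI registry as an ORAS artifact:
apptainer push lolcow.sif oras://registry.example.com/myuser/lolcow:latest

# Pull it back down anywhere you need it:
apptainer pull oras://registry.example.com/myuser/lolcow:latest

# Or skip the registry entirely -- a SIF is just a file,
# so ordinary file tools work:
cp lolcow.sif /shared/containers/
```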
Zane Hamilton:
Along those lines, if I'm in an HPC environment, or I mean even an enterprise environment, where everything is air gapped, then you can't get out to a registry, and you may not have the skill, the desire, the resources, whatever it may be, to actually host a registry. That's a big benefit, to be able just to drop a file somewhere and share it out. So that's pretty cool, pretty important.
Dave Godlove:
Yeah, absolutely.
Zane Hamilton:
So one of the things that I've been impressed with is when it comes to building a container. I mean, with some of the other ones, if you're building an OCI compliant container for some of the other platforms, you've gotta build out that definition file, and you've gotta build it and then test it and make sure that it works the way you think it's going to. And if it doesn't, you gotta go back and rebuild it. And that can be fine. I mean, it works. But with Apptainer, being able to actually take a container, pull it down, and have it just create an image of the file system locally, one that you can treat as a file system, or that you can actually treat as a container and do things to it and write into it before you ever actually create your SIF, is pretty unique. It's different. That's not the way that other container platforms work, and that can be, in my opinion, a little easier for getting all the things you need into it.
Best Practices [13:48]
Dave Godlove:
Just like with many tools, it's a very, very powerful tool. And with that power comes the ability to really be destructive, right? So it's funny, because I think the last one of these webinars we were on, I was sort of trashing the concept of best practices. But now I'm going to jump in and start preaching about best practices, because I gotta contradict myself, right? But yeah, I mean, the ability to grab your container, to dump it out into a bare directory, and then to just shell into it and start writing to it and doing all that stuff, it's really, really powerful. I think that the best use case for it is probably development of new containers, in a very quick, efficient way. So the way that I would normally develop new containers, complicated ones with really complicated installations in them, is to do pretty much exactly that.
Take a best guess at what I need to begin with, putting a bunch of dependencies that I'm pretty sure I need into the container. Then build the container, dump it out into a sandbox, which is just a bare directory, and start it up with writable permissions. And then I would usually have two windows open, right? I would have the one window in which I was noodling around in the container, and then I would have the other window in which I was taking notes on what worked; not really even notes, but just recording the actual commands. And then I would continue to do that until I broke the container. The list of actual commands that I had would become the next iteration of the definition file. And then I would just rinse and repeat and continue to do that. And it's a really quick way to iterate on and build up what can be pretty complicated containers without having to completely rewrite a new Dockerfile every time. Maybe, if you're lucky, you can build that locally. But if you can't, then you have to push that up to a CI/CD environment, wait on it, download it again, look at it, see what's wrong with it, and that takes a long time.
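The iterative sandbox workflow Dave outlines might look something like this; the image and file names are just for illustration:

```shell
# 1. Best-guess starting point: build a writable sandbox directory
#    instead of a read-only SIF:
apptainer build --sandbox mycontainer/ docker://rockylinux/rockylinux:9

# 2. Shell in with write access and experiment, recording the
#    commands that work in a second window:
apptainer shell --writable mycontainer/

# 3. Fold the working commands into the %post section of a
#    definition file and rebuild; rinse and repeat:
apptainer build mycontainer.sif mycontainer.def
```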
Zane Hamilton:
Absolutely. And I think we glossed over the fact that Warewulf being OCI compliant now as well is a really interesting and powerful thing. I mean, we've been talking about actually doing containers with Apptainer, but we're also doing containers at an OS level, which is unique. There aren't a lot of other things out there that can do that. I mean, go grab a container from Docker Hub, make a couple changes to it, and deploy it out to a stateless bare metal machine; it's different.
Warewulf [16:33]
Dave Godlove:
I'm still trying to wrap my mind around it. Warewulf in general I'm fairly new to, but the whole idea that I can go to Docker Hub and I can grab a container, and then I can just install a kernel in it and make it bootable, and then boom, there's my image for my cluster. I mean, that's really, really cool.
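In Warewulf 4 terms, the "grab a container, add a kernel, boot it" idea might be sketched like this; the node names and image tags are placeholders, and exact flags vary a bit between Warewulf releases:

```shell
# Import a node image straight from an OCI registry:
wwctl container import docker://rockylinux/rockylinux:9 rocky-9

# Install a kernel inside the image to make it bootable:
wwctl container exec rocky-9 dnf -y install kernel

# Assign the image to some compute nodes and rebuild overlays:
wwctl node set --container rocky-9 n000[1-4]
wwctl overlay build
```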
Zane Hamilton:
That is really cool. It's certainly something I have never thought to even conceive the idea of how to do it. That's really cool.
Dave Godlove:
Well, I was just going to say, Greg is really good about that. So I've worked with Greg for a fair amount of time now; I guess it's been six or seven years we've been working together, off and on. And I think that one of the things that Greg as a developer is really good at is taking ideas from one place and then seeing how they apply, seeing how you can just grab 'em and wrap 'em in and make them work in another place. I mean, if you look at the definition file that Apptainer uses to create containers, it's pretty similar to packaging; it's pretty similar to how you package things up. And I'll probably totally misquote him and mess all this up, but I think he got the idea for having the single file format because he worked on a project a while ago that ended up being the thing that Live USBs were based on.
And from the whole idea of having a single image that you can boot something from, he was like, well, what about having a single file be a container? He's really good at cross-pollinating, taking ideas from one project and saying, well, how do those work? How would those work in this different project? And so Warewulf is a really good example of that. He's been thinking about containers for so long that it's like, well, these images, they're not so different from the images that we're using within Apptainer. Why don't we see how those two ideas play with each other and how they work together?
Zane Hamilton:
Yeah, that's very cool. And then if you take it up another layer, we started talking about Fuzzball, the product where CIQ is changing the way HPC is done, and it builds upon all of these other layers. So tell me a little bit about how it's built on top of those layers and what role they play.
Fuzzball [19:02]
Dave Godlove:
I'm still fairly new to Fuzzball, which is a drawback because I can't talk in depth about a whole bunch of things, but it's also a cool position to be in, to come and talk about it, because things are so new to me, and I've got the perspective of a new user, which I think is cool to share. So yeah, Fuzzball is taking all those different building blocks that we talked about previously, Rocky Linux, Warewulf, and Apptainer, and it's leveraging them all together to basically redo, or change, the way that people do HPC. So essentially the way that Fuzzball works is that you've got a little management cluster sitting there waiting to accept requests.
And you put together a little YAML file, which describes what you want your workflow to be. And once you've got that workflow, you submit it to Fuzzball, and instead of just being a scheduler and just saying, oh, well, where can I get these resources to run this job, it actually, in the case of the cloud, running on AWS or GCP or something, says, let me build the appropriate cluster for you. Let me build the appropriate compute environment for you and spin all this up, where I can take your workflow and then run it the way it ought to be run. And so that's really different. That's really cool.
And that's incorporating ideas from container orchestration and applying those back into the world of HPC. And so it goes out and it spins up whatever resources it is that you need; it uses Warewulf to provision the resources and get 'em up and running and get everything put together the way it ought to be, as far as I understand it. And so once it gets everything put together for you, it runs your workflow, and then it can go ahead and tear down that cluster for you, so that you avoid the costs that you would normally have by having all these nodes sitting there waiting for a workflow to run on them. That's how it works in the cloud. And it can also work on a traditional cluster, alongside another batch scheduling system or as a replacement. I think that there are lots of different ways that it can be used. But yeah, it's kind of like the glue: it's taking all these different pieces and putting them all together and making HPC 2.0 based on all these different building blocks.
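The workflow files Dave refers to are YAML. The exact Fuzzball schema isn't shown in this webinar, so the sketch below is purely illustrative of the shape he describes, a containerized job with declared resources, and the field names are hypothetical:

```yaml
# Hypothetical Fuzzball-style workflow sketch (field names illustrative):
version: v1
jobs:
  hello:
    image:
      uri: docker://rockylinux/rockylinux:9
    command: ["echo", "Hello from HPC 2.0"]
    resource:
      cpu:
        cores: 1
      memory:
        size: 1GB
```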
Zane Hamilton:
Absolutely. I see that we do have a question that is coming in. There we go. How does CIQ's HPC stack improve on existing HPC? I like having you on here, Dave, because I think you have a very unique and interesting perspective outside of just being an admin. You're actually a scientist and a user. So I think it's a good question for you to answer because you see it from both sides.
Cloud in HPC [22:31]
Dave Godlove:
Yeah, I think that one of the biggest benefits right off the bat is we see lots of different places increasingly trying to figure out where cloud fits into the HPC world. And not only are we seeing a lot of HPC admin teams and also scientists trying to figure out the answer to this question, but we're also seeing top-down pressure from administrative bodies saying, hey, we need to stop building these huge clusters on-prem and start leveraging these resources which already exist a little bit better. And that's a hard move. It's a hard set of questions to ask. It's a hard thing to do. And I think that Fuzzball is really going to help to ease that transition.
The entire CIQ HPC stack is going to help to ease that transition because it's built with both on-prem and also cloud in mind. And so, once again, as an end user, as a scientist, you can create these workflows, you can put 'em together, and you can submit them, and they will work just the same. You can have multiple different contexts within Fuzzball, so you can switch between your on-prem and your cloud instance, or, with a method of federation we have plans to work on in the future, Fuzzball will actually look at your workflow and try to figure out where it will best run, given the current pressures and given what you're requesting with your workflow and so on.
So that's one of the biggest things that I can see right off the bat. Another thing is really pushing everything into containers. So we're already seeing a trend, not just on the side of the users, in building and running and pulling their own containers and running their own workflows through containers and stuff, but also on the side of the admins, the support scientists, where when they are getting requests to install new software, increasingly they're looking to containers to make that installation procedure easier. So that's another place in which Fuzzball, and this whole stack, is really helpful. It's very container centric, and it's going to start accelerating that push that we're already seeing, getting more folks to move in that direction and also easing that transition and making it easier to go in that direction with containers.
Zane Hamilton:
Sorry, I don't mean to interrupt, but from your perspective: a lot of the conversations that I'm having now come back to the fact that there are fewer and fewer admins and more and more research scientists. In the past, it seems like the research scientists were also admins; they had to play both roles to some degree. But from the people that I'm talking to, there's really becoming a split: there's a lot more admin work with fewer admins, and a lot more scientists that don't want to be admins. They just want to do their science. They just don't have the time, and they're being pressured to do that. And I think something like this helps in that type of environment, where you can do more with less, you don't have to have as much knowledge of the system, and containerization makes it easier to deploy all those things. Is that what you were seeing as a scientist as well?
Dave Godlove:
Yeah, absolutely. That's a really great point. So my background is at the National Institutes of Health, the NIH. We didn't really do introductions at the beginning of this, but I'm sure you can go to another webinar if you want to see one.
So my background is at the NIH, and I was not at the NIH for very long; I was there basically on and off for about four or five years. And during the period of time that I was there as a support scientist, we supported the intramural NIH community, actually on the campus in Bethesda. So you had to be there working as an NIH scientist to use the cluster. And even while I was there, I think usage grew from something like a quarter or a third of the scientists on campus using the Biowulf cluster to, when I left, more like two thirds or three quarters.
Zane Hamilton:
Wow.
Dave Godlove:
So, yeah. So I think that just many, many more scientists are getting into HPC as a normal tool that you have to be able to use to analyze your data. And that topic of admins versus scientists is one that's really near to my heart, because I think it's very, very important. I mean, you cannot, and you should not, expect a biologist, or somebody who is involved deeply with their scientific discipline, to put what they're doing on pause and learn this entire new set of stuff, which is all about computers and computer science, and which might require a normal person to get a four year undergraduate degree in computer science in order to understand. It's just not reasonable. I think a lot of times, when we're admining systems, we think to ourselves, ah, these users, why can't these users be smarter?
You know? And that's a really negative mindset, I think. We want to compartmentalize. We want scientists to be able to focus all of their efforts, all of their energies, on their science, and to not have to worry, to a large extent, about how their compute needs are going to be taken care of. And so, yeah, I do think that this type of product, simplifying things the way that our stack is doing, is really helpful for fostering that dichotomy between those two different specializations. And I think that we need more of it. I think we really need to continue with this mindset that scientists can just be scientists. They don't have to be computer scientists on top of that. And we can enable that. We can help them to do that.
Zane Hamilton:
I think one of the great things about Fuzzball is the fact that it's using what's become the industry standard for configuration and management now, which is YAML. So being able to write a fairly simple YAML file that's easy to read, and then building on top of that a GUI, so that you can have templated YAML files that you can actually visualize and drag and drop things between, is going to enable a scientist to get to what they need faster. And it's easier from an administrative perspective: I don't need to teach you how to go do all these things; it's fairly simple.
Training Users [29:39]
Dave Godlove:
Absolutely. Another thing, too: when all of this containerization started to hit HPC, I think many of us had this idea that, oh man, this is just going to be for power users, and they're going to get a lot out of it, but it's going to be tough, because we're going to have to train all these users to build their own containers. Which means, in a way, we're going to have to train them on how to be Linux admins, or at least experts in package managers and experts on how to compile code, doing things like that in their containers. And then I think that many of us were surprised; at least I was, and it took me longer than it should have to see what was in front of me, which is that most of the users didn't need to build their own containers.
Once that ecosystem really started rolling, and once scientific containers became really widespread, users just started to pull their containers down, right? And it's just like, I don't need to build my own container. I need an old version of Python, Python 2.7 or something; I'll just go up and grab it. I need TensorFlow. And a lot of scientists are like, I don't really care that this thing is not compiled with AVX instructions and it's not going to run as fast. I'll just throw a little bit more compute at it, and I'll get my job done, and I'll be able to publish my paper. And so they just go up and grab this thing and pull it down and run it. And that's something that I didn't anticipate, and I think a lot of people didn't anticipate: no, it wasn't that everybody who used containers was going to have to become some sort of a Linux expert or an application installation expert or anything like that. Actually, the opposite happened. It empowered all these users to do just exactly what we're talking about: to focus on their science and not have to worry so much about how they're getting their analysis and their compute done.
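The "just pull it" pattern Dave describes is a one-liner per image; for example, grabbing an old Python or a prebuilt TensorFlow from a public registry (output file names assume Apptainer's usual `name_tag.sif` convention):

```shell
# No build step, no admin ticket -- pull a prebuilt image and run it:
apptainer pull docker://python:2.7
apptainer exec python_2.7.sif python --version

apptainer pull docker://tensorflow/tensorflow:latest
apptainer exec tensorflow_latest.sif python -c "import tensorflow"
```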
Zane Hamilton:
The enterprise went heavily toward containerization, and being able to do CI/CD-type things drove a lot of the containerization platforms out there to integrate easily into CI/CD. Are you seeing CI/CD being pulled into the scientific realm, or was it already there and we just don't talk about it as much?
CI/CD's Maturity [32:04]
Dave Godlove:
That's a good question. I don't think that CI/CD is as mature within the scientific realm as it is outside, in the enterprise and the development environments that are out there. I started in science maybe around 2007, 2008, and the scientific community was much slower to adopt even things like versioning tools, like GitHub and things of that nature. And if you think about what versioning enables you to do and what's great about it, and then you think about what a scientist is doing, you can see why it took them a while to adopt those types of tools.
So one of the great things about GitHub is it allows you to collaborate. I mean, GitHub and versioned software are all about collaboration, right? A lot of times scientists aren't collaborating very much. They're very small teams of people, maybe even just one or two individuals, who are writing code and trying to get analysis done, and the software packages that they're creating are not things that are ultimately going to be widely used by a large number of people. So going back to the CI/CD question, it's the same kind of thing. It's useful to a scientist if the scientist has a lot of different environments that they're working on, or if they're part of a big collaborative team where lots of them are working together; some disciplines in science are more collaborative than others, obviously. But a lot of times it's just a few scientists working on something, and so that might render the whole CI/CD kind of workflow a little less useful than it would be in other cases.
Zane Hamilton:
More of a hindrance than a help.
Dave Godlove:
Could be. Yeah. It all depends.
Zane Hamilton:
Okay. That makes sense. And I think we have a couple more questions that have come in. This is always fun. Is Fuzzball a scheduler, a workflow tool or something else altogether?
Workflows [34:27]
Dave Godlove:
Good question. So I guess part of it is the "something else" answer to that question, right? It's not just a scheduler, not just a workflow tool, because I talked before about how it actually intelligently provisions resources for you, depending on the environment that you're running in. So it's even a provisioner on top of being these other things. But yeah, it can work as a scheduler, and it can work as a workflow tool as well. So one of the great things about Fuzzball is that it's really designed from the beginning for you to be able to run DAGs, directed acyclic graphs: basically jobs that depend on one another. They might depend on one another in squirrelly ways, but not in circles. So Fuzzball is designed to make producing workflows very simple, even really complicated workflows. And that's just part of the way that the YAML is put together, the way that you put your jobs together when you write the YAML. So I guess the answer would be: it's a little bit of all these things, and something else in addition to that.
Zane Hamilton:
I think one of the other things I know you've been playing with a little bit lately is VDI. So being able to do some sort of VDI with Fuzzball, and it's a little bit different than what we traditionally think about for getting a desktop environment or some sort of GUI inside of it. You're not doing SSH and just forwarding X back; it's different.
Dave Godlove:
Yeah, that's another thing that I had been working on and am trying to perfect a little bit. With traditional cloud or traditional HPC I think it's a little easier, but there are ways to just forward ports back from the compute nodes that you're running on and connect to those using Jupyter Notebooks. There are ways to visualize, to get a full-on desktop if you want within a compute environment, and then to provision out more nodes from there if you want to just work and develop there, so you're developing in the same environment that you're ultimately going to be running in. Yeah, there are lots and lots of different things. It's a very, very full-featured tool with a great deal of potential.
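To make the port-forwarding idea above concrete: tools like `ssh -L` just relay a local TCP port to a port on a remote machine, which is how you end up pointing a local browser at a Jupyter Notebook running on a compute node. The sketch below is only a conceptual, single-connection version of that relay, not anything Fuzzball-specific:

```python
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then close dst."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    finally:
        dst.close()

def tunnel_once(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept one local connection and relay it to the target,
    roughly what `ssh -L listen_port:target_host:target_port` does."""
    with socket.create_server(("127.0.0.1", listen_port)) as server:
        client, _ = server.accept()
        remote = socket.create_connection((target_host, target_port))
        # One thread per direction: remote -> client and client -> remote.
        threading.Thread(target=pump, args=(remote, client), daemon=True).start()
        pump(client, remote)
```

With a relay like this listening locally, a browser pointed at `127.0.0.1:listen_port` effectively talks to the service on the remote machine.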
Zane Hamilton:
Thank you. So next question, from Mark: does Fuzzball require Rocky, Warewulf, and Apptainer? It's an interesting question we get quite often, especially when it comes to the operating system. From the operating system perspective, no, it doesn't require Rocky. Rocky is just the one we prefer because it's our product, but Fuzzball will run on other variants of Linux out there. It should actually run on any newer one; I'm not going to say any, because someone will ask about something significantly older, and Fuzzball needs a more modern kernel. As for Warewulf: in the cloud, I don't think Fuzzball requires it, because you can actually just make API calls directly out to the AWS provisioner or the Google provisioner. So I don't think it requires that. Apptainer, on the other hand. Dave, want to dive into that one?
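To illustrate the provisioning point: on-prem, nodes get booted through Warewulf, while in the cloud the same request can be satisfied by calling the provider's own API. The backend names and dispatch shape below are entirely hypothetical, just to show the idea of environment-specific backends; this is not Fuzzball's real interface:

```python
# Hypothetical backends; Fuzzball's real provisioning interface is not shown here.
PROVISIONERS = {
    "aws": "call the EC2 RunInstances API",
    "gcp": "call the Compute Engine instances.insert API",
    "on-prem": "boot stateless nodes with Warewulf",
}

def provision(environment: str, count: int) -> str:
    """Describe how `count` nodes would be brought up in this environment."""
    if environment not in PROVISIONERS:
        raise ValueError(f"unknown environment: {environment}")
    return f"{count} node(s): {PROVISIONERS[environment]}"

print(provision("aws", 4))
print(provision("on-prem", 2))
```

The user's job description stays the same either way; only the backend that answers the resource request changes.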
Dave Godlove:
Well, it requires something that we refer to as substrate, and substrate is the thing underneath it which provides the container runtime environment for everything to run. But it's not like you would go and install a bunch of dependencies and then install Fuzzball on top of that. Fuzzball comes as a pre-packaged entity that you just install, and it has substrate and all the other bits and pieces that it needs to run with it. So the technical answer is no, but the actual gist, the real answer, is that it's going to need a container environment, because everything runs in containers on Fuzzball. It's just not a separate install; it's something that comes with the package.
Zane Hamilton:
Perfect. Thank you. That's about all the questions I had today. I don't know if there's anything else you want to add. I mean, the stack is very interesting and very exciting. Every time I talk to people, there's a lot of excitement around it and the ease that it's going to bring, and then being able to do that real hybrid environment in HPC. One of the things we didn't touch on, Dave, that I'm hearing a lot of people talk about is the ability to utilize cloud resources for things they may not have. Say an older HPC environment doesn't have some of the newest, latest and greatest hardware, mainly GPUs, but those are available in the cloud. They can still execute that job and go get the GPU resource they're looking for or want, but still have the traditional HPC environment. Is that something you're seeing as well?
Dave Godlove:
Yeah, absolutely. I mean, having access to all these different resources is just putting more power at the disposal of the end user. Anytime you give the end user more power, they're going to come up with different and interesting ways to use it.
Zane Hamilton:
That's great. So if you have nothing to add, Dave, I'm going to go ahead and wrap it up. I just want to say that I really appreciate you all watching today. If you have any questions, or if you're interested in any of these pieces individually or in the entire stack, please reach out to us and let us know. These are all great projects; if you want to get involved, each one of them has its own website, and we'll have links to them below. Get involved, contribute back, ask questions. We'd love to talk to you and help you where we can. We really appreciate the time.
Dave Godlove:
Yeah, nice talking to you, Zane.
Zane Hamilton:
Absolutely. Thank you, Dave.