CIQ

Mastering HPC Infrastructure With CIQ Mountain

June 15, 2023

Join us as we unveil the latest features for CIQ Mountain. We will walk you through our HPC stack subscriptions for Rocky 8 and Rocky 9, builds of Warewulf and Apptainer for use with Rocky 8 and Rocky 9, and a sneak peek of Ascender.

Webinar Synopsis:

  • Mastering HPC Infrastructure

  • Mountain In The Past Month

  • Warewulf From A Mountain Perspective

  • Changing Nodes In A Cluster From Traditional HPC To Fuzzball

  • Automation Realm of Configuration Management in Rocky

  • Automation and Ecosystem Management Using Ansible

  • Workflow Template

  • Generating Reports

  • Sharing Playbooks

  • Investing Into the Ansible AWX Community

  • Audience Q&A

  • On-Prem vs Cloud

  • Compare to Other Tech Companies

  • Ironic or Warewulf

Speakers:

  • Zane Hamilton, Vice President of Sales Engineering, CIQ

  • Rose Stein, Sales Operations Administrator, CIQ

  • Jonathon Anderson, Senior HPC System Engineer, CIQ

  • Michael Ford, Leader, Sales Engineering, CIQ

  • Justin Burdine, Director of Solutions Engineering, CIQ


Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.

Full Webinar Transcript:

Zane Hamilton:

Good morning, good afternoon, and good evening wherever you are. Thank you for joining. At CIQ we're focused on powering the next generation of software infrastructure, leveraging the capabilities of Cloud, Hyperscale and HPC. From research to the enterprise, our customers rely on us for the ultimate Rocky Linux, Warewulf and Apptainer support escalation. We provide deep development capabilities and solutions, all delivered in the collaborative spirit of open source. Hello everyone. Good morning, Rose.

Rose Stein:

Hello. Good morning, Zane. Glad that you're back, man. I missed you last week.

Zane Hamilton:

Sorry about that. I was actually on the road. That was exciting. Got to talk to some exciting customers about some exciting things, part of which we're actually going to talk about today. So it's a good day.

Rose Stein:

It is a good day.

Zane Hamilton:

A little sneak peek, and we have some other things that we've been doing with Mountain that we would like to talk about. That's why we brought everybody here.

Rose Stein:

We are going to talk a lot about this. Super exciting, what's up? Yeah.

Zane Hamilton:

Hey, everyone.

Justin Burdine:

Hello. Hello.

Zane Hamilton:

Jonathon. Michael.

Jonathon Anderson:

Mr. Zane.

Zane Hamilton:

How are you?

Michael Ford:

Hey, what's going on?

Zane Hamilton:

Changing rooms again, Jonathon.

Jonathon Anderson:

Every day.

Zane Hamilton:

And we're moving. Love it. So, Rose, what are we talking about today?

Mastering HPC Infrastructure [6:16]

Rose Stein:

Okay, you guys, today we are talking about mastering HPC infrastructure. We're going to be unpacking the latest offerings from us at CIQ with our coolest, bestest ever (I added that part in) product called Mountain. What's really cool is that we released Mountain last month, and there are so many things that we can do with it. Every couple of weeks we're coming back in and saying, oh yeah, and we're going to add this. Oh yeah, and we can do this. Today we're going to be talking about what our next offering is going to be. I'm going to keep it a little bit of a secret surprise for right now while we do a couple of other introductions. Mr. Justin, you want to say hello to everybody and tell us who you are?

Justin Burdine:

Yeah, absolutely. Justin Burdine, been at CIQ for about a year. Prior to that I was at Red Hat doing a lot of stuff. I've been in technology all my life, since I was a 10 year old with an Apple II computer. At CIQ I work with the solutions engineering team, so we have a lot of conversations with customers, talking about all the technical aspects of our products and how they might help them, basically showing them the value that the CIQ products offer. Glad to be here, guys.

Rose Stein:

Yeah, love it. Thanks for being here. Michael, who are you?

Michael Ford:

Hey, good afternoon, everyone, I'm from Chicago. My name is Michael Ford. I'm also a Director of Solutions Engineering alongside Justin. I like to think of myself as Justin, but shorter. So, everything he can do, I can do, but not quite as high.

Zane Hamilton:

Maybe a little, maybe a little shorter.

Rose Stein:

I'm not going to be able to get that out of my mind because Justin, I've actually seen in person, we went to SC22 together last year, and he is tall. This is a tall man. You cannot tell by the little square that he is in right now, but he is, he is a lot of man. So, Michael, I have not seen you in person, so I don't know how short you actually are. You look about the same right here to me in this perfect little square.

Michael Ford:

If I don't stand up, if we don't have to stand up, I'll just let you say that.

Zane Hamilton:

Michael's not short, it's just that Justin is that tall.

Michael Ford:

Exactly.

Rose Stein:

It is. It's really weird to build relationships with people in this little square box place and then meet them in person. It is, it's a little bit jarring, but super fun. So, awesome. Thank you for that, Michael. I'll have this anticipation of someday we'll all be in the same room. Mr. Jonathon, you've been on the webinars lots with us, but tell us who you are.

Jonathon Anderson:

Yeah, my name's Jonathon, and I'm on the solutions architect team here at CIQ. Fair bit of background in HPC and Academic Research Computing. You know as the solutions architects, we just try to find what we have in the CIQ wheelhouse and in the periphery and how we can put it together to make things better for our customers.

Zane Hamilton:

Jonathon is also not short.

Rose Stein:

That is true.

Zane Hamilton:

It's a tall crowd.

Rose Stein:

You too Zane.

Zane Hamilton:

Camera in the background.

Rose Stein:

You too. The three of you, Tower of Power.

Mountain In The Past Month [9:36]

Zane Hamilton:

Jonathon, on the HPC side, we've been talking about Mountain for a long time, and it kind of started off as a repo, being able to help patch Linux. But I think it's taken on a very different life as we've gotten into this and asked what else we can do, especially in HPC. Where can we add value with Mountain? I think that's what you're here to talk to us about today. What have we put in Mountain in the last month for HPC?

Jonathon Anderson:

Yeah, absolutely. Along with Rocky Linux and Fuzzball, CIQ also supports the Apptainer container runtime and the Warewulf cluster management system. We haven't really had a place that is the official CIQ-designated place to get those. We've been pointing people either to EPEL for Apptainer or to OpenHPC for Warewulf, but we wanted something that we had a little bit more control over. So we've put Apptainer and Warewulf into Mountain. We have builds of those for Rocky 8 and Rocky 9. I think that's the first real build of Warewulf for Rocky 9 and Enterprise Linux 9 that's been out. I mean, we've been building it internally that way, just on local machines, in the past; this is the first time that it's been published that way. It's not enabled yet, and we also need to enable ARM builds for that. I'm testing it on an ARM platform and it's working there, but again, those packages aren't available yet, so Mountain will be where you can get them.

We've also already seen benefits to that. One of our customers had an experience where they discovered a bug in a corner case in Warewulf. Because we had this release mechanism that we had access to and full control over, we were able to take the patch, which has already been submitted upstream and is going through the community process, and with our customer we were able to roll that patch into a release and get it out to them via Mountain within just a few days. As the solutions architect trying to get that fix out for them, it was really great to have that release mechanism and that subscription to be able to push that out through. So we're looking forward to that. We're also doing a bit there with Warewulf node images. So instead of just, or in addition to, the RPM packages that you'd install Warewulf from, Mountain is also a container registry.

It can hold Apptainer containers or OCI-style containers. If you haven't used Warewulf yet: Warewulf compute node images, the images that get booted on the compute nodes in the cluster, can be sourced from container images. We are publishing a common base set of node images that we'll support through Mountain. Right now it's a fairly basic Rocky 8 and Rocky 9 image that you can download and import directly into Warewulf from Mountain. The idea there is that as we keep those updated, as we add things to them, you can subscribe to those images and get updates from a supported, trusted source without having to build that image yourself and without having to worry about updates. Let's say a new version of Rocky comes out; we would still build that image and update it, and you could pull the next one down.

We're also intending, these aren't in Mountain yet, but we're intending to add to that catalog of node images to target different hardware platforms. What's up there right now is relatively generic, but we anticipate node images that are targeted at, let's say, NVIDIA GPU resources or AMD GPU resources or Mellanox or Cornelis fabric interconnects, that kind of thing, baking all of that hardware support directly into the image. Some of that we might be able to have in a single image, but there are also benefits to having smaller images, reducing your boot time, that kind of thing. It'll probably be released as a catalog of images, and then you select what support you need in your image and pull that one down for your cluster.

Warewulf From A Mountain Perspective [13:27]

Zane Hamilton:

At a high level, Jonathon, could you walk me through real quick, just for those who are not as familiar with Warewulf: if I had Warewulf set up on a head node, what does that look like from a Mountain perspective? How would that go? What would I do to actually pull one of those images in and deploy it on my compute nodes?

Jonathon Anderson:

Absolutely. I'll give a little bit of background first for the RPM side of it. If you're installing Warewulf and you don't have it yet, there's a little CIQ tool that you install, and then you get an access key out of Mountain that authorizes you to pull packages down and is associated with the repositories you have access to. You plug that access key into the CIQ tool, it adds repositories to Yum, and then you can just use your regular DNF package tools to pull those packages down. On the container side, you don't even need to go through that registration process. You just use that access key as a password when you're doing either a podman pull or an Apptainer pull, or in this case Warewulf's control tool, wwctl container import. You can just give the Docker URL to that wwctl container import command, give it a name for what you want that container to be called in your local installation, hand that Mountain access key to it, and it downloads directly into Warewulf.
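Note: For illustration only, here is a minimal sketch of that flow, assuming the Mountain repositories are already enabled on the head node. The registry URL, image name, and credential variable names below are placeholders and assumptions, not the exact ones used by Mountain; check the Warewulf documentation for the registry authentication variables it expects.

    # Install the Warewulf and Apptainer packages from the enabled repositories
    sudo dnf install warewulf apptainer

    # Authenticate to the container registry with the Mountain access key
    # (variable names are an assumption; registry path is hypothetical)
    export WAREWULF_OCI_USERNAME=<mountain-user>
    export WAREWULF_OCI_PASSWORD=<mountain-access-key>

    # Import a supported node image directly into Warewulf and give it a local name
    sudo wwctl container import docker://registry.example.ciq.com/warewulf/rocky-9 rocky-9

    # Alternatively, pull the same image with podman, using the access key as the password
    podman pull --creds "<mountain-user>:<mountain-access-key>" registry.example.ciq.com/warewulf/rocky-9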

Zane Hamilton:

So making it very easy to deploy without having to actually go define your own images.

Jonathon Anderson:

Yep, absolutely. Right now Warewulf doesn't have full support for that; you can force an import over the top of an existing container, but we're working with the community to improve it. There's a desire, not just within CIQ but elsewhere in the community as well, to be able to mark containers that you've pulled into Warewulf as read-only from a local management perspective, so that the system can know that you aren't modifying them locally and can just subscribe to the upstream container and pull it down as automatic updates. That's not in Warewulf yet; we're on 4.4 right now. I could imagine that being in 4.5, maybe 4.6, as an upcoming feature.

Zane Hamilton:

I've got another question, Jonathon, from a compute node perspective. A lot of times you hear that everybody wants everything in the cluster to be the same, but I've also heard from several different people that different types of workloads may have different requirements, so not all the compute nodes are the same. Warewulf being what it is gives you the ability to change back and forth between, say, Rocky 8 and Rocky 9. You could easily switch between the two as needed, right?

Jonathon Anderson:

Absolutely. Yeah. It's relatively easy to group nodes in Warewulf and then have what Warewulf calls "profiles" that apply a certain configuration, in your example a Rocky 8-based configuration or a Rocky 9-based configuration. Moving nodes between those profiles is a single command; then you reboot the nodes and they come up in the other profile. For the most part, in my clusters anyway, the node image is more about targeting the queuing system that you're using, the scheduler, that kind of thing, and really about the hardware: having exactly the software you need to expose the hardware to the application running on that node. There might be, like you said, Rocky 8 and Rocky 9 level stuff, whole OS-level things. For precise application tuning, I personally would put that more on the application side of the conversation, where we can do that with application containers, let's say.

Rather than optimizing and worrying about what's on the node image, we keep that node image really minimal. In a traditional HPC environment, you have a lot of tools on the node: all of your MPI, all of the libraries that you need. We're moving down a path where that might be in the container, or maybe it's on a shared file system, which is also fairly typical. To support that, one of the other things that we're doing and trying to push into Mountain is actual research codes that we're going to be putting into Mountain subscriptions. You'd be able to subscribe to get the latest updates for them, and have these codes specifically built to run well on Rocky. One of the things we've noticed, and we keep forgetting it even while we were planning this out, is how many of the dependencies that are useful in scientific and research computing are already present in Rocky; even Open MPI and MPICH are there.

We were talking about how much of a pain it would be to have to build and supply the entire Python stack, the SciPy stack, and all of that's already there. What we're talking about now is enumerating applications that we think our customers would find really useful to have the latest version of at any time, pulling those down and building them as much as we can on a pure Rocky Linux base, and then having those subscriptions in Mountain. So you know you can just subscribe and pull down, let's say, OpenFOAM, and it will be built against the libraries that are already in Rocky. You didn't have to build it, you didn't have to build all of those dependencies, and you can run it immediately on your cluster or on your workstation, wherever you have a Rocky Linux system.

Zane Hamilton:

The other thing that I've seen you do before is actually from a CIQ perspective, being able to change nodes in a cluster from traditional HPC to Fuzzball.

Changing Nodes In A Cluster From Traditional HPC To Fuzzball [18:46]

Jonathon Anderson:

That's something that in general we would do with the Warewulf profile system. You would have a profile that says, this is a compute node that runs, let's say, Slurm in a traditional environment, or Fuzzball Substrate in what we would call an HPC 2.0 environment. That would bring in not just a new node image, but a set of overlays that configure that environment to connect to your Fuzzball cluster. All of that is grouped together as a single node profile unit so that you can swap nodes back and forth. We have other things that we're working on there to cross those streams and combine them, but I don't think we're quite ready to talk about that yet. We're hoping to make that transition easier and easier over time, both as an upgrade or a migration path, but maybe also as a cooperative path.
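Note: As a minimal sketch of the profile swap described here, assuming profiles named slurm and fuzzball already exist and a node named node001. The names and the reboot method are hypothetical, and the exact wwctl flags may differ by Warewulf version.

    # Move the node from the traditional profile to the Fuzzball profile
    sudo wwctl node set --profile fuzzball node001

    # Rebuild the overlays so the new configuration is served to the node
    sudo wwctl overlay build

    # Reboot the node (here via IPMI) so it comes up in the other profile
    ipmitool -H node001-bmc -U admin -P <password> chassis power cycle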

Zane Hamilton:

That's fantastic. Thank you, Jonathon. I think at some point we're going to come back and actually kind of demo some of this. Today we're just trying to talk about high level, what we're doing, what we've been working on. One of the other things that we've been doing and working on is Ascender, and I think some people have heard of it, maybe not, but I'm going to go ahead and let Justin kind of high level tell us what Ascender is.

Automation Realm of Configuration Management in Rocky [20:02]

Justin Burdine:

Yeah. Well, first of all, I'm very excited about this. Ascender is essentially CIQ's first foray into the automation realm, configuration management, that kind of stuff. I'm excited about this because it's based on Ansible, which is what I've done for the past eight years now, and I'm very excited that CIQ has made the decision to embrace this. We see Ansible as the de facto standard for automation. This is us diving into that and building out a product offering at CIQ that will stand up an ecosystem around Rocky. We're really excited about this because as we've had conversations with customers, almost always they're very excited to hear that Rocky's growing, it's very vibrant, and we have a great community. It's even more exciting that we're diving into this and now getting into automation and building up that ecosystem around the operating system. Very excited about it.

Zane Hamilton:

I think Michael actually has some things to show us. A tease of what this thing looks like and high level what we can do. A lot of this came out of the need for not only patching, but sophisticated patching. Not that Michael's going to do a bunch of sophisticated patching, but there are a lot of possible ideas of what you can do with this.

Michael Ford:

For sure. Let me make sure I have the right window. Okay. Let me know when you can see my screen.

Zane Hamilton:

Got it.

Automation and Ecosystem Management Using Ansible [21:39]

Michael Ford:

Okay, great. As Justin said, this is kind of our first foray into doing more automation and ecosystem management using Ansible against our Rocky Linux. What I really wanted to show is a quick tour of three basic things that we can do with Ascender. To start with, this is the GUI for Ascender at this point in time. If you've ever played with AWX, you might be familiar with this interface. One of the things that we look at when we talk about managing our Rocky fleet with Ascender is being able to do more sophisticated patching, using what I call least-privilege access in order to scope what each person can do. Justin might need different access from me, who might need different access from Zane, and so on and so forth.

Being able to log so we can see exactly who did what job, when it was done, did it succeed, did it fail? All that stuff is built into Ascender here. We can see a quick overview of the number of jobs that I've run over the last week or so and how many hosts I'm managing. It's not a great deal of hosts for this particular demonstration. All the resources that I can manage are here, from an RBAC perspective, and the administration of Ascender as well. Not to go into too much more detail for the sake of time, the only thing that I'll say here before I get into things is that this is not a place where we're going to write our Ansible playbooks; it's really a place to govern how those playbooks are run, how automation is done.

Just a couple of things that I want to show, and just to show what's happening in the backend, this is my GCP account. I've just got a couple of instances here. They're both running Rocky 8. The main difference between the two of them is that they have different labels: one's labeled as dev, one's labeled as prod, just to show what an enterprise might want in reality. You might have some things that you're testing out on your development servers; you might want to do some automation there first before you're running things in production, which is totally fine. I've just got two instances for now. The other thing I've got here as well is the SaaS instance of Mountain, just to show where my subscriptions are coming from for the purposes of running Ascender.

This is my Ascender instance. I'm going to start off by showing that I have an inventory that I'm going to build with my GCP servers. I've already done this, and I can run it again very briefly, but if you want to do some dynamic inventory management, I'm actually pulling the instances based on filters that I set in Ascender. The actual playbooks that I'm running live in some flavor of source control, which in my case is GitHub. Just very quickly to show: this is my GitHub repository. It's private, and I have a source credential that I'm using to ingest it into Ascender. The last thing I'll show before actually running these things is my templates. At the end of the day, these run individual playbooks, but they're pre-populated with a couple of things that we'll need for those playbook runs.

For example, if we are registering our systems to Mountain, I have to know my machine credentials to log into each of those two servers. That's already built in here, and it's encrypted. I have a key to log into Mountain to register these machines with Mountain, but it's also encrypted, so I'm using this vault password, if you're familiar with Ansible Vault, to decrypt that at runtime and register those systems with Mountain. If I wanted to change things like the verbosity, what playbook I'm running, or what inventory I'm choosing, that's all going to be here. I can pre-populate those things.
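Note: A minimal sketch of the Ansible Vault pattern described here, with hypothetical file names. In Ascender the vault password is stored as a credential and supplied at runtime; the equivalent manual commands look roughly like this.

    # Encrypt the Mountain access key so it can live safely in source control
    ansible-vault encrypt group_vars/all/mountain_key.yml

    # At runtime, supply the vault password so the playbook can decrypt the key
    # and register the hosts with Mountain
    ansible-playbook -i inventory register_mountain.yml --ask-vault-pass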

Workflow Template [25:37]

Those are individual job templates. I'm actually going to run what's called a workflow template here. This is going to take all these things: registering my systems to Mountain, choosing a particular subscription within Mountain to subscribe my machines to, and, I believe the last one, confirming the installed packages, the installed subscriptions for Rocky Linux on these machines. I could run all of these individually, but what I'm going to do is run a workflow template that strings those things together. I'm using a survey in order to select which servers I want to configure. Maybe I want to do just development for now. These are Rocky 8 servers, so even though I chose kernel 9 and kernel 8 to subscribe to, I'm just going to choose kernel 8 for this particular demonstration. Here I can actually see the variables in my playbooks that are being populated based on those survey inputs. This is going to take a second to run, but at the end of the day, what's happening here is that each of these individual playbooks is going to run.

One thing I didn't change was the option to actually pull from source control before it starts running. But this playbook is going to run first, and then these lines show that the next playbook's not going to run unless the previous one is successful, and so on and so forth. If we wanted to do some more error management, maybe if this failed, I could fork that off to something else: maybe we send an error message via email to the parties in question, or a host of other things that we might want to do. That's that. As this runs, let me pause there and ask if anyone within the webinar has any questions. We'll go through this once it's done.
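Note: Conceptually, the workflow template chains the individual playbook runs and only continues when the previous step succeeds; Ascender configures this in the workflow graph rather than on the command line. A rough shell equivalent, with hypothetical playbook and variable names, would be:

    # Each step runs only if the previous one exits successfully
    ansible-playbook -i inventory register_mountain.yml -e "target=dev" \
      && ansible-playbook -i inventory enable_subscription.yml -e "repo=kernel-8" \
      && ansible-playbook -i inventory confirm_subscriptions.yml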

Justin Burdine:

Michael, I was just going to add that a lot of customers I see use this workflow, that decision of did it fail, did it succeed, as a great way to clean up. Especially when you're provisioning stuff in the cloud. Anybody who's used any of the clouds knows they're fantastic, they're wonderful, but my goodness, trying to clean up after yourself if you have a partial install or partial build. Having playbooks that go in and clean up those systems that maybe didn't come up or have left some artifacts around is really helpful when you get a failure.

Michael Ford:

Absolutely. That's one of the things that I love about this: there's a little bit of work to do up front writing playbooks and setting all this up, but once you're done, you can see here this took a grand total of not even 50 seconds and we've done all these things. If I wanted to do some more cleanup, I could once all that's done. Very quickly, just to close the loop for this particular demonstration, I can actually see each individual playbook and see how it ran. I'm subscribing just the one system that I talked about to Mountain, and I can see all that being done and an indication that a change has taken place. I'm actually subscribing to kernel 8, and I can see that here in this particular playbook.

There I go. Successfully enabled. Let's see here. Then lastly, I'm confirming the Mountain subscriptions that I'm subscribed to. I'm going to thank Jonathon for helping me out with this yesterday, because I was not familiar with the proper commands, but we can see all that here as well. That's for just the development server. One more time, just to close that loop and show that we're only running this on one server and not both, I'm going to run just that last playbook again. If I want to confirm all subscriptions for Rocky Linux, I'm going to run that job by itself and run it against all servers. For the production server it should fail because I've not even registered that system with Mountain yet. I actually made it so that the playbook ignores errors, so it'll still continue, but I just want to show that we only ran it on a single system and not both. I'll let that run; it shouldn't take too long. Yep. For the production server, we're not registered to Mountain at all yet, so there's nothing there. For the other server we can see that it's in the same state as before; the same subscription shows up. For the production one, there's nothing. Let me stop there. I know that we don't have a ton of time left, but Justin, I don't know if you have any comments on this, or Zane, or if anyone has any questions?

Generating Reports [30:26]

Zane Hamilton:

I do, especially on this part of it. I've seen some of the stuff you've done in the past: being able to run a playbook like this, pull information, and actually shoot it over to an S3 bucket as an HTML page using Jinja2, and actually build a report. Somebody can just go build a report based on running this automatically. You could actually give an auditor access to just go look at a site and see where everything is. There's a lot of power in this, not just for actually making changes, but for reporting, making sure that things are what they should be.

Michael Ford:

It's funny, we were talking about that with a client earlier today. They were asking, what can you do in order to generate reports for this? That's exactly what I brought up. Whether it's an S3 bucket, which is what I like to do, or you want to send an email, or spit it out to a PDF, or spit it out to your favorite logging aggregator, you can do all that stuff. The data's there; however you want to present it, this can do it.
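Note: A minimal sketch of the reporting pattern described above, with hypothetical template, file, and bucket names. In practice the templating step would run as a task inside the playbook, with the collected data available as variables.

    # Render a Jinja2 template into an HTML report
    ansible localhost -m template -a "src=report.html.j2 dest=/tmp/report.html"

    # Publish the report to an S3 bucket that an auditor can browse
    aws s3 cp /tmp/report.html s3://example-audit-reports/patching/report.html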

Jonathon Anderson:

One of my favorite things about the adoption of configuration management like this, especially when you have this central place, a central repository where you're putting all this code, is that it starts a common language of collaboration between different sysadmins, different people that have to do these repetitive tasks. Now we can benefit from each other's work. Michael, I was working on the same thing while you were asking about it in Slack. It was a strange coincidence that I was actually having to do exactly what you were asking about. Now all I can think about is that I want to see your playbooks so we can compare notes; it looks like you might be doing it in a slightly better way than I figured out. We're working on a laboratory, kind of an internal testing and development environment, on the CIQ side, and we're automating a bunch of these same things. It'll be good to benefit from your experience on it.

Michael Ford:

This is the last comment that I'll make. I love that you brought that up, because we're all different folks with different experience, and honestly, maybe the way I'm doing it is better, maybe it's not, or maybe the playbooks that Justin might run are more efficient in their own way. I love that we can collaborate and bounce ideas off of each other and get even better automation. Couldn't agree more.

Justin Burdine:

One of the things I always called out when I was talking about this over the years was that when I grew up in IT, everybody I worked with had spent so much time on their Bash shells or their Perl scripts and things like that. Because they invested so much, they never wanted to share it. They were kind of building up these little empires of things just because that made them valuable to the company. What I love about this is that it's so easy to write, it's so easy to put playbooks together. I mean, the fact that I can put them together should be saying enough. It's great because it's so easy to build, it almost fosters sharing among teams.

It's like, oh yeah, check this out. It took me five minutes to put that together. Then you start adding things like ChatGPT to this type of conversation and it makes it even easier. I've loved that. I've seen it break down a lot of silos at a lot of different companies, because traditionally it's "we do this, that's all we do, and no one can touch it." Well, if you can give people controlled access to do things on your behalf through config management, so you're not having to do it yourself, all of a sudden you're starting to make your job a lot easier.

Zane Hamilton:

I think you hurt Greg Kay's feelings when you said Perl.

Justin Burdine:

I know, sorry.

Jonathon Anderson:

Mentioning Perl hurts anyone's feelings.

Justin Burdine:

Hey, I lived it.

Sharing Playbooks [34:03]

Rose Stein:

Then is sharing a playbook something that you can do, or does each admin have to figure out their own flow?

Michael Ford:

There are two different things here. You have Ascender, which is governing how playbooks are run, but usually within an organization they'll have some flavor of source control. I know that some companies use Bitbucket, some people use GitLab Enterprise, GitHub, and a host of other things. I use the word consumption a lot: the people that are consuming automation might be totally different from the people that are actually authoring and creating automation. That's really more how you're sharing playbooks. Sharing in the sense that maybe Justin, Zane, Jonathon, and I are working on playbooks together: we have access to Bitbucket in order to author those things, but then maybe the greater enterprise, maybe within CIQ there are a number of folks beyond the four of us, needs to actually consume that automation.

In a way we're sharing it in the sense that they can actually run the automation, maybe not sharing it in a way that they can write it. The other part of that, too, is that maybe my playbooks are public within GitHub. I'm just thinking about this out loud. I very rarely write playbooks from scratch, because if you Google something, and this is part of the power of Ansible, there's something out there that you can use as a starting point, and I'm way better at editing something versus starting from scratch. Same thing if I'm writing an essay; I'm the same way. It's a long answer to your question, but yes, you can share content.

Justin Burdine:

I always say when I'm talking to customers: if you're writing something from scratch, you're probably not doing it right. There's so much content out there now, especially now. I mean, it's been years and years, so there's a lot of stuff out there that you can immediately jump off from. Like I said, with ChatGPT, just start asking it how you would do X, Y, or Z. It is amazing how close it can get. It's not perfect, and you never want to just run that in production, but it can certainly give you some really good insights into where to go.

Zane Hamilton:

Jonathon, I think you had something else you wanted to bring up?

Jonathon Anderson:

To go back to the Mountain part of the conversation real quick, one thing I neglected to mention that we've put in, and it might have been easy to miss, is that we have subscriptions in Mountain for the mainline stable kernels, the upstream kernels. One of the things that's a benefit in many respects on Rocky is the stable kernel that's there, but some people want additional hardware support or more modern features from the upstream kernel that you would get at kernel.org. Those are packages that we provide through Mountain. I also just wanted to throw out to the community of people who might be watching this, who might be interested in a Mountain subscription, especially around scientific software: we're interested to hear from them, to hear from you out there. What software would be useful for you there? You can either leave a comment on the video or send us an email; I think it's info@ciq.com. Is that a place where people can get in touch with us? We'd love to hear which applications are, let's say, the most troublesome for you to stand up, so that we could provide some help by putting them pre-built and ready to go on Rocky into a subscription for you.

Zane Hamilton:

Thank you Jonathon. Now back to Justin.

Investing Into the Ansible AWX Community [37:38]

Justin Burdine:

I'm glad we touched on that. I think the thing I'd want to wrap up with, as we bring this to a close: as you saw, what you see there is us investing in the Ansible AWX community. Obviously, Ascender is based on AWX. That's our commitment as CIQ; we're really investing in this, we believe in it. A lot of people look at this and think, okay, well cool, that's AWX. What is the real difference here? You can go get that now for free, and that's fantastic. As I will always say, one of the biggest reasons I came to CIQ is because I think we're on the forefront of re-imagining how we do open source sales and how we do open source support.

I guess the things I wanted to call out, Zane, were really calling out our support. For those who are looking for an AWX solution, or an automation solution based on Ansible. Ascender is here and we offer support on that platform. The support itself, just to highlight what we do here at CIQ, is that we essentially decided early on that we weren't going to do a tier one, tier two support model. We basically wanted to get people from having a problem, and needing help with it to a solution as fast as possible. Our support intentionally was built so that you're getting on the phone with somebody as quickly as possible to help you solve that problem.

It's a big differentiator, especially when you're looking at other vendors that are out there. We're young, we're hungry, and we want to really show this new model will work. That's our support model, and that runs the gamut across all of our products. The other thing I want to point out is how we're looking at Ascender differently. One of the biggest problems I had in years past was when you start talking about automation at an enterprise level or an HPC level, on a bigger scale than just a few hundred machines, you start scaling into pricing that honestly makes it just unattainable. You look at the value you're getting from automating, and at some point that scale gets out of balance.

We thought, well, we're already doing a really great job of providing per-admin pricing for our Rocky products and for the other things that we're doing. Why don't we try to apply that to automation? What this is really saying is that if you have a couple of hundred machines, maybe buying per node makes sense from a certain standpoint, but what we're really selling here is per person. It means that the people who need to access Ascender can log in; you license those people and then they can have an unlimited number of nodes. We're really excited about that because we've heard so many people over the years saying, I want to do automation, but I can't do it at the scale I want, and we think this is really going to open up the doors for that.

Audience Q&A [40:32]

Zane Hamilton:

That's a great point. Thank you for bringing that up. I think now we are a little bit past the time, but let's do Q&A. We have a lot of questions, not a lot. We have several questions.

Rose Stein:

It's time for Q&A, can we have a song for it as well, a little dance? There was a time that I taught high school and I allowed my students to do their final presentation right at the end of the year. You have a big test, you have a presentation. I said, if you want to do an interpretive dance as your final presentation, you can do that.

Zane Hamilton:

You found high school kids that would do it?

Rose Stein:

No. Not one. I probably wouldn't have done it then either, but I would sure do it now. I mean, obviously I just did it right here. Back to the question. Thank you very much, Greg. This can be expanded beyond just servers, right? Network, security, cloud, et cetera.

Michael Ford:

Yes. I think we can all answer that question, but yeah. For the purpose of this discussion.

Justin Burdine:

I think what makes me really excited is exactly that. I think we are really good in the Linux space. We're very good in the HPC space. This allows us to have conversations that are above and beyond that, even expanding into automating and configuring Windows systems, network and security and cloud, and the list goes on. When you're talking about automation with Ansible, I've always said it's really about what you can dream up. I've always described it as being a box of Legos without a picture. It can do whatever you want. You've got a whole bag of Legos there. Let's dream and build. That's what's really exciting to me. That's why I've loved doing this over the years: to really dive in and explore with customers and go do exotic automation.

Zane Hamilton:

Absolutely, if you can SSH to it, you can automate it.
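Note: In that spirit, a minimal sketch: if a host is reachable over SSH and listed in your inventory, Ansible can manage it. The inventory file name here is hypothetical.

    # Verify SSH connectivity and Python availability on every host in the inventory
    ansible all -i inventory.ini -m ping

    # Run an ad-hoc command across the whole fleet
    ansible all -i inventory.ini -a "uptime"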

Rose Stein:

All right, David, hello. Thanks for joining us. That's Ascender, correct?

Justin Burdine:

It is.

On-Prem vs Cloud [42:42]

Rose Stein:

Maybe this is a silly question, because I'm totally new to automation, so I'm just learning this stuff guys, I appreciate your patience with me. I know that a lot of customers when we're talking about some of our other products and services, they're like there's a difference between on-prem and cloud. Is there a difference with an automation platform or an automation tool when you're talking about on-prem versus cloud?

Zane Hamilton:

It depends. There are two different ways to look at that. From the perspective of where the actual tool itself is deployed, we can deploy it in the cloud, we can deploy it on-prem; that doesn't matter. Then from the automation side of it, what can you automate, on-prem or cloud? The answer is yes. You can have a deployment on-prem that can deploy stuff into the cloud or automate in the cloud and on-prem, or you can do the reverse: you can have your Ascender in a cloud and actually control your network or assets inside of your on-prem environment, as long as you have connectivity that allows that type of thing to come in. We have customers that do it both ways.

Compare to Other Tech Companies [43:48]

Rose Stein:

Awesome, thank you for that. We did have another question that popped up, bring it back. Arthur, hey, thanks for joining us. How would this compare or contrast relative to tech like Terraform from HashiCorp?

Zane Hamilton:

Michael? I feel like we've answered this one a couple times, so I'm going to let you have it.

Michael Ford:

I'll say this, and I have a philosophical take; I've heard this from other clients too: they're both great. I obviously have a lot of love for Ansible, and I think Ansible is capable of doing everything. If clients choose Terraform, usually it's because they want to use it for instantiation of infrastructure, because it saves state, which is great. What I will say is that if you're looking at configuration and also setting things up, you can do that with Ansible, too. I've seen a lot of customers in the past that use nothing but Ansible, and that's great. There's also nothing stopping clients from choosing both, because maybe a customer thinks Ansible is great for certain things like configuration and thinks Terraform is great for instantiation of resources, and that's totally fine. You're not forced to make a binary choice. That's my own two cents as far as philosophy goes.

Jonathon Anderson:

That's great. Even before Arthur came on, I was going to ask exactly the same question. I have never personally used Ansible to stand up resources to initialize instances in the cloud. I've always used something like Terraform or Pulumi for that. At a certain point you cross a threshold and you need a tool like Ansible to do actual configuration management and orchestration of the systems running on those nodes.

Zane Hamilton:

There are a lot of customers, a lot of people that we've talked to over the years, that do just that: they will either have Terraform call Ansible, or Ansible call Terraform, however you want to go about it, whatever works for you, or they do Ansible for all of it. Justin?

Justin Burdine:

I think the only thing I was going to add is that I don't know if I've got a percentage, but it seemed like it was a large amount of customers back in the day that were doing both. That I think seems to be a natural state just because you get the advantages of both worlds. That's the only thing I wanted to add.

Michael Ford:

Just to take it home: if someone's so inclined, and I've done this on a number of occasions, there are Terraform modules too. Exactly what Zane said, calling Terraform from Ansible, I've done that more times than I can count. If that's something a client chooses to do, then they can do that as well.
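Note: A minimal sketch of one common pattern mentioned above: use Terraform to stand up the infrastructure, then hand off to Ansible for configuration. The file and inventory names are hypothetical.

    # Provision the instances and networking with Terraform
    terraform init && terraform apply -auto-approve

    # Then configure the resulting hosts with Ansible
    ansible-playbook -i inventory/gcp.yml configure_rocky.yml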

Rose Stein:

I'm going to throw a curveball here guys. Can you get mad at me later?

Justin Burdine:

She's going off script.

Zane Hamilton:

I've got my Slack window open here already.

Michael Ford:

Off the reservation.

Rose Stein:

Ascender, that is a new product of ours. We just got a beautiful view of the different capabilities. We're talking about Mountain, and eventually it's going to be accessible via Mountain. We talked about cloud and on-prem and all the different things that you can do. It's very exciting. I know that we're going to get this question, so we might as well just answer it: is that going to be open sourced?

Zane Hamilton:

Ascender actually is open source today. It's Ansible AWX; we built off of the open source piece, so the answer is, it already is.

Rose Stein:

Cool. If people are using that and they're like, Hey, I actually do need some support. They can call us and make that beautiful relationship there. It's similar to the other open source projects that we support?

Zane Hamilton:

Yes, absolutely. Open source first, Rose. Open source first.

Rose Stein:

Open source first.

Zane Hamilton:

Another question that popped in.

Ironic or Warewulf [48:00]

Rose Stein:

Yeah, more questions. Peter, hello. Thanks for coming back, we saw you last week. Ironic or Warewulf to deploy and manage bare metal in an HPC environment?

Jonathon Anderson:

There was a point in my past where I joined a new team and we were running an old, old version of a fork of Warewulf (the new one's so much better) that we were looking to move off of. We were looking for alternative platforms that might have good upstream support, and one of the things I looked at was OpenStack Ironic. To be fair, it was very new; I don't even know if it was beta yet at the time. I was really excited by it and I thought, this is great. We managed to get a little five-node cluster going, but OpenStack is just such a big deal to run and manage. It is so easy for it to go wrong and so hard to keep it on the rails.

We had another guy that came in, and we wanted to do something in that space. It's a university, and we did a little project on the side, but it took that guy's entire summer just to get an OpenStack environment running, and then it's a big pile of state that you have to care about. If things go wrong, it can be a pain to make them go right again, especially if you're not in it day in, day out. What I can say about Warewulf, especially Warewulf 4, is that it is so simple, it is so streamlined; I wouldn't want to run an HPC cluster with anything else. As long as I'm at CIQ, and even if there's ever a day after CIQ, I tell you what, I will still be running Warewulf on my HPC clusters, because it is so good right now. It's exactly the right level of abstraction, the right balance of toolbox versus doing it for you, that it should be.

Zane Hamilton:

That's interesting. OpenStack's making a comeback in a big way. It really is, I hear that a lot. It is a full-time job. Not just one, it's usually an entire team of people just to set it up and keep it running.

Jonathon Anderson:

One of the things we talk about at CIQ a lot is trying to bridge the enterprise space with the academic space. There's been a divergence in the past; they've been operating in such distinct communities for a long time, and we're trying to bring some of that knowledge together, some cross-pollination. OpenStack is like a whole third community where you have a whole other group of people. It's as if you took everything that you might have to know to run Google's entire compute infrastructure or Amazon's entire compute infrastructure, all the secret knowledge they have, and made an open source version of that. But you need a team, a whole group of people, to do it. And it's cool, but I don't have time for that.

Zane Hamilton:

Agreed. And yes, OpenStack is a box of moving targets. It's a great statement. Thank you for that.

Rose Stein:

That was awesome. All right, you guys, this is really great information. Thank you everyone who is watching. Thanks for showing up. Thanks for being here. Please reach out to us: you can go to our website or email info@ciq.com, submit a form, say hello, we would love to chat with you. Do you guys have a final remark? Anything where you're like, wait, I just want to say this one last thing before we go?

Jonathon Anderson:

Tell us what software you want in Mountain. We'll be excited to hear it.

Rose Stein:

What do you mean? What software do they want in Mountain?

Jonathon Anderson:

We want to build the scientific software and provide it through Mountain and we would love to know from our viewers what software they would love to see in it.

Rose Stein:

I like it. Make sure you let us know. You can leave a comment here if you're watching later, because we know we're live now and that's fine, or you can reach out to us on our website. Make sure that you like, that you click, that you share, that you leave a comment. We'll be back here next week, and for the next several months we've got a lot of cool things that we're going to be floating out your way and presenting to you. Thank you so, so very much, and we'll see you at the same time next week. Have a wonderful day.

Zane Hamilton:

Thanks everyone.