CIQ

Peridot Deep Dive

August 18, 2022

Webinar Synopsis:

Speakers:

  • Zane Hamilton, VP of Solutions Engineering, CIQ

  • Neil Hanlon, Infrastructure Lead, CIQ

  • Mustafa Gezen, Senior Software Engineer, CIQ

  • Skip Grube, Senior Linux Engineer, CIQ


Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.

Full Webinar Transcript:

Zane Hamilton:

Good morning, good afternoon, and good evening, wherever you are. We appreciate you joining us here at CIQ for another webinar. This week we're going to be talking about Peridot, the new Rocky Linux build system, and I have Neil, Skip, and Mustafa here today to talk about this. Neil and Skip, we've seen you quite a few times. We know who you are, but I'll let you introduce yourselves quickly. Let's go to Mustafa. I don't remember if you've ever actually been on before. You probably have.

Introductions Of Panel Members

Mustafa Gezen:

I don't think I've actually been on a webinar before. 

Neil Hanlon:

He tries to avoid it as much as possible.

Zane Hamilton:

Tell us about yourself. Mustafa, who are you? What are you doing here?

Mustafa Gezen:

Can you hear me?

Zane Hamilton:

Yeah.

Mustafa Gezen:

Hello everyone. I am Mustafa and I'm a software engineer at CIQ and I'm also leading the release engineering team together with Louis Abel.

Zane Hamilton:

Thank you. Skip, I'll go to you next. Tell us who you are.

Skip Grube:

Hey, I am an engineer at large here at CIQ. I do all kinds of random things. I'm also on the Rocky Linux release engineering/dev team, whatever you want to call it. I do a lot of random things there too, mostly package builds, and I have a lot of fun.

Zane Hamilton:

That's awesome. Neil, welcome back.

Neil Hanlon:

Hey, I'm Neil Hanlon. I am a potato engineer and administrator of pens and office supplies here at CIQ. I'm also one of the team leads for the infrastructure team for Rocky Linux, along with Taylor Goodwill.

Zane Hamilton:

Thank you very much. So let's go ahead and dive right into it guys. Let's talk about Peridot. I think the first question that a lot of people have whenever I get involved and they start seeing this stuff is, what is Peridot?

What Is Peridot?

Mustafa Gezen:

Peridot is a new cloud native build system, specifically targeted at RPM building. I don't know if you want history?

Zane Hamilton:

Absolutely.

Mustafa Gezen:

When we first started out, we did use the available tools and we found out that it worked, but it was mostly targeted at if you're actually developing the OS itself. It lacked a bit of separation that we wanted and how we wanted to manage the build system itself. We wanted to create something that we could run on modern technology and easy for us to use and easy for us to manage. Then Peridot came along.

Neil Hanlon:

I think one of the really important points that Mustafa touched upon was that we needed separation and an ability to hand off the building of these packages and the management of their deployment into repositories, so the community can take part in building Rocky Linux from the ground up, including patching things. In the future, we want to get rid of a problem that exists in any open source organization where you're building things: only some people know how they're built. There's an order in which packages need to be built to produce the right outputs. Additionally, there are some secure bits that need to be kept secret so that the operating system itself stays secure, not only from a booting perspective, but from a software supply chain perspective as well. We're hoping that Peridot will allow us to iterate on the security practices the organization itself is built on, and enable the community to produce what they need with Peridot.

Zane Hamilton:

Go ahead, Skip

Skip Grube:

I'm going to do mine stupid simple: Peridot imports RPM code and it builds RPMs. That sounds simple, but actually there are tricks to it. It's a little more complicated than that, but that's what Peridot does, in an organized way. We import code and we build.

Zane Hamilton:

That kind of answers the next question a little bit, but what is Peridot not?

What Peridot Is Not

Skip Grube:

I'm going to continue. Hang on, Mustafa. Whoa there, whoa there. This is the number one thing people ask whenever I talk to them about it: Peridot is not a general-purpose build system for throwing together any language you pick. It's not a Python build system, or a PHP or a C or a Go build system. It's very much geared towards the production of RPMs; that's what it was purpose-built for. Its scope is basically producing Rocky Linux RPMs.

Zane Hamilton:

Thank you, Skip. Mustafa, did you have something to add to what it's not? 

Mustafa Gezen:

No, I think Skip has explained it perfectly.

Skip Grube:

Sorry, Mustafa. 

Mustafa Gezen:

I gotcha. Typical Skip. 

Zane Hamilton:

I know there are other build systems out there, and you kind of touched on it a little bit. I think all three of you actually did. But tell me more about why Rocky Linux uses Peridot instead of something like Koji. Neil, I'll start with you.

Why Rocky Linux Uses Peridot

Neil Hanlon:

There's a lot of history in tools like Koji and its related family of distribution forges, I guess that's the best term to use there: systems that take sources, import them, build them, and release them somewhere. Back when those tools were being developed, decisions were made based on the technologies available at the time, and some of those are not how we wanted to deploy and scale the infrastructure building and deploying Rocky Linux. One of the big ones for me, from an infrastructure perspective, is the hard requirement on NFS as a backing store for all of the packages. In Koji, you have to store every single package, and the results of all of those package builds, on an NFS file share (I guess it doesn't have to be a single one) that needs to be available on every single builder.

You can imagine that when you want to scale across multiple regions, and also have multiple architectures running builds in disparate locations that may have increased latency or other issues, you run into structural challenges: having the data available where it needs to be across continents, and synchronizing it atomically in a way that allows builds to be performant. What we've been able to change with Peridot is eliminating the single file store altogether by leveraging an object store as the backend, which scales horizontally very easily and allows us to replicate that data and become multi-regional and multi-zonal in a very easy manner. That's combined with some other tools that I'm sure Mustafa wants to talk about, like YumRepoFS, a microservice part of Peridot that allows us to serve repositories for builds. When we're talking about Peridot and YumRepoFS, it's a snake eating its own tail: it feeds itself packages from itself. That's how we can bootstrap the packages. Those tools allow us to do highly scalable builds, up to two or three thousand packages building in parallel at a time, we've found with our current infrastructure, on multiple architectures.

Zane Hamilton:

That's fantastic. I'm sure Mustafa, you have, you have things you want to say.

Mustafa Gezen:

Definitely. Koji is, again, an amazing system that we relied on for a long time when we first started out building Rocky Linux. The problem is that it doesn't really manage the whole process; it still leaves a lot of room for the caller and user to take some action. Peridot, on the other hand, has a separation into projects, and each project has a catalog. Peridot supports source-type packages, packages that you yourself manage and build, but it also supports pulling in RPMs or sources from a separate project or target. It enables us to define these packages beforehand, before we start building: this one we want to import, this one is something we want to build.

It also manages the importing process. We have a set structure, and both the source and the target follow the same structure. A developer can't just go in and build random pushes or commit anything from the source. We follow the same import process within Peridot, and once we click import, Peridot does the import and can apply patches that we declare beforehand. It's all traceable within it, and then it pushes that to our GitForge. From that it pins the hash, so if something changes, if the hash has disappeared, you can't just build something else. You need to re-import and reapply the patches. It follows a traceable route of content and sources that we've built. Once we reach that point and start building the RPMs, it also stores them and manages them, and manages the versions internally.

It's not really visible in the UI right now, but internally Peridot actually manages versions; you can look at the history of versions, and we can also manage how we publish them. We combined multiple tools into one to make it easy to use. We have a simple system of authorization. Also, we haven't really talked about it yet, but modules are also something Peridot supports, combined into the same system, while with the old infrastructure we would have to use multiple tools to build these special types of package groups.
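The hash-pinning idea Mustafa describes can be sketched in a few lines. This is purely illustrative, not Peridot's actual code: record a digest of the imported source, and refuse to build anything whose source no longer matches the pin.

```python
import hashlib

def pin(source: bytes) -> str:
    """At import time, record an immutable digest of the source content."""
    return hashlib.sha256(source).hexdigest()

def verify(source: bytes, pinned: str) -> bool:
    """At build time, only allow the build if the source still matches its pin."""
    return hashlib.sha256(source).hexdigest() == pinned

# Hypothetical source payload, invented for the example.
original = b"Source0: bash-5.1.tar.gz\n"
p = pin(original)

assert verify(original, p)          # unchanged source is allowed to build
assert not verify(b"tampered", p)   # changed source must be re-imported
```

The point of the design is that a changed hash forces a fresh, traceable import rather than a silent rebuild from different content.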

Zane Hamilton:

Let's get into that a little bit. Before you say anything, Skip, I just want to ask another question. We've talked about the project a couple of times, so what exactly is in the scope of Peridot? What is a project?

Peridot Scope Of Project

Mustafa Gezen:

A project is basically a distribution: a catalog of packages, an organization of repositories that Peridot manages for you. When we're building Rocky Linux 9, we have a Peridot project, and that becomes the distribution. When we want to build a SIG, for example, that itself is a separate distribution that we can add on top of the core.

Zane Hamilton:

Okay, thank you. Skip, now you're drinking. I'm going to call on you. You have something more to add.

Skip Grube:

I have my tea, I'm ready for anything. What Mustafa said was absolutely clear. It goes back to Peridot being tailored. It's very useful in general, but specifically for Rocky Linux's needs, where, as he mentioned, you can have your own packages or you can import automatically from another source. Obviously, that's of huge interest to Rocky Linux, which imports the Red Hat Enterprise Linux sources and then builds them in an automatic way, because we don't want Skip, for example, fat-fingering Git commands to import Red Hat sources and screwing up versions or whatever. Because, quite frankly, I'll do it. It's much cleaner to have it done in an automatic fashion, where either an automatic API call or the click of a button means the latest Apache from Red Hat Enterprise Linux is imported and ready to build. That traceability, and, I don't want to say the automation, the cleanness of it, is super important to us as a project. It saves a lot of time.

Zane Hamilton:

Thank you, Skip. This is probably a question back to Mustafa, who brought up modules. Can you give us a little bit more of an example of a module? How does that work? Can you give me an example so that I can understand a little better?

Peridot And Modules

Mustafa Gezen:

Yeah, of course. Modules are a group of packages that usually depend closely on each other. They specifically request that they are built together and released together as a group. They can also declare ordering. When we're rebuilding a module, if something changes, then most of the time we have to rebuild a lot of other components as well. Modules can declare that: for example, this package should be built first, and the next set of packages depends on it. If the base package changes, we don't have to go hunting for the other packages that are closely connected to it and rebuild them; it's all declared, and it's released together. On the user side, users can enable modules and use these package groups. They also get separate versions of the same module: both a package group that gets built together, and multiple versions existing at the same time within the same distribution.
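The declared build ordering inside a module boils down to a dependency graph that gets sorted topologically before building. A minimal sketch, with invented module data that is not Peridot's real schema:

```python
from graphlib import TopologicalSorter

# Hypothetical module components mapped to the components they must be
# built after. "perl" is the base package; the others declare it as a
# prerequisite, so a change to "perl" forces them all to rebuild.
module_components = {
    "perl": set(),
    "perl-App-cpanminus": {"perl"},
    "perl-DBI": {"perl"},
}

# static_order() yields a valid build order: prerequisites first.
order = list(TopologicalSorter(module_components).static_order())

assert order[0] == "perl"   # the base package is always built first
print(order)
```

Because the ordering is declared in the module metadata, the build system never has to discover these relationships at rebuild time.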

Zane Hamilton:

That's great. Thank you, Mustafa. I think we have some stuff popping in. Before we get to Tron's question, I do see one from Sylvie: what language is Peridot written in? What is used for data storage?

Peridot Language And Data Storage

Mustafa Gezen:

All components are written in Go, and the UI is in TypeScript. We use SQL for our database, and we use S3 or any kind of object storage for storing the RPMs themselves.

Zane Hamilton:

Okay, thank you, Sylvie.

Neil Hanlon:

There are a couple of pieces around Peridot at the moment that use Python to do some conversions of catalog options for us, for imports and builds. On the other side, when we export back to the mirror infrastructure that delivers the mirrorlist information to users of Rocky Linux, there is some Python used in a toolkit to help with generating and downloading all of the repositories that are needed for the distribution itself.

Zane Hamilton:

Neil, you talked a little earlier about scalability, and this thing is kind of a Kubernetes platform. Obviously, that implies containerization. How is that different from the others? You mentioned that Koji has to use an NFS file system and has to have everything everywhere all the time. How does the containerization part make this scalable, and how does that work?

Scalability Of Peridot

Neil Hanlon:

We've chosen to implement this on top of Kubernetes for a number of reasons, but one of them is the separation of concerns and the security practices that are very well ingrained into Kubernetes as a structure for running your applications on top of. We look at how the best uses of these technologies are done and try to adopt those best practices into how we build these RPMs. At the end of the day, we're not doing anything really different from what Koji does, and Mock as a result; we use Mock inside of Peridot to run the builds and invoke rpmbuild. But we're doing it in a way that doesn't require having all of those packages everywhere all at once.

We automate the workflow in a way that permits us to download and install the artifacts into these base containers for the architectures that need to build, without actually having them on the systems themselves. This not only allows us to build for multiple architectures (for example, we run in AWS as well as in various physical and virtual clusters in a couple of different areas for other architectures), it allows us to scale those out with auto-scaling groups for cost-saving reasons, only using what we need to use. And it gives us extreme flexibility when we have something like a module, where you might need to rebuild one package because it got updated, but actually you need to rebuild all of Perl, in a specific order. It allows us to take advantage of the cloud to do these updates very quickly and resolve any issues when they come up in a much more reasonable manner.

Zane Hamilton:

Thank you. Mustafa, I think you touched on this earlier: if you have a package that needs to be patched as it's getting built, something's changed while you're going through the build process, Peridot can handle that. At what point does that happen? How does that happen? Does it notice during the import that something has changed and has to be applied? What does that look like?

Patching Packages With Peridot

Mustafa Gezen:

We have an architecture called OpenPatch, where we also open source our patches. We never do any manual patching. We always declare the patches beforehand and check them into our GitForge. Before anything can be patched within Peridot, it actually has to be accepted through a patch repo within our GitForge. It's all for traceability purposes, so no one can just create a patch and build something in Peridot without it being visible to the engineers.
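The declarative patching workflow can be illustrated with a toy model. Everything here is invented (the config format, file names, and package names are not OpenPatch's real structure); the point is only that patches are listed up front in reviewed configuration and applied mechanically, never ad hoc:

```python
# Hypothetical patch declarations, as they might be checked into a patch
# repo on a Git forge and reviewed before any build ever sees them.
declared_patches = [
    {"file": "0001-rebrand-to-rocky.patch", "applies_to": "httpd"},
    {"file": "0002-fix-default-config.patch", "applies_to": "httpd"},
]

def patches_for(package: str) -> list[str]:
    """Only patches declared (and accepted) beforehand are ever applied."""
    return [p["file"] for p in declared_patches if p["applies_to"] == package]

assert patches_for("httpd") == [
    "0001-rebrand-to-rocky.patch",
    "0002-fix-default-config.patch",
]
assert patches_for("bash") == []   # undeclared packages are imported as-is
```

Because the declaration lives in version control, every applied patch is traceable to a reviewed commit.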

Zane Hamilton:

Excellent, thank you. So, plugins. Obviously, this thing is an open source tool, and we get this question asked quite often: what else can I do with it? Are there plugins for it? Can I write plugins for it? Can I plug into it?

Plug-Ins And Peridot

Mustafa Gezen:

There is very minimal plug-in support right now. It has plug-in support, and we do plan to expand it; there are probably a lot of things you could do after building something, and you could probably manage more of the catalog process around it. That's planned, but the current plug-in support is very minimal.

Zane Hamilton:

Thank you. 

Neil Hanlon:

One of the things I've been thinking about and kind of noodling on is how to hook into the different aspects of a build to keep an eye on the status of a package as it comes through that process. We're looking towards more automation, where a package update comes in and can be imported and built into a staging area that we can then pull from. I think we'll get into this a little with the look-ahead and how we're going to be trying to track CentOS Stream. With the plug-in functionality there, and being able to hook in at different points within the process, the possibilities are, I guess, endless. A special interest group, for example, might want notifications if a package build fails, or to open a ticket when a package build fails. Those sorts of things are what we're looking to implement in Peridot, so that it can be a true central hub for all the building that happens inside Rocky Linux.

Zane Hamilton:

That's great, thank you. We have a question from Tron: does the concept of module streams bother Peridot, and can it support modules?

Modules Support From Peridot

Mustafa Gezen:

No, it doesn't bother Peridot at all. It supports modules just like the Module Build Service does, and it does that in a more manageable way. We did away with some of the bootstrap steps of MBS and handle that in a more manageable and traceable way. You can build modules, and you can even create your own modules, without killing yourself.

Neil Hanlon:

That's a great point. I had previously spoken about a snake feeding its own head; that's exactly what Peridot does with the module situation. Koji, which is still used for building modules for our Rocky 8 distribution, for example, has module builds invoked by a separate process, the Module Build Service, which helps Koji understand how to do the build ordering and the dependency mapping between all the different packages that might exist in a module version. Peridot understands that natively, so when it detects the package, it just goes ahead and fires off everything it needs to do to update those packages when necessary, or just build them for the first time.

Zane Hamilton:

That's great. Thank you for the question, Tron. On to the next question: this whole thing seems to have been moving incredibly fast, and there seem to be constant changes and more features coming. What are the future plans for Peridot?

Future Plans For Peridot

Skip Grube:

Something that I've been working on personally, sorry, I don't mean to steal you guys' thunder, is the setup and the documentation around it. Peridot is, as I put it before, "oh, it's real simple, it just builds RPMs and imports stuff." But as you've heard, especially from these guys and a little from me, it's hard. It's not as straightforward as all that, because there are a lot of technical complications and so forth. There's a lot of clean documentation coming out that will cover what we're going over here: what is Peridot? What can it do? What can it not do? How do I access it? What exactly can I do with it? That sort of thing. Also how-tos: how do I import packages? How do I create a catalog for my project? That's one of the things I'm excited about, because I'm big on documentation and reproducibility.

Zane Hamilton:

I'm glad somebody is. It's typically not a developer thing, for sure. Mustafa, I think you had something you wanted to say about the future of Peridot.

Mustafa Gezen:

We definitely want to expand on plugins, and on being able to set up SIG projects the way the SIG leaders or SIG groups want them set up, helping them customize Peridot more to their liking. We also want to make more UI improvements, to make more features available from the UI rather than only the API; I'm using the API more, so some features are lacking from the UI, but we want to expose more features on the API as well. Also, we're planning to make some of the builds more stable. If you're building out of multiple sites, sometimes we can have some trouble, and that's usually expected: doing multi-cluster, multi-site builds and coordinating it all together is a difficult problem.

We made Rocky Linux 9 happen, and it's very complicated under the hood. The deployment process, managing Peridot, has actually been amazing, and it has helped us get everything under one roof. You don't even realize that you're invoking so many sites that connect back to it, and the sites don't even need access to an NFS file share. They don't really care, in the end, how we combine everything together and generate the correct metadata. The old problem was that all hosts needed access to NFS, with the latency and the problems that brought in. That's now completely gone. You can run builds from anywhere, multiple sites, thousands of hosts, without really caring about where the data itself is.

Zane Hamilton:

That's really cool.

Neil Hanlon:

We exchanged one set of problems for an entirely different set of problems. That's what I think being cloud native is all about: you get rid of your classical infrastructure problems and replace them with new infrastructure problems that you can hopefully solve, or at least manage in a different or better way. Actually, on what Skip was saying about documentation, one of the things I'm looking forward to is writing up some use-case examples and documentation blog-style things on how you as a user might want to run Peridot and use it to customize an RPM, to rebuild something, or even just to understand how Peridot works from a building perspective by rebuilding a simple package like Bash. Those are things I'm looking forward to on the documentation front. On the technology front, like Mustafa said, getting the UI to a place where it's accessible for a lot of people, without a whole lot of training or usage instructions on how to get around and manage things in there, will be really good.

Zane Hamilton:

Excellent. I think we have a question that just came in. Tron, thank you very much. He's asking about YumRepoFS; he heard it mentioned earlier in the Peridot conversation and wants to know what it is, and why it wasn't called DNFRepoFS.

What Is Yum Repo FS?

Mustafa Gezen:

Yum is the old name; it just stuck, and they're usually called Yum repos. That is why it's named YumRepoFS. What YumRepoFS is, is how we replace NFS. It's a completely virtual Yum server. When we push updates to YumRepoFS, we don't need any packages on disk to recreate the metadata. After builds, we usually cache and create metadata beforehand and store it in the database or S3. Then, during a YumRepoFS update, where we can swap packages in and out at will, different versions or builds, YumRepoFS updates the metadata entry or primary lists to swap in the necessary information, without having to go through the whole process of computing everything and creating a new repo, which is also a very time-consuming task.

We get very fast repos without ever having to pull all the RPMs onto disk. As we said before, we store the RPMs in S3, and we use a small trick: the DNF metadata references an endpoint, but that endpoint just pre-signs an S3 URL and forwards you to it. In theory, we only ever store the RPM data once, but we can use it in multiple repos without ever having to duplicate it. There's only one RPM. That means you can create thousands of repos, and the only thing you're storing is the metadata, which is tiny compared to copying the RPM around all the time.
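The metadata-swap idea can be modeled in a few lines. This is an invented data model, not YumRepoFS's real implementation: per-package metadata entries are cached at build time, so publishing a new version only replaces one entry instead of regenerating the whole repository:

```python
# Hypothetical cached "primary" metadata: one entry per package, pointing
# at object-store locations rather than files on local disk.
primary = {
    "bash": {"version": "5.1.8-4", "href": "pkg/bash-5.1.8-4.x86_64.rpm"},
    "curl": {"version": "7.76.1-14", "href": "pkg/curl-7.76.1-14.x86_64.rpm"},
}

def swap_package(primary: dict, name: str, entry: dict) -> dict:
    """Publish an update by swapping one entry, not rebuilding the repo."""
    updated = dict(primary)
    updated[name] = entry
    return updated

new = swap_package(primary, "bash",
                   {"version": "5.1.8-5", "href": "pkg/bash-5.1.8-5.x86_64.rpm"})

assert new["bash"]["version"] == "5.1.8-5"
assert new["curl"] == primary["curl"]   # untouched entries are reused as-is
```

Contrast this with regenerating a repository from all the RPMs on disk, which requires every package to be present locally and scales with the size of the whole repo rather than the size of the update.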

Zane Hamilton:

Absolutely. Thank you. Also, I'm going to say YumRepoFS is way easier to say.

Skip Grube:

You're right. I will say Yum up to my dying day. I will say Yum. It rolls off the tongue so much easier.

Zane Hamilton:

It does.

Skip Grube:

Basically, it's something that, when I learned it (I didn't write Peridot, I'm mostly a user), I was like, oh, you guys are tricky. Normally, when we access a Yum or a DNF repo to download or pull RPMs from it, we hit XML files. We download those XML files, and they have information about our packages and where they're stored. Peridot is super tricky, because when you're hitting that XML file, it's not a real file. It's being generated for you on the fly. And because we generate it on the fly, it can be whatever we want it to be.

We can add and remove RPMs from different repositories and, what I call, slice and dice them as we want. There's no limit to how many times you can mix and match different repositories. And as Mustafa said, when you actually go to download an RPM, it's just a redirect to S3 to get you your RPM. It's, like I said, sneaky in a way. I love hacks like this: you take the traditional "oh yeah, we have a Yum repo, it's got some files, and here are the RPMs," and turn it around and say, well, the file, the information you're downloading, is actually generated from the database right as you're downloading it. Effectively it's cached, but it's cached for speed. The RPMs you get aren't actually on the disk; they're just redirects, HTTP 301 or 302 redirects, effectively. When I heard about that, I was like, whoa! It's really cool. It really is the power of Peridot, the core of it.
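What Skip describes can be sketched as a toy request handler. All names, paths, and URLs below are invented; the sketch only shows the two behaviors he mentions: repo metadata synthesized on request from a database, and RPM downloads answered with a redirect to object storage instead of a disk read:

```python
# Hypothetical package database: RPM name -> pre-signed object-store URL.
PACKAGES = {
    "bash-5.1.8-4.x86_64.rpm": "https://s3.example/bucket/abc123?X-Amz-Signature=deadbeef",
}

def handle(path: str) -> tuple[int, str]:
    """Return (HTTP status, body or redirect location) for a repo request."""
    if path == "/repodata/primary.xml":
        # The metadata "file" is generated on the fly from the database;
        # it never exists on disk.
        body = "<metadata>" + "".join(
            f'<package href="{name}"/>' for name in sorted(PACKAGES)
        ) + "</metadata>"
        return 200, body
    name = path.lstrip("/")
    if name in PACKAGES:
        return 302, PACKAGES[name]   # redirect to object storage, no disk read
    return 404, ""

status, body = handle("/repodata/primary.xml")
assert status == 200 and "bash-5.1.8-4" in body

status, location = handle("/bash-5.1.8-4.x86_64.rpm")
assert status == 302 and location.startswith("https://")
```

Because the metadata is synthesized per request, the same package database can back any number of differently sliced repositories without duplicating a single RPM.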

Mustafa Gezen:

Zane, I have something else to add, if that's okay. We've looked at different systems and how multiple distributions are dealing with this, but Peridot is the only build system that also maintains the production structure. After every build, it sorts the catalog itself into the same structure that you would see in production. If you point at Peridot's endpoints, you'll get the updates. It'll be unstable, you can get broken packages, but it's the same structure, with all the module metadata already in there, even from the development endpoints. This is also something really new, really cool. Using that, we actually reposync off Peridot, keeping it in a closed loop, and signing everything is done on the Peridot side. Once it's exported out of Peridot, you can't really change anything. If you do, the signatures are broken, even the metadata signatures. We try to keep it in one place: all operations are done within one system, and after it's out of that system, you can't change it.

Zane Hamilton:

Very good.

Neil Hanlon:

Essentially, what we can do then is start attesting to what is in the operating system when we release it, and what is contained in all of those updates, providing a cryptographic signature on top of them in addition to the signatures we already have on the packages as well as the metadata.

Zane Hamilton:

Mustafa, we talk about Rocky 9 and Peridot a lot. What about Rocky 8 in Peridot? Is that going to happen?

Peridot and Rocky 8

Mustafa Gezen:

Yes, it's planned. We are still evaluating options for giving it the same systems and making it easier for us. There are plans; we are planning on migrating Rocky 8 to Peridot as well.

Zane Hamilton:

Very cool. Skip, I cut you off.

Skip Grube:

Oh no, you're fine. I was just going to expand on what Mustafa was mentioning: because of the way YumRepoFS works, when things are done in Peridot we can have ready-to-go repositories. If you're familiar with Rocky Linux, you know about BaseOS and AppStream, and you might even have heard of CRB, which is CodeReady Builder. These are all the repositories you access to get your packages in Rocky Linux, and those are ready to go in Peridot. Hypothetically, if you really wanted to, and I don't recommend it, you could point your Rocky Linux install at Peridot's BaseOS and get packages the instant they're created, for example. Again, there could be breakages; we don't know.

Neil Hanlon:

Don't do this.

Skip Grube:

Don't do it. I'm just saying, hypothetically, if you wanted to, they look exactly like the production repos.

Neil Hanlon:

I was going to say, along those same lines, it allows us to do what we're calling Rocky look-ahead, where we look ahead at what's coming down the pipe from Stream by watching all of the packages being built, and building them in a separate project. Along with what Mustafa has talked about, being able to serve artifacts from repositories and pull them down to one place, once we build a package, it means that whenever we need that package as an artifact for another build, to fulfill a build dependency, or to fix a bug with a regression in a build dependency, which happens fairly frequently when we need to build point releases, we have those packages around, already built. We don't have to go looking for them or try to rebuild them from source that might have been built six months ago. What this means at the end of the day is that we end up getting all of the packages we need to fulfill a point release, and hopefully in the future major releases as well, on dot-zeros, on the day that they're ready. We only have to build any packages that may have been updated, by listening to the feed there. That's something that's really exciting about the future of 9, and hopefully 8 in the future too.

Zane Hamilton:

That's great, thank you. Skip, Tron posted a comment in here: don't do this at home, or just don't do this in general? I want to know how many boxes you've actually done this with, or how many you have running this right now.

Skip Grube:

You break it, you buy it. Yes. No, I wouldn't. Don't do it unless you're willing to put up with the consequences. I do it; I'm a Rocky Linux release engineer, I do this kind of thing all the time, but I'm crazy. That comes with the territory.

Zane Hamilton:

You just like breaking stuff, that's what you're saying. Now, Raymond has a question specifically about Spack. I'm assuming you know Spack a little bit: can you elaborate on when Peridot is the better solution and when Spack should be preferred?

Spack And Peridot

Skip Grube:

I'm actually not sure I'm familiar with Spack.

Mustafa Gezen:

I don't know. I'll rely on Neil here.

Zane Hamilton:

From my understanding, Spack is more of an HPC packaging tool: being able to make sure that the right things for your HPC-specific software get packaged together, being able to do it for different architectures, and actually delivering that kind of thing. Like Yum for HPC.

Skip Grube:

I don't think the tools are really related. Like I said, the scope of Peridot is building RPMs, ideally in our case for Rocky Linux, but it could be any RPMs, and also importing RPM package code. There's really not a whole lot of overlap there. Spack is aimed at the HPC community, I guess. I'm not super familiar with Spack or HPC in general, so I can't speak too much to it, if that makes sense.

Zane Hamilton:

Yeah, from my very limited understanding, they are very different tools for different things. Greg may be yelling at me through the screen somewhere. Scott Lake had a question: is Docker CE replaced by Podman on Rocky 9?

Podman And Docker CE

Neil Hanlon:

Podman would be the default if you were to go and install a container runtime, but there's certainly nothing stopping you from installing Docker CE, as long as you don't have podman installed; there may be some ways that you can get them to install together. Podman is actually pretty feature complete now compared to the Docker CE engine as a container runtime. I think they have almost full support for the Docker Compose format as well, through podman-compose. I personally like podman a lot more than the Docker engine for the actual pod functionality, being able to run multiple containers inside a single namespace and connect them together that way. There are certainly times where I need the Docker runtime too; I do have both installed on my current system.
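To make the pod workflow Neil describes concrete, here is a hypothetical command sequence: a pod created first, then several containers joined to it so they share one network namespace. The pod and container names are invented, and the commands are printed rather than executed, since this assumes podman is installed:

```shell
# Hypothetical sketch of the podman "pod" workflow described above:
# multiple containers sharing one namespace. Printed rather than run,
# since it assumes podman is installed; names like "app" are invented.
POD_SKETCH='podman pod create --name app --publish 8080:80
podman run -d --pod app --name web docker.io/library/nginx
podman run -d --pod app --name cache docker.io/library/redis
podman pod ps'
printf '%s\n' "$POD_SKETCH"
```

Because the containers join the pod rather than getting their own network namespaces, the port publishing happens once, on the pod itself, which is the behavior Neil is contrasting with the plain Docker engine.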

Zane Hamilton:

We'll ask this question first: where is the Peridot source located? How do I contribute bugs, docs, and code?

Location Of Peridot

Skip Grube:

We're on GitHub, man. github.com/rocky-linux/peridot. Alright, did I get that right, Mustafa?

Mustafa Gezen:

I think it's Rocky-Linux.  

Skip Grube:

If we could put that up, we'll put up the right URL there.

Zane Hamilton:

They'll correct it, don't worry. Yeah. We'll fix it.

Skip Grube:

Yeah, and we have an issue tracker there. GitHub has excellent tools for that kind of thing. File bugs, merge requests, pull requests, whatever you want to call them. Check it out.

Zane Hamilton:

That's great. Skip, I think we had talked about doing a little bit of a demo, showing the UI.

Demo Of Peridot With Skip

Skip Grube:

Showing is better than telling. I'm going to go ahead and see if I can share my screen here. Share screen. Okay, hang on a second. We're going to do this here. Ba bam. Can everybody see that?

Zane Hamilton:

Hey, there we go. Yay.

Skip Grube:

Okay, cool. This is public, by the way. Rocky Linux is a community project; our build system's public, we don't care. We love people to see it, and we show it off: peridot.build.resf.org. Anybody can go here, unauthenticated, of course. We're just going to poke around here. You can see already, we have projects; Mustafa was talking about the projects. Hang on a second, let me move that out of the way. We have several here. You see the one that we've been talking about the most, Rocky Linux 9. We've also talked about SIGs, which are our special interest groups, and they're for building all kinds of things. For example, there's the Raspberry Pi; we make Rocky Linux for the Raspberry Pi.

This SIG, altarch, is where the packages related to that are built. I'm just going to poke into here. Rocky Linux 9 will definitely have the most packages and builds. I'm just going to poke around a little bit. Hang on, I see a comment there; it says please zoom in a little more. How about bigger? Bigger is better. Or just pull it up on your own machine. It's okay, it's cool. We'll go to tasks here, for example, and you can see us as we do things; you can follow along. It's great. We can see we had some trouble building some of our graphical Vulkan tools here that got sorted out. You can see when it happened, who did it, et cetera.

Here are lots of imports, and like I said, there are two types of tasks for the most part: imports and builds. We import code from upstream, in this case Red Hat Enterprise Linux 9, and we build it. You can check it out, check out all our logs here, and see under the hood. We're just using the mock program, if anybody's familiar with that, which is a wrapper around good old rpmbuild. You can see everything and investigate for yourself; I love investigating everything that happens here. Some of these logs are way up here. Oh, that's just a destroyed pod. We also mentioned these are our packages. We said that the repositories for Rocky 9 are created, ready to go in Peridot, and we weren't lying.
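Skip mentions that under the hood the builds run through mock, the standard chroot build tool that wraps rpmbuild. As a hypothetical illustration, a single-package rebuild looks something like the command below. The config name and SRPM filename are made-up examples, Peridot drives this automatically, and the command is printed rather than executed, since mock may not be installed:

```shell
# Hypothetical example of the kind of mock invocation that runs under
# the hood. Printed instead of run, since mock may not be installed
# here; the config name and SRPM filename are illustrative.
MOCK_SKETCH='mock -r rocky-9-x86_64 --rebuild example-1.0-1.el9.src.rpm'
printf '%s\n' "$MOCK_SKETCH"
```

The appeal of mock, and why the logs in the UI are so readable, is that each build happens in a clean chroot defined by that `-r` config, so the inputs to every build are reproducible and visible.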

You might recognize some of these names in here, and you can actually see what we have. Let me click on BaseOS here and we'll show you; there are filters and such that show you, hey, all of these packages have an include list, an exclude filter, and then a list of packages. All of these are what's in BaseOS in Rocky 9. We build for five different architectures. It's got everything in here, and like I said, you can explore on your own. I love exploring; that's my thing. I'm going to show you a different window here real quick.

Neil Hanlon:

While you're doing that, I'll just comment that all of the data that goes into those repositories, how we feed those filter lists, and everything else is all managed GitOps-style. We were talking about the patches that we apply; there's full transparency and visibility into what we're doing and how the packages end up where they end up.
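To make the mechanics concrete, a repository definition of the sort Skip clicked through, kept in git as Neil describes, might look roughly like the hypothetical YAML below. This is an invented sketch, not Peridot's actual schema; the package names, architecture list, and field names are examples only:

```yaml
# Hypothetical sketch only; not Peridot's real configuration format.
repository: BaseOS
architectures: [x86_64, aarch64, ppc64le, s390x]  # example arch list
include_filter:       # packages explicitly pulled into this repo
  - bash
  - glibc
exclude_filter:       # artifacts kept out (e.g. debug packages)
  - glibc-debuginfo
packages:             # resulting package list served from the repo
  - bash
  - glibc
```

Keeping definitions like this in git means every change to what lands in a repository has an author, a review, and a history, which is the transparency Neil is pointing at.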

Skip Grube:

This is another window, and this is me authenticated, and I just want to show that we have a few extra options here that you won't see if you're just browsing this from the internet.

Neil Hanlon:

Sing it again, Skip.

Skip Grube:

Thank you, thank you. All right, and that's by design. What we're going to look at here is what we mean when we say we want to import and we want to keep things extra clean, extra automated, and traceable, I guess is the word I'm looking for. We basically import from git.centos.org, which is where the Red Hat sources are located. All of the sources are on c9 branches. If you look on there, they're all c9, but it's not really CentOS 9, it's RHEL 9; it's just named that way for historical reasons. We literally bring them into our git.rockylinux.org and we convert them to r9, for Rocky 9, of course.
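A toy, self-contained sketch of the branch conversion Skip describes: take sources on an upstream-style c9 branch and republish them as r9. Everything below is local and hypothetical (the package name is invented); in reality Peridot's importer automates this flow against git.centos.org and git.rockylinux.org:

```shell
# Toy local stand-in for the c9 -> r9 import step described above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git checkout -q -b c9                    # pretend this is the upstream c9 branch
git config user.email demo@example.com
git config user.name demo
echo 'Name: example-pkg' > example.spec  # stand-in for imported package sources
git add example.spec
git commit -q -m 'import example-pkg from upstream'
git branch -m c9 r9                      # the c9 -> r9 conversion
git branch --list                        # now shows only r9
```

The `git branch -m` line is the essence of the naming step: identical content, relabeled from the upstream convention to Rocky's.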

Like I said, you can check out what we're doing here. If you come here and you see, for example, let's look back at the failed builds here, you see a lot of this happening, you can be nice to us on chat. If you're in our chat, we're probably having a bad day, so just be cool if we're a little short with you, because we've got packages failing and we have to build them. This system is how we're able to do something that we're very proud of: when updates, particularly security updates, to all these packages come in, we're able to detect them from the upstream Red Hat sources, import them into our system, and build them, usually in less than 24 hours. It's very automated and very, I don't know, fluid in the way that we're able to tackle this stuff. Oh, whoa, whoa, mirrors.

Zane Hamilton:

Okay, that's awesome.

Skip Grube:

Okay, this is some Spaceballs stuff here. Hang on a second, let me stop sharing. Oh, there we go. Okay, thanks guys. Yeah, okay, cut me off here.

Zane Hamilton:

Skip, if one of those turns red and it fails, does somebody get notified, or is it just people watching and paying attention?

Skip Grube:

Usually builds are, and this is something Mustafa and Neil can definitely chime in on more, usually builds are triggered by people, whether en masse (sometimes we'll do 20 or 50 at a time) or individually, when we only have, say, three updates to do. We build them and, generally speaking, we just watch. It's very simple: red or green. Fortunately we're not colorblind, and it's also an X or a check, so you don't even need the colors. It's a little manual right now, but I think Neil and Mustafa can speak to this: we're looking at a new enhancement where we want to automatically open bug reports whenever something fails. Mustafa, do you want to say more about that?

Build Batches And Partial Support

Mustafa Gezen:

Yeah, that was the plugin part we were talking about, watching queues. We'll probably only report automated imports and builds failing, since if you trigger a build manually, you're probably watching it. We also have a cool feature called build batches, so we can look over a batch and see what in the batch failed; you don't have to scroll through to find what failed, you get that information already. Also, I just want to mention we do have partial support for other distributions. Even though this is RPM based, it doesn't have to be Red Hat based; we have partial support for SUSE builds as well. That's also cool: you can configure it per project, so you don't need to set up a new build system for a specific type of RPM distro. You can just create a new project, whether you want to manage that in our instance or in your own Peridot instance.

Zane Hamilton:

Very interesting.

Neil Hanlon:

Some of those things about building packages, not only the automated side but submitting them in batches, et cetera, will feed into building our own sort of fedpkg-style package command. It will help us as release engineers, and community members in special interest groups, build their packages, check on their packages, and interact with them from the command line, instead of clicking around a GUI if they'd prefer not to do it that way.

Zane Hamilton:

Very good. We're actually getting close on time, so I'm going to give each one of you the last word. What would you like everybody to leave knowing about Peridot? I'm going to start with Skip.

Concluding Thoughts From Skip

Skip Grube:

Oh no. Yeah, just that it's really cool, what we use it for behind the scenes in Rocky Linux. Too many people don't know anything about what happens behind the scenes, and it's super important. People ought to understand more about how packages are imported, where they come from, and how we're doing that in a secure, automated way so that we get the correct sources, so that, basically, your Rocky Linux distro is good. It's exactly as we meant to build it.

Concluding Thoughts From Neil

Neil Hanlon:

Yeah. Building on Skip: the systems we've put in place, and the manner in which we've implemented the features in Peridot, we designed it with a purpose in mind. We, and I say we lightly, Mustafa was very opinionated about how some of these features were built for Peridot, because we wanted them done in a way that allowed us to provide traceability and accountability for what's happening in the distribution. Days after Rocky Linux was formed as a project back in December 2020, there was the major SolarWinds hack, where they found code running on millions of people's systems around the world that had gotten there through a supply chain attack. That's something which has been on all of our minds since almost literally day one. Peridot and the infrastructure we build around it is our answer to the question: how do I know, when I download Rocky Linux, that it's really Rocky Linux and no one has put in something that shouldn't be there?

Zane Hamilton:

That's fantastic. Thanks for calling that out, Neil. Mustafa, I'm going to give you the last word on this one.

Concluding Thoughts From Mustafa

Mustafa Gezen:

Since I'm mostly on the developer side, I want to make it easier for people to start developing on it, and we are actually working on a lot of improvements: a one-command cluster so you can actually test locally, test your changes to Peridot locally, and PR back to the RESF and test it there. It should be easier. We want more people to join us and help us build it out. We want more documentation and easier deployments, and we're just working towards making it easier to manage, install, and work on.

Zane Hamilton:

That's great. Thank you very much, guys; thanks for joining me today, I appreciate it. Thanks for giving us a deep dive into Peridot, and we're looking forward to what comes out at the end. Looking forward to having you back on at some point, Mustafa. Really appreciate you all joining. Like and subscribe, and we will see you next week.

Mustafa Gezen:

Thanks everyone.