CIQ

Radioss (Altair open source project)

October 13, 2022

This week we will teach you how to create an OpenRadioss Apptainer container image from a daily released repository and run the 2012 Toyota Camry Detailed Finite Element Model impact simulation in LS-DYNA® format using the OpenRadioss Apptainer container.

Step-by-step guide: https://ciq.com/blog/running-camry-impact-model-in-ls-dyna-format-using-openradioss-and-apptainer/

Webinar Synopsis:

Speakers:

  • Zane Hamilton, Vice President- Sales Engineering, CIQ

  • Eric Lequiniou, Vice President of RADIOSS Development and Altair Solver HPC, Altair

  • Dave Godlove, Solutions Architect, CIQ

  • Brock Taylor, VP of HPC & Strategic Partners, CIQ

  • Johnathan Anderson, HPC System Engineer Sr., CIQ

  • Forrest Burt, High Performance Computing Systems Engineer, CIQ


Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.

Full Webinar Transcript:

Zane Hamilton:

Good morning, good afternoon, and good evening wherever you are. Welcome back to another CIQ webinar. My name is Zane Hamilton and I'm the Vice President of Sales Engineering at CIQ. We at CIQ are focused on powering the next generation of software infrastructure, leveraging the capabilities of Cloud, Hyperscale and HPC. From research to the enterprise, our customers rely on us for the ultimate Rocky Linux, Warewulf, and Apptainer support escalation. We provide deep development capabilities and solutions, all delivered in the collaborative spirit of Open Source. Today, we've got a really exciting webinar for you in our learning series around containerization, and we're going to present a case study running a new application. If we could bring everybody in, that would be great. Welcome in everyone. So most of these faces should be familiar, but we have one that's not. So, Eric, would you introduce yourself and tell everybody who you are?

Eric Lequiniou:

Hello. Thanks, Zane. So, yes, I am Eric Lequiniou. I am responsible for Radioss development. In fact, I started on Radioss 28 years ago. I started by developing the parallel version of the code, which is my specialty: high performance computing applied to mechanical engineering. I am really pleased to be here to talk about a new project that we started one month ago, which is Open Radioss.

Zane Hamilton:

Thank you. I'll let everybody else introduce themselves and we'll dive right into it. Dave, you're next to me, so I'm going to let you go next real quick.

Dave Godlove:

Hey, everybody, I'm Dave Godlove. I used to be a neuroscientist, a primary researcher working at the National Institutes of Health. Since then I became interested in high performance computing, and through that conduit became interested in and started working on containers and the Singularity project, which ultimately became Apptainer. And so I'm glad to be talking about this newly containerized application today. This is really cool.

Zane Hamilton:

Thank you. Welcome back, Dave. Brock.

Brock Taylor:

Hi, I'm Brock, HPC here at CIQ, longtime colleague of Eric. I'm thrilled to have him on today. He is a bona fide rockstar in the simulation world and has been driving a lot of different things through the years. We got involved in a project really quickly with them in this latest adventure. I don't want to say too much and steal too much thunder. It's a pleasure to be here. And again, Eric, thank you for joining us.

Eric Lequiniou:

Thanks.

Zane Hamilton:

Thanks, Brock. Jonathan, I feel like it's been a while.

Johnathan Anderson:

Happy to be here. My background is in HPC systems administration, and with CIQ I'm on the Solutions Architect team. We do our best to put together cool solutions like this for customers and demo purposes.

Zane Hamilton:

Thank you. Forrest.

Forrest Burt:

Good morning, everyone. My name is Forrest Burt. I'm an HPC systems engineer here at CIQ. My background is in the academic HPC space. I was an assistant administrator on some high performance computing architecture as a member of the research computing team at a major American university for a while. Now I'm at CIQ where, alongside Jonathan, Dave, and our solutions architect team, we work on building out cool stuff with Open Source and HPC. Thanks for having me.

Open Radioss [3:59]

Zane Hamilton:

Thanks, Forrest. All right, Eric, let's dive right into it. Tell us, what is Open Radioss?

Eric Lequiniou:

Yeah, that's a good question. So Radioss has been a standard for crash and safety for more than 35 years. It is used by car OEMs to build five-star safe cars. It's used also in aerospace, in consumer goods, electronics, defense, in many applications: every time there are highly nonlinear dynamic events where only an explicit solution can converge. That's really the aim of a code like Radioss. Besides all the capabilities developed during those 35 years, we have seen that more and more new innovations in the code are coming from partnerships between academic researchers and industry partners. This is why we think that with the rapid transformation in the transportation industries, thinking about more sustainable energy and autonomous vehicles, we really need to innovate faster, to increase the pace of innovation based on collaboration with the best scientists and specialists around the world, to address the challenges that industry customers face.

We think that having an open source version of Radioss is a fantastic enabler to do that. So with Open Radioss, we offer the community access to 35 years of great libraries of elements, and then you can really focus on new research. This is the aim of Open Radioss: to provide this fantastic platform to the community, and then to let researchers do their research. The important thing is that if we have that research in Open Radioss, the knowledge of the development team is how to bring that research and proof of concept to an industrial feature. The plus of the team is to provide this platform for the research, but at the same time to maintain a high level of industrial software like we did in the past, but now we can do it faster with the whole community. This is why we want to maintain two versions: Open Radioss, now available to the world, the Open Source version of Radioss, and Radioss, the commercial version that comes with, let's say, additional maintenance and robustness, with versioning, with things that we don't want to compromise. So in a few words, this is Open Radioss.

Open Radioss Uses [7:21]

Zane Hamilton:

That's fantastic, thank you. One of the questions I have: obviously people are simulating automotive crashes, but what else are people doing with this? I'll open that up to anybody.

Brock Taylor:

The OpenRadioss.org site actually has a lot of models on it. And there's one I keep advocating we go do, and that's the model of the soccer ball hitting the goal post or hitting the crossbar, and then modeling whether the deflection bounces into the goal or out of the goal. And I forget which match this model was particularly talking about, but I think England winning the World Cup may have been a similar situation, and I think they even saved the patch of grass where the ball landed inside the goal. There's a cell phone drop model. What I love about the selection of models, Eric, that you or Altair picked when you stood this up, is that it's not just those car crash models that are really well known in the HPC space; it's the element that products in almost any space can benefit from simulation.

If anybody on the call has not dropped their cell phone once, I challenge you to raise your hand, because I know I've dropped mine multiple times. Anything that actually ships in a box gets dropped; what about the packaging inside the box, the design itself? You're talking about cases of simulation that are quickly spreading to everyday life, not just the design of something of that high-end HPC nature.

Eric Lequiniou:

At the beginning it was in defense that such codes were emerging; then the automotive industry adopted the technology. With the progress in terms of hardware: in the past the necessity was a big computer, a Cray vector machine. Now you can do a simulation with your cell phone. I mean, maybe not today, because we don't run on such an OS, but it would be as powerful as an old Cray. Many domains can use Radioss, every time you have a highly dynamic event. A cell phone drop test is something very difficult to simulate because you have so many small pieces with some intersections. It's not always a very clean mesh, because those guys need to advance research very fast and to do the crash very, very fast. So the code has also improved from year to year to be able to handle such cases, which are also very complex, even if they're, let's say, maybe less impressive than a car crash.

Zane Hamilton:

Thank you. Alright, Dave, since you're the container guy around here, I'm going to ask you why is it important to containerize this?

The Importance of Containerizing Radioss [11:06]

Dave Godlove:

Well, I think Eric just provided a really great reason. It's really great that we have this opportunity to take the software, now that it has been open sourced, and to be able to leverage all the benefits of containerization that have been applied in a lot of other domains. Not only helping out people who are using this software, but introducing this whole new community to the benefits of containerization. So Eric was just talking about how with this code it used to be that you would run it on some huge dedicated Cray system long ago. But now, you might run it on your laptop, you might run it on a desktop system, you might run it on the cloud. In the future, maybe you might run it on a cell phone or who knows what.

If you've got this thing containerized and you can easily port it from one place to another, that's one of the great benefits that you get from containerization is just being able to duplicate and run the same code in multiple places. The demo that I'm going to be running through today that Yoshiaki Senda has put together is a demo I actually ran on my own laptop. Since it's containerized, I could take that same demo and put it up on the cloud or run it on an HPC system or whatever, and instantly get the benefits of more computational horsepower. It's also great too because Open Radioss is not terribly complicated as far as dependencies, but it's not released as a pre-compiled binary or anything either.

You do have to do a little work to build it and to put it together. Down the road this might be something that can be put up on a registry so that people, instead of installing it on their computer or even building a container, can just grab the pre-built container and run it as is. And that's another great benefit that is used quite a bit.

Zane Hamilton:

My next question is what is the right container platform to do this then? We talk about Apptainer a lot and how easy it is to make it portable, and to your point, why is Apptainer the right choice for this?

Why Apptainer Is the Best Platform Choice [13:37]

Dave Godlove:

Does anybody else want to jump in and talk a little bit more on containerization in general, why it's the right thing to do here? Is there anything I missed?

Brock Taylor:

In a commercial world, what the container helps do is maintain those versions that you're using. So if you need to go back and repeat results, you've got a container that's built on a specific tag of the source tree. You're going to update over time, presumably, but you can retain those containers. So you can always go back and use older containers to verify previous results. And depending on the case, that could be very, very valuable in a design process. If you're a researcher, obviously being able to distribute the container that you're using so other people can verify your results is also highly beneficial. And again, you need that reproducibility and Apptainer helps satisfy some of those needs.

The environment we're in is one where the pace of new silicon and technology is coming fast and furious. And when I first heard that Radioss was going open source, I thought, this is a really gutsy move, but it quickly went from this is not gutsy, this is brilliant. Because you're talking about multiple CPU architectures now. Each vendor has special things that they're introducing inside of CPUs. You've got stacked cache, you've got high bandwidth memory, you've got all these different aspects that developers are having to think about. Then you've got accelerators, and all three of the major silicon providers have their own accelerators with their own special software stacks that you can go through. So again, it's just permutations of all these things that developers have to face. And Eric, it probably keeps you up at night sometimes, right?

Putting this out in the world for people to contribute gives a way to come in and actually increase the speed at which you can respond to the advancements that are coming fast and furious in silicon. And who knows what custom accelerators are around the corner; EDA is a booming industry. So as a developer you've gotta handle all these different things that are coming and pick and choose where you go. By having a community, the professional Radioss clientele is going to benefit greatly from the rapid advancements that can come through the Open Source community, but still have that professional service around it. So I think containers, to wrap it back, really help distribute those different versions, and it's just going to help increase the reach of people being able to get into simulation with Radioss and actually move to varying levels of how they can use it. And again, it's highly complex to do this kind of work, but the future holds as much as you can put into it. And we're helping ease the technology integration for those users.

Eric Lequiniou:

This contribution is a fantastic example for two aspects. First, because it illustrates perfectly one thing that we want to do with Open Radioss, which is to facilitate access and to make the software accessible. And I think containerization is a great example of making the software more accessible to a wider community of people. We are still a little bit old school; we develop with scripts, with things like that, and we think that everyone can do that very fast and very easily, but it's not so easy. It's not old school when you just launch your container and everything has already been set up and you can redo it, let's say, instantly. The second thing is also how quickly you did it, because I think this is also the power of this community: everyone jumps on it and puts innovation like that into many different domains. I thought, wow, they did it in about one weekend. I really want to say big thanks to you; it's incredible to see that.

Brock Taylor:

I'm going to echo that. Yoshi-san, he did it faster. It took us longer to do the writeup of what he did than it took him to actually get the container up and running and show it running. And of course we did it on Rocky Linux as well. It was really fast. It just speaks to the brilliance of making this move. I think it's going to pay huge dividends for everybody because it's going to bring a lot of people in.

Why Apptainer? [19:01]

Zane Hamilton:

That's fantastic. Thank you guys for that. I'm going to go back to my other question of why Apptainer?

Johnathan Anderson:

Apptainer makes a lot of sense for this kind of application because it's a batch processing application. It makes sense with Apptainer's execution model versus other container systems. There are other batch processing container execution systems, but Apptainer's the most well publicized and distributed. And then there are also performance benefits from SIF as well, especially in shared file system environments. These are things we've gone over before. But it also just makes it really easy. And that's true of most container systems; they're pretty easy. But especially with recent developments in Apptainer doing unprivileged builds, being able to turn those over as a regular user, anyone on most modern HPC environments could do the same work, with the same full-stack customizability of your deployment environment, the OS and libraries that you need to bring in for an application like this. It just makes it really easy to get up and running in any environment.

Zane Hamilton:

That's great. Thank you.

Eric Lequiniou:

I remember when I saw Singularity, the predecessor of Apptainer, and I remember seeing the benefit for a large MPI application running in a container. Radioss is compute intensive, so I think that's a great choice to start with for containerization.

Dave Godlove:

There's a couple more things too. So Apptainer is just the perfect fit for this, right? It's not like a service that's supposed to keep running; it's an HPC computational process. You get benefit from running the container itself as an executable file, which I'll show a little bit in the demo, because of the way that it integrates with the container run system. It also interfaces, as Jonathan and Eric mentioned, with MPI. Apptainer, pretty much through Singularity, its predecessor, was famously the first container solution that was MPI-aware, and it integrated seamlessly with MPI. Some of the great work that Jonathan's done recently allows you to run containers with MPI completely containerized now through Apptainer.

There's also a visualization component in which GPUs might become important if you're using, like, ParaView to visualize these simulations. And so now you've got the seamless integration that Apptainer has with GPUs. It's bread and butter, exactly what Apptainer was made for. It's the perfect solution for it.
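That GPU integration he mentions comes down to a single runtime flag in Apptainer. As a sketch (the container file name here is hypothetical; --nv is the real Apptainer flag for NVIDIA GPUs):

```
# --nv binds the host's NVIDIA driver stack into the container at runtime
apptainer exec --nv visualization.sif paraview
```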

Johnathan Anderson:

I'm curious, Dave, real quick, the visualization part of it, is that something that you can ever visualize in real time, or do you run your simulation and then do the visualization after the fact in all cases?

Eric Lequiniou:

I can speak about that. In fact, when we run Radioss in parallel, it is generating an animation file. So it's possible to load your animation file at the same time that you run the simulation. You do not need to have the simulation finished. You can load the animation, but it's not real time; you can, let's say, post-process the animation as it runs. Sometimes people do that because if you see that, for example, your simulation is not going the way that you want, you can stop it and change things, parameters. Having some interaction between the simulation and the animation is good. One aspect which is important with Open Radioss is to be open to many applications. So while the best post-processing tool to run with Radioss is an Altair project, a commercial project, we now also offer an interface to ParaView and VTK. We have a converter, so you can convert your animation files at the same time they are produced by the solver, to visualize them.

From a cultural standpoint, the Singularity community, the container community, CIQ, all these organizations exist really to support people first. They are very people-oriented, and we develop software, but we develop it with people in mind. And so because of that, I'm super excited about this type of software being something that we can really support, and about partnering with Altair. We're realizing that goal of helping to make the world a safer place.

Johnathan Anderson:

Speaking directly to the value of Open Radioss as an open project. Brock put a note in our internal chat that said, "Hey, it would be cool if we could containerize this." Immediately everyone agreed and there were multiple people working on it at the same time. Yoshi came out with a successful deployment really quick off of that. But immediately that was turning back around to trying to contribute to the project and, hey, these are things that we've learned, these are things that we think would improve the build system or the build instructions, things like that. And it's really great to be able to collaborate so immediately and so quickly just days after the project was started. It's been cool.

Eric Lequiniou:

Fantastic.

Brock Taylor:

What we did first was just new compilers and things of that nature. But if you want to build optimized versions for whatever platform you're running on, that takes a lot of different elements of using the right compilers, the right options, and lots of effort. And a community is able to get the different people involved to put those branches in place. And again, something like Apptainer makes it easy to capture exactly that one thing for what you need and distribute, use, and curate it. The real value here is that what you're going to see is more professional Radioss users moving that direction because they can get in and use open Radioss to figure out how to fine tune how they use it.

At the end of the day, everybody is focused on designing and using the software as a tool. You're going to see the revenue base for Radioss increase. That's why I've gone from thinking this is a gutsy move to this is a brilliant move. We've all said it at some point, simulation, the more it's used, the better and the more innovative things are going to come out. We can't think of what's going to come out if we get this in the hands of more creative type people who have this type of access to this type of tool.

Eric Lequiniou:

And we really want to democratize Radioss and the usage of such a code for a wider community. This is what we see with the open source initiative; with other initiatives we have seen crazy things done with Radioss that are really wonderful to see.

Containerizing Open Radioss [27:47]

Zane Hamilton:

Thank you, Eric. How did this actually get done? How did Yoshi go about doing this and containerizing Open Radioss?

Dave Godlove:

Let's dive into this a little bit. This is a blog post that was written by Yoshiaki Senda and a few other people, but he did the technical work on it. Yoshi's name has come up a few times now. I just want to make sure that we give him a shout out and say that this is really his work that I've got the pleasure of going over and talking about today. He is in Japan, so it's really, really late for him. So that's why I'm going to be presenting this instead of him. But yeah, he deserves all the credit for this. This is really great work that he did really quickly. So we'll post a link to this blog post. I picked this up a couple of days ago and went through it in detail, and I was able to run this with very little effort on my laptop and get the simulation running overnight.

Installing Apptainer [29:11]

So I'll show you how I did that, following along with this post here. Of course, you've gotta install Apptainer if you want to build and run this container. There's some information there for how to do it if you are unsure. And then Yoshi has posted this definition file, but there are going to be some departures in this demo from what's actually put here. We're going to update some of the things in this blog post. Since this blog post was created, Yoshi actually created a pull request upstream to the Open Radioss GitHub repo, which has been accepted. So you can get the definition file straight from the GitHub repo now; you don't have to copy and paste it from here.
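For reference, on a Rocky Linux host like the one used in this demo, installing Apptainer can be as simple as pulling it from EPEL (other install methods are covered in the Apptainer docs):

```
# Enable EPEL, then install Apptainer (Rocky Linux 8/9)
sudo dnf install -y epel-release
sudo dnf install -y apptainer
apptainer --version   # confirm the install worked
```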

Upstream Repo [30:05]

It's a little bit improved over what's been posted here. So let me just go ahead and jump over to my terminal where I'm most comfortable. I am in a directory called Data Containers Open Radioss. I have some data here, which I will look at later, but for right now, we don't have to worry about it. Following along with that blog post, the first thing that you have to do is just go ahead and grab the upstream repo like so. And once you do that, you can CD into the container sub directory, and you'll see the definition file right there. And so you can just go ahead and use this to build your container, using Apptainer right from the get-go.
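Putting those steps together, the start of the workflow looks roughly like this. The repo URL is the real OpenRadioss GitHub project, but the exact subdirectory and definition file names here are assumptions, so check what the clone actually contains:

```
git clone https://github.com/OpenRadioss/OpenRadioss.git
cd OpenRadioss/Apptainer     # the container subdirectory; exact name may differ
ls *.def                     # the definition file lives here
apptainer build openradioss.sif openradioss.def
```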

I'm going to show you a few little tweaks that I would make in the definition file, but you can build this just the way it is as well. Either way works. The definition file is great as is. There's a make command here, which has this -j option, and it's empty right now. The intention is that you would go ahead and put the appropriate number there, depending on how many cores you actually want this to compile on.
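As a sketch, that tweak inside the definition file's %post section might look like:

```
# Fill in a core count for the parallel build; "$(nproc)" uses all available cores
make -j 8          # or: make -j "$(nproc)"
```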

Definition File [31:35]

Maybe I should go through the definition file a little bit. So starting here, we're going to pull a container from Docker Hub as a base, and the container that we're going to pull is Rocky version eight. So this is going to run on a Rocky 8 base. If you're new to Apptainer, the definition file consists of a header and some little scriptlets, which all begin with this percent sign and then have a keyword.
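The shape of the header and scriptlets he's describing is the standard Apptainer definition file layout:

```
Bootstrap: docker
From: rockylinux:8

%post
    # build-time commands run here, on top of the Rocky 8 base image

%environment
    # environment variables exported at container runtime go here
```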

The %post scriptlet happens after you have the base container set up from the header, so, post setting up the base container. Here what we're doing is using DNF to install development tools and then to install basically a bunch of compilers and a bunch of dependencies, just standard things like wget and git and patch and stuff that you need in order to make these things work. Then we're going to cd into the /tmp directory. It's notable, something that you should note when you're building containers with Apptainer, that the /tmp directory is shared between the container during build time and your host system. So if you cd there, you're cd'ing into a shared directory. We're going to download some MPI, untar the MPI, cd into it, and then configure, compile, and install it.
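A minimal sketch of what that part of %post could look like; the package list and the Open MPI version are assumptions, not a copy of Yoshi's file:

```
%post
    dnf -y groupinstall "Development Tools"
    dnf -y install gcc-gfortran cmake perl wget git git-lfs patch
    cd /tmp                   # shared with the host at build time, as noted above
    wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.4.tar.gz
    tar xzf openmpi-4.1.4.tar.gz
    cd openmpi-4.1.4
    ./configure --prefix=/opt/openmpi
    make -j "$(nproc)" && make install
```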

Cloning the Repo [33:04]

And then we're going to cd into /opt. We're going to go ahead and enable some Git Large File Storage goodness from GitHub so that we can download some big stuff from GitHub. I've already cloned the repo, and I'm actually inside the directory which was created when I cloned the repo. And now we're going to clone it again, but this time it's actually going to be inside the container, and this is going to facilitate us being able to build this package. Now, here's one of the places where I would make just a few little tweaks, and I'm going to tell you why. You don't have to. So what we're doing here is building from the main branch, and a lot of times, main is the development branch.

And so what you're going to get here is the latest, greatest, bleeding-edge version of Open Radioss, which might be what you want. But for reproducibility purposes, or just because it makes it easier to go back and see what you've done and maybe alter it, I would suggest copying that line, commenting this one out, and removing the --depth=1. What that's going to do is allow us to get multiple commits when we download it, not just the last commit, but multiple commits. And then what I'm going to do, oops, yes, let me save that.

Let me get back out to the top, and I'm going to use git log to see what the most recent commits are in this GitHub repo. So I see that this is the commit that I'm actually on right now. I'm going to go ahead and copy some of the beginning of that hash, and then I'm going to go back into that definition file. After I cd into the git directory, I'm going to do a git checkout and then that hash. And what that's going to do is pin this definition file at the current version of Open Radioss. So if I have some kind of a bug or something like that, or if I want to reproduce this build later on, if I run it a second time, it's not going to be a different build; it's going to be exactly the same build. But that's just a suggestion. You can also build from main if you want the latest and greatest, or if you want to just, like, update your build every time.
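A sketch of that pinning tweak inside %post (the hash here is hypothetical; copy the real one from `git log` in your host clone):

```
    # Original line (bleeding edge, shallow clone of main):
    # git clone --depth=1 https://github.com/OpenRadioss/OpenRadioss.git
    # Pinned variant for reproducible builds:
    git clone https://github.com/OpenRadioss/OpenRadioss.git
    cd OpenRadioss
    git checkout abc1234    # hypothetical commit hash
```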

So now we're cloning the repo. We're going to cd into it and pin it at a particular place, and then we're going to use the scripts which come with Open Radioss to go ahead and build it inside the container. We're going to cd into several different places and do that. And then there are several environment variables which need to be set up. The %environment section is going to set those up for us and make sure they're always there every time. And that's pretty much it. This is less than 50 lines long. It's not an overly complicated definition file. I will say there's a lot packed into that little definition file. There's a lot of compiling; there's a lot of computational work that has to be done based on that little definition file.
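As a sketch, that %environment section might look something like this; the variable names follow OpenRadioss conventions, but the exact paths here are assumptions:

```
%environment
    export OPENRADIOSS_PATH=/opt/OpenRadioss
    export RAD_CFG_PATH=$OPENRADIOSS_PATH/hm_cfg_files
    export PATH=$OPENRADIOSS_PATH/exec:/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
```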

Temp [36:39]

There's one more little gotcha that I wanted to cover here as well. And that little tiny gotcha is that you noticed we cd'd into /tmp and then we started to compile things. On many systems, /tmp is mounted in such a way that you're not allowed to run executables from it. It's just a safety consideration. /tmp also, depending on what your environment is, might not be that big. And when you're creating containers, you need several times the size of the ultimate container to be able to create it, because you create copies of it and you move it from one permutation to another. So because of that, you might run into trouble if you just run it as is. I would suggest you set this environment variable, APPTAINER_TMPDIR, and point it somewhere else; /var/tmp would be a good place to put that.

After you do that, what that's going to allow you to do is run a build with the --fakeroot option, so you don't have to be real root, because you don't need to be able to compile or run executables or do whatever in /tmp anymore. I would just go ahead and do that, and that way you're also not going to overflow /tmp; you're not going to fill it up or whatever. So we're going to go ahead and do this. I'm not going to go through this whole build. Also, here's a safety thing that you need to remember to do: make sure your bind path is not set, because in recent versions of Apptainer, you can actually bind directories from the host system into the container at build time, which can be pretty dangerous, actually. So let me try that again. This is going to take a while, because my system is not very powerful and there's a lot of computational work to do here. So I'm just going to control-C out of it, and I'm not going to go to completion. But I made these changes earlier today, and then I double-checked that I could actually build the entire container from start to finish just to make sure that I didn't introduce any bugs, and it seemed like everything was okay.
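Those precautions together, as a sketch (APPTAINER_TMPDIR, APPTAINER_BINDPATH, and --fakeroot are real Apptainer knobs; the file names are the ones assumed above):

```
export APPTAINER_TMPDIR=/var/tmp    # roomier, exec-friendly build workspace
unset APPTAINER_BINDPATH            # don't bind host directories into the build
apptainer build --fakeroot openradioss.sif openradioss.def
```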

That's a quick and easy way to get the definition file instead of copying and pasting it from here. You still can copy and paste it from here, this one works as well. And then you can just build your container and get moving. Do we have any questions or comments on that?

Brock Taylor:

To echo what Eric said earlier, this code is something that actually used to target Cray supercomputers and things of the like, so the fact that it's actually running on a laptop is quite remarkable through the years. But again, it's the number-crunching part that can take a while. We have to do that time-elapsed thing or we'll just be sitting, staring at the screen.

Dave Godlove:

Staring at a lot of Fortran building.

Copying the SIF File [40:03]

Which is cool. It's cool to see my laptop building Fortran. So one of the cool things that Yoshi suggests that we do here is copy the actual resulting SIF file to a place on your path, so you can run it as a binary. I've already done that.

I just called it Open Radioss, and I copied that into my home bin directory, which happens to be on my path too. And so now I can just call it and run it as an executable, which is pretty helpful. You could run this without MPI, but if you want to run it with any kind of speed at all it's important to have MPI installed on your system. So I went ahead and did that as well.
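As a sketch, that step is just this (file and directory names assumed from the demo):

```
cp openradioss.sif ~/bin/openradioss   # ~/bin is already on the PATH here
openradioss                            # running the SIF invokes its %runscript
```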

I didn't actually talk about the introduction, what this is and what we're doing. I just dove right in. So this blog post is really cool because it's taking a real car, which has really been modeled in what looks like a lot of detail. And it's going through a car crash simulation. This is a 2012 Toyota Camry that's crashing. And then afterwards, you have a ton of data about what parts of the car were stressed during the impact, and what forces the front bumper experienced versus the cabin and stuff like this. It's really, really cool to be able to run this simulation overnight on my local machine. So here are all the data that you have to download, which includes the model of the car, and all the bits and pieces that Open Radioss needs in order to do the simulation. That was that little directory, by the way, that I told you I'd talk about later. I've already cheated and downloaded that ahead of time. There's this module load command that assumes that you've got some stuff installed in your system. You can do the same thing.

Commands [42:37]

So I just added this directory to my path, and that seems to be sufficient at least for what we're doing here today. So that takes the place of that module load command because I don't have modules installed on my laptop. And now we're to the actual show, the actual cool stuff to do. So there's a couple of commands here. The first one sort of sets up the problem and Eric, jump in anytime because I'm not going to be probably explaining this properly, but I believe that the first one sets up the problem according to the number of processes that you want.
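For example, something along these lines stands in for the module load step (the MPI location is a host-specific assumption; a dnf-installed Open MPI on Rocky typically lives under /usr/lib64/openmpi):

```
# Instead of `module load mpi` on a system with environment modules:
export PATH=/usr/lib64/openmpi/bin:$PATH
```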

Eric Lequiniou:

Yes, we specify the number of MPI processes that we will run. In fact, the starter is, let's say, a pre-processing tool that will read the data and then do some, let's say, initialization checking and prepare what we call the domain decomposition. So we run the domain decomposition, and we will build from the starter the so-called restart files. And then in the simulation, the engine will start directly with its local memory on eight MPI processes.
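As a hedged sketch, that starter step might look something like this, assuming the container's %runscript passes its arguments through to the named OpenRadioss binary, and with a hypothetical file name for the Camry input deck:

```
# Starter: reads the deck, checks it, and does the domain decomposition for 8 ranks
openradioss starter_linux64_gf -i 2012-toyota-camry.key -np 8
```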

Dave Godlove:

I was going to start it while you were talking, because I thought that that might kind of illustrate what you were talking about.

Eric Lequiniou:

I can add one or two examples, because this tutorial from Yoshi is really wonderful: it shows also that you can directly download this model. I think it's downloaded from the CCSA website. And it downloads not a Radioss model, but an LS-DYNA input deck, so another, let's say, well-known software in the explicit code world. And in fact, Radioss is able to read this popular format. So here we are able to directly read and process the model with Radioss from, let's say, this format and this model, which is available online.

Home Directory [45:01]

Dave Godlove:

Cool. And you may have seen, too, that I experienced a little error here when I first tried to do this. It told me, oh, you're in trouble, you can't run this. And the reason was that it said it couldn't find this input key file that I gave it with the instructions for setting this problem up. The reason it couldn't is because, for whatever reason, my current working directory wasn't properly bind-mounted into Apptainer. It just tried to drop me into home instead. If you do run into that issue, you can solve it just by exporting the Apptainer bind path. This is why I had it set previously and why I almost got into trouble when I was building the container. You can set the Apptainer bind path to be the place where you currently are, where you've got your data.
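That fix, as a one-liner (APPTAINER_BINDPATH is the environment variable behind the bind path he's describing):

```
export APPTAINER_BINDPATH=$PWD   # bind the current data directory into the container
```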

Johnathan Anderson:

And I think that's typical of Apptainer runs, isn't it? That it only binds in your home directory. So if you've got something outside of it, you would have to do that explicitly, like you've done.

Dave Godlove:

It unfortunately depends a little bit on the version of Apptainer. That's something which has been a little bit buggy sometimes. It is supposed to bind also your current working directory, and under certain circumstances it does. But I think that it might choke on that if your current working directory is a long nested sub directory, it might only get to a certain depth or something.

Johnathan Anderson:

Fair enough.

Dave Godlove:

So we have zero errors. We had some warnings, but we're okay with that. And so this is basically, I guess, Eric, reading in the model and reading in the specifications of the problem, and then taking that problem with the model and splitting it up into these different files, which can be used by the eight different processes that I specified in order to set up the MPI problem.

Eric Lequiniou:

Yes, exactly. And in fact, just to add that the domain decomposition is fully automatic. We have, let's say, a heuristic to optimize the decomposition. And then the advantage is that each process can run the engine directly with its own data, the data already split, and then the simulation can be processed with MPI.

MPI Run Command [47:26]

Dave Godlove:

Cool. And so that's exactly what I'll do next. So I've got this MPI run command that I've got saved in my history. When I actually did this, I actually did it with 16 cores instead of the eight. My computer only has 20 cores in it, so I was being a little bit risky on whether or not it was going to be able to handle that okay. But it did, with no trouble. And it ran overnight. I'm only doing it with eight right now because I'm streaming video right now and I don't want my machine to get all bogged down or whatever. So I'll go ahead and start this. And like I said, when I ran this with 16 cores with this particular model, it ran overnight, so obviously with eight cores, I'm not going to be able to show you this entire demo from start to finish. But I have done it already, and I can show you the end by jumping ahead. But this is what it would look like, and it's cool. Let me open up a new terminal window.
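The engine launch he's describing might look roughly like this, again assuming the SIF-as-executable dispatch and OpenRadioss binary naming conventions; the restart file name follows the Radioss `_0001.rad` pattern:

```
# Engine: runs the actual simulation across 8 MPI ranks
mpirun -np 8 openradioss engine_linux64_gf_ompi -i 2012-toyota-camry_0001.rad
```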

Oh, let me make that bigger so y'all can see it. If I run a ps right now, I can see that I've got my MPI running, and this is one of the benefits of using Apptainer: I have this MPI running, and then underneath it (you can tell this is an older version of Apptainer because it's running this starter-suid script) is the script which spawns each one of my containers. So I actually have eight containers set up by Apptainer, which is familiar with and understands the MPI command. And it's orchestrated all the wire-up and everything within the containers by exposing them all and doing all the right things. I think Jonathan and Forrest can both talk a little bit more intelligently about what's happening under the hood here. And this is where Jonathan has really done some great work, as far as not just doing this on a single node on multiple CPUs, but doing this across nodes and doing it fully containerized, so you don't even have to have MPI installed on your system.

MPI [49:46]

Johnathan Anderson:

So ultimately an MPI starts up multiple instances of your application and then assigns each of them an integer rank that your application can use to differentiate what part of the workload it's going to be responsible for. And then they can also communicate with each other by rank. And the struggle with MPI and containerized applications in the past has been that you need to start your application somehow, usually with a runner like mpirun, like what Dave is doing here. And that runner is usually bundled as part of your MPI implementation. So there's an mpirun or an mpiexec that comes with Open MPI or with MPICH or Intel MPI or any of these. They all have their own starter.

And historically that starter has been kind of proprietary and bundled with it, and then knows how to start up its own MPI, and that's it. More recently, and I think more recently means in the last decade or so, there have been efforts to standardize that process management interface, starting out with PMI-1 and PMI-2; these are well supported in queuing systems today. And the most recent effort, aimed at exascale, is PMIx, to try and establish new ways not only to start up hundreds of thousands of processes simultaneously and performantly, but to monitor them when things go wrong with them, because it becomes a real problem. And so we're trying to base some of our containerized MPI work on these standardized interfaces, so that instead of needing an MPI runner outside your container that is conceptually coupled to the MPI implementation your application is built against inside, the only interface is that PMI standard. And a queuing system, Fuzzball or any other, that knows how to speak that protocol would not necessitate bridging that gap with a specific MPI implementation, which solves a lot of the problems that people have been having trying to do this.

It doesn't quite apply here, because without a queuing system, you don't have a PMI client to run it with. So you're just using the mpirun that comes with Open MPI.

Dave Godlove:

Right, but it does kind of apply, because at the end of the day, from a user's perspective, what this is going to mean is that you can take this container and just port it from one place to another and then just run it with your MPI commands pretty much transparently. And it's going to be able to spin up either on your laptop or on an HPC system on multiple nodes or whatever you need it to do. So a lot of times what people do is they sort of develop their containers and they run them locally like I'm doing here. This would be like the first step. And then you say, okay, now I've got something really big I want to run. And you scale that up and you bring that out to an HPC system or up to the cloud or whatever, and then run it in a bigger way.

I need to point out here that I pressed control-C; this was not like a bug in the program. It's just, I heard my fans spinning up and it said it was going to take 36 hours. And so I said, well, that's a really long demo. So instead, let me just jump over here. Now I'm on a different machine, one with a GPU in it, and I've also got the ParaView viewer installed, and I just want to show you some of the actual fruits of this labor.

Car Crash Simulation [53:35]

So you could go ahead and do something like this. These are all the different time steps within the simulation that were saved as 3D models, which can be rendered using something like ParaView, which you can then step into and look around and try to figure out what was happening at a particular point in time. So I'm going to go ahead and jump into this. It's always a surprise as to where the screen actually pops up. And it's going to go ahead and load that and then we're going to be able to actually see this car crash model and what it was doing. Okay. And so there we go.
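Getting from the solver's animation output to something ParaView can open relies on the converter Eric mentioned earlier. A hypothetical invocation (the binary and file names here are assumptions following OpenRadioss tool naming):

```
# Convert one Radioss animation state file to VTK, then open it in ParaView
anim_to_vtk_linux64_gf CAMRYA001 > camry_001.vtk
paraview camry_001.vtk
```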

So we can zoom in on it. And I haven't played extensively with this yet, so I don't really know all the stuff you can do, but it looks like this model is a very, very detailed model, I assume with lots and lots of different materials in it. This is my first time running these types of simulations. One of the cool things about them is, so as you're running them, you start getting these messages that says like this shell ruptured and basically what it's telling you is, I am modeling a crack in this particular material. I'm modeling, I don't know, maybe your oil pan just cracked or maybe your radiator just split open, or maybe this is a new crack in your windshield that I have to model and then update the material, update the positions of the material. I guess it's like tensile strength and all this kind of stuff as I'm going through. And so it's a really complex and detailed problem which is solved here.

Zane Hamilton:

I did look at this model, Dave, and it's got 1,086 parts in it. It's incredibly detailed.

Brock Taylor:

And, it's probably not detailed enough, right?

Zane Hamilton:

Probably not.

Brock Taylor:

So in car design, especially in crash design, there have been big improvements here; the engines are designed to absorb and push down in a frontal crash like this. There are other views that get rendered as well that show where the stresses and the strains occur, and you can actually see the colors and how the forces are mostly being directed around the cabin, not into or through the cabin. And it's the same way with the back. So I've had the pleasure of actually being in a crash where I was the middle car that got hit from behind and shoved into the car in front of me, and both the front and the rear of the car did an accordion, but the cabin was mostly untouched. It's amazing what you can see. And of course we've put a lot of crash test dummies out of work; their digital twins are now doing all the work. It reduces the need to physically crash the car. You're really just validating that the simulation is correct.

Eric Lequiniou:

What is important is that at the early design stage of the car, you can look at the simulation of the future behavior of the car. If you have some risks somewhere, you can change things at the early stage, because if you wait until you have a prototype to crash and you realize that you have an issue, then it's millions of dollars to change things at the last minute, you know? So if you can do it at the early stage of the design, it's very nice. And you also want a greener car as well as a safe car. A safe car means you need to be sure that you have no intrusion into the cabin of the occupant. But a green car means you also need to optimize the mass. So you need to reduce the weight of the car while you keep a car which is very safe. This is the objective of what you can do at the early stage: you can iterate, running different samples with different changes in the modeling across your cluster, running many instances of Radioss to do that. So this is the idea behind virtual simulation, to optimize your model at the early stage.

Dave Godlove:

This is a model where some of the data that is saved in the simulation gives you where the stress is. And it's very nice here to see that the stress is actually mostly in the front and then carried through the frame, and not in the part of the vehicle where the occupants are.

Brock Taylor:

I walked away from it. I was in shock and I had my camera. I could have taken a picture and I didn't. But other than the shock of being in that kind of an accident, I was very thankful for designers who use simulation.

Dave Godlove:

And once again, thank you so much to Yoshi for putting that together and making it something that I could, just a day or two ago, spin up and go right through. And thank you so much for making this an open source project that we can do this interesting stuff with and play with.

Zane Hamilton:

Absolutely.

Eric Lequiniou:

I can say that it's an incredible contribution, because this is not only containerization; it's a demonstration of the whole chain: running an input deck from another solver directly with Radioss, showing this functionality, how to set up Apptainer, how to compile the source code. The lesson about Apptainer was very insightful for me, and also, after the simulation, showing how to use the converter to be able to run ParaView afterwards. So to have a really full tool chain of open source projects is excellent. And for sure, with Open Radioss, we really want to tackle many different domains.

So you talked about material failure, things like that. This is very important. Yeah, this is a classic car, but now we have electric cars with battery packs, so it's even more complex, because you need to investigate what happens during the crash, but also post-crash, with, let's say, potential thermal runaway after the crash, a long time after the crash. So it's a very complex world, and having the ability to progress in those different types of domains is very important. And one domain is how to make the code more efficient, how to make it easier to run it, to compile it, to optimize it. And I think that Apptainer is one important piece. I think containerization is one thing that customers really ask for more and more. And being able to demonstrate how to do it very easily with Open Radioss is something very appealing that we can apply also to the commercial version and for commercial customers. Thanks a lot for doing this demo, doing this "how to" so very quickly, and for being one of our first contributors to this project. I hope to have many contributions like that in the future.

Zane Hamilton:

Thank you, Eric. And since we're up on time, I was going to have you give the last words anyway, but you did a fantastic job wrapping it up, and we really appreciate you spending the time and coming to talk to us. I appreciate all that you do. Yoshi, we really appreciate your work and effort too. Dave, thanks for putting it together. Jonathan, Forrest, Brock, good to see you again. Come back next week. Go ahead and like and subscribe, and we will see you later. Thanks for joining.

Eric Lequiniou:

Thanks a lot. Bye.