HPC by Industry: EDA (Electronic Design Automation)
Our Research Computing Roundtable will be discussing EDA (Electronic Design Automation) in the HPC industry. Our panelists bring a wealth of knowledge and are happy to answer your questions during the live stream.
Learn more about EDA (Electronic Design Automation):
Electronic Design Automation (EDA) is at the core of silicon chip design. Modeling elements like logic, thermals, and signal radiation, EDA is actually a pipeline of computationally intensive workloads needed to create a competitive and cost-effective product. The opportunity to create specialized architectures for accelerating compute kernels has fueled growth in start-ups and major silicon manufacturers alike. Access to EDA tools in the cloud has similarly opened up the opportunity to find the next disruptive product.
EDA is essentially high performance computing for high performance computing. The pipeline of workloads is executed on compute farms, yet many challenges can hamper or even stall the design process. It's important to consider and balance various elements in the EDA process:
- EDA license cost
- Simulation throughput
- Infrastructure cost
- Optimizations per workload
This week’s roundtable will discuss some of these elements.
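As a rough illustration of how these elements interact, here is a minimal sketch in Python (all numbers are hypothetical and the model deliberately simple) that estimates the cost per completed simulation when licenses, rather than hardware, are the bottleneck:

```python
# Minimal sketch of the EDA cost balance described above.
# All numbers are hypothetical; real license and infrastructure
# pricing varies widely by vendor, tool, and site.

def cost_per_simulation(license_cost_per_year: float,
                        licenses: int,
                        infra_cost_per_node_year: float,
                        nodes: int,
                        sims_per_license_day: float) -> float:
    """Annual spend divided by annual simulation throughput.

    Throughput is capped by licenses, not nodes: a common EDA
    situation where software, not hardware, is the bottleneck.
    """
    annual_spend = (license_cost_per_year * licenses
                    + infra_cost_per_node_year * nodes)
    annual_sims = sims_per_license_day * licenses * 365
    return annual_spend / annual_sims

# Example: licenses dominate the budget, so per-workload optimization
# (more sims per license-day) moves the needle more than cheaper nodes.
baseline = cost_per_simulation(100_000, 10, 15_000, 20, 4.0)
optimized = cost_per_simulation(100_000, 10, 15_000, 20, 6.8)  # ~70% faster
print(f"baseline: ${baseline:,.0f}/sim, optimized: ${optimized:,.0f}/sim")
```

The toy model makes the panel's recurring point: when license spend dominates, squeezing more simulations out of each license-day lowers the cost per result far faster than trimming infrastructure does.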
Webinar Synopsis:
Speakers:
- Zane Hamilton, Vice President of Sales Engineering, CIQ
- Brock Taylor, VP of High Performance Engineering, CIQ
- Dave Godlove, Solutions Architect, CIQ
- David Donofrio, Chief Hardware Architect, Tactical Computing Labs
- Krishna Muriki, HPC System Design Engineer, KLA Corporation
- Gregory Kurtzer, CEO, CIQ
- Glen Otero, VP of Scientific Computing, CIQ
- Fernanda Foertter, Director of Developer Relations, Voltron Data
Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.
Full Webinar Transcript:
Zane Hamilton:
Good morning, good afternoon, good evening, wherever you are. Thank you for joining us. My name is Zane Hamilton. I am the Vice President of Sales Engineering here at CIQ. For those of you who are unfamiliar with CIQ, we are a company focused on powering the next generation of software and infrastructure, leveraging the capabilities of cloud, hyperscale, and HPC. From research to the enterprise, our customers rely on us for the ultimate Rocky Linux, Warewulf, and Apptainer support escalation. We provide deep development capabilities and solutions, all delivered in the collaborative spirit of open source. In today's webinar, we are going to be talking about electronic design automation, a topic that I know very little about, so let's bring on the panel. Very nice. Welcome everyone. I am going to go around and do introductions. I am going to start at the top with Brock.
Brock Taylor:
I am Brock Taylor, Vice President of High Performance Computing and Strategic Partners. I spent a few years at both Intel and AMD and will be very dangerous on the subject of EDA today, as I lived on the software side. I have enough exposure to probably get a few points wrong, but hopefully the purists and the silicon designers out there will forgive me.
Zane Hamilton:
Dave Godlove.
Dave Godlove:
I am Dave Godlove. I also do not know a whole lot about EDA. I used to be a neuroscientist at the NIH. It was there that I became interested in high performance computing and became a staff scientist working at Biowulf, which is the intramural resource at the NIH for high performance computing. It was also there at Biowulf that I first came into contact with Greg and started talking to him about Singularity, which ultimately became Apptainer. I have been in that community for some time now, and I understand that containerization is used extensively in the EDA process these days. I guess I can maybe comment a little bit on that side of things. We will see.
Zane Hamilton:
Great. Thank you, Dave. Welcome.
Dave Donofrio:
I am Dave Donofrio. I am the Chief Hardware Architect for Tactical Computing Labs. I am also a former Intel employee. More recently, I spent about a decade at Berkeley National Labs running their computer architecture team and doing some fun HPC stuff there. I had a brief stint at Apple, which we do not need to discuss, but I have been at TCL now since 2019, and we are building a whole bunch of cool RISC-V based stuff for everything from embedded all the way up to the biggest HPC.
Zane Hamilton:
Very cool. Thank you, Dave. Krishna, welcome back.
Krishna Muriki:
Glad to be here. Hello everyone. I am Krishna Muriki. Right now I am with KLA Corporation, working as a System Design Engineer, an HPC architect. Before this, I was at Lawrence Berkeley National Lab and had a lot of overlap with Greg and David too; I was part of the team running the research infrastructure there at the lab. I have not worked in the EDA field directly, but in a user support role we supported EDA applications on the research clusters at Lawrence Berkeley Lab, and in my role here at KLA Corporation we are not directly in EDA either. We make wafer inspection devices here at KLA, for the wafers that are fabricated in the fabs in Taiwan. Once they are made, they come out of the fab and we need to inspect whether they match the design or not.
It is like a powerful microscope into which we build a lot of camera gadgetry, which I do not understand. The data gets fed into a Linux cluster; that is the part I am sure all of us here understand and can speak to. I am on the team which builds this cluster subsystem, part of that big wafer inspection tool that KLA builds. Is that EDA? I do not know. To me, when we talk about EDA, things like Ansys and COMSOL are what come to mind. We use those tools a little bit here at KLA Corporation, a lot of research scientists at Berkeley Lab used them, and we supported that application stack on the research clusters. It will be a fun discussion. Glad to be here.
Zane Hamilton:
Great, thank you, Krishna. Greg, welcome back.
Gregory Kurtzer:
Thanks. Hi, everybody. My background with EDA is, like many things, mostly on the infrastructure side. There are many EDA facilities and clusters running, obviously, Rocky Linux and CentOS over the years, but also Warewulf. We have gotten many reports of a strong desire, coming from the industry, to properly containerize a lot of the EDA applications. That has been my side of the experience, basically working to help with the application side. But, yeah, great discussion. EDA is a very interesting space within computing, and I am looking forward to jumping into it and seeing what everybody has to offer. Thank you.
Zane Hamilton:
Absolutely. Glen.
Glen Otero:
Glen Otero, VP of Scientific Computing, Genomics, AI, and Machine Learning at CIQ. I know just enough about EDA to be dangerous. I know the applications are really expensive, and when they start their spins, they are running full bore on their processors, squeezing every bit of performance out, because the licenses are so expensive. I am interested to hear what the other EDA experts here know.
Zane Hamilton:
It is interesting. Thank you. First off, Brock, I am going to ask you, can you define EDA? What is electronic design automation? It is a generic term.
What is Electronic Design Automation (EDA) [07:01]
Brock Taylor:
It is actually a great question, and there are a couple of different answers. The Hyperion Group, for instance, in their taxonomy, I think has a broader definition of EDA, which somewhat expands to infrastructure. In this context, we are literally talking about the pipeline of applications that go into silicon design, right? That is the predominant area. Krishna, I think you were talking about some of the major players. These are giant corporations; Cadence, Synopsys, and Mentor Graphics have built massive businesses because chips today are extremely complex systems. I spent part of my time at Intel as a power-on BIOS engineer. You are talking about, after years of design and simulation work, the physical processor actually showing up in a lab, and for the first time you are trying to turn it on, right?
This is a monumental effort for companies like Intel and AMD, and it is literally a years-long process. The design window closes well before that product is ever actually manufactured for the very first time. There are so many different things that can go wrong and so many different things you have to consider going into it. Coming back to the question, defining EDA really means going through all the different simulation steps you have to run over and over again to produce that piece of silicon, which, especially if it is an SoC, a system on chip, has lots of individual pieces all coming together. Everything has to be simulated. It is not just testing out logic and making sure that, at the clock speeds of the processors, your gates are all going to flip fast enough that you do not get feedback loops that send you into the ether. Just as important now are the thermals, the power consumption, how chips react and change under different workloads and different stresses, and what running hot means to the processor versus running cold.
I look at EDA classically as mainly the tooling and the process of building that silicon and the applications that are there. I will add one thing to this discussion, and David, you might respond to this as well: it is hard to get the people who design the chips to come into a forum like this and talk about it, because when I say it is a dark art, it is actually a secret art. For companies like Intel, AMD, and Nvidia, the major silicon manufacturers, this is their core bread-and-butter IP, so they are very guarded about how they can talk about it, even though you want to talk about your innovation. It is funny, because EDA is high performance computing for high performance computing, meaning you are actually using the products themselves to design the replacement products and what that silicon is going to do. A long-winded answer to what EDA is.
Zane Hamilton:
That is perfect. Thank you, Brock. David, I know you have something to say, and Krishna too. Fernanda, you just joined. If you want to introduce yourself, that would be great. Oh, yeah.
Fernanda Foertter:
I am Fernanda Foertter. I have been in HPC for a while. I used to work at Oak Ridge National Lab, which is how I got to know a lot of folks here. I used to do training; I was the training lead there at the time when we were all transitioning from CPUs to GPUs. That is how I ended up meeting lots of folks in HPC. Now I am at a startup called Voltron Data. I cannot talk too much about what we are doing here at the startup, but I can talk about the open source side of things. My role right now is Director of Developer Relations. We are helping support ecosystems that are going to help with our product eventually. That includes the Apache Arrow ecosystem, which is data transfer, and other ones that have to do with connectivity of data, data manipulation, ETL, ML preparation, et cetera.
Zane Hamilton:
That is great. Thank you. All right. Back to you, David. I know you wanted to add to what is EDA.
History Of EDA [12:07]
Dave Donofrio:
First I wanted to give a quick shout out to the presilicon validation folks who were using those EDA tools at Intel. For years, one of my main dev systems was an A-step prototype CPU that I pulled out of somebody's cube. I was always very impressed that A-step silicon works; it booted, I think, Windows Server 2000 or some very ancient thing. That is really the culmination of a ton of work, and Brock, you being in the power-on BIOS group, I have a sense of what you went through. Then there is EDA for HPC: it has classically been very compute- and time-intensive.
These algorithms, I am sure everyone here knows, are NP-complete or NP-hard. Optimizing them is really difficult; there are a lot of heuristics. I think there are a few interesting things happening. There is obviously AI and machine learning; heuristics and AI could be a nice match. Then there is also the growing open source EDA flow with OpenLane and OpenROAD, where maybe that veil of secrecy you mentioned, Brock, could start to get peeled back. Those tools are still, of course, in their infancy, but they work. People are building real chips with them. Perhaps that is a place where some interesting algorithm optimization could happen.
Zane Hamilton:
That is very interesting. Thanks, Dave.
Evolution Of Chips [13:46]
Krishna Muriki:
In terms of the relevance and importance of EDA tools, one other data point I would add, which I became familiar with recently: chips these days are not two-dimensional anymore. They are built in three dimensions. You can think of the transistors and the layouts on the chips like commercial office buildings; it is a skyscraper of transistors on the chip, not a flat plane. Designing those kinds of chips, visualizing them, and making sure we are meeting the timing deadlines is orders of magnitude more complex. These EDA tools play a really critical role. It is before the...
Zane Hamilton:
It is interesting you say that, Krishna, and it goes back to Brock's point. When I was researching this last night and looking around, someone gave an analogy of chip design being like building a 787 airliner. They had six years to design and test it. They build the airplane, they can physically touch pieces of it, they can go break wings, do all that stuff. When you are going through this process and actually building the silicon, you have to have the equivalent of a 787 come off the line, where the first flight it takes is carrying passengers, and you have to do it in 12 months, and then do it again every 12 months after.
Krishna Muriki:
Yep,
Zane Hamilton:
It was mind-blowing to me to think that you are dealing with something that complex in design and you have to do it that fast. That was very eye-opening when I started looking through this. I think we have touched on where it is used, but Krishna, you touched on one thing which kept coming up: Moore's Law, and how that curve started off very flat and then, from the 2000s on, finally started going almost straight up. Where are we today?
Moore's Law And EDA [16:06]
Krishna Muriki:
I do not know the numbers. Brock, maybe you know how many transistors we are packing these days?
Brock Taylor:
I had to catch myself, because I was thinking, how many are there? Is it billions? Is it tens of billions? Then again, there are transistors, there are gates, you know. I actually googled that question earlier in the week, and you get all kinds of answers. It used to be that Moore's Law was as much about increasing speed as about the silicon itself. Now it has definitely shifted to, call it, width of computing. It went from a single core that ran infinitely fast to tons of cores that are all running reasonably fast. What you are seeing is a lot of combinations coming down the pipeline. I am going to be a little careful because, having worked at both Intel and AMD, it is a little difficult to know what I can and cannot say at any given time.
To your point earlier, there is a line: there are things you can say, there are some things you have to hold back, and sometimes it is hard to hold back. I know AMD, for instance, has announced some ideas of now adding specialized chiplets on an SoC, not just CPU chiplets or GPU chiplets, but combinations, which are coming. I have to think that just elevates the validation nightmare to another level, right? It is a massive amount of stress for these people. Even on the validation side, we were making software for something that did not exist, but you at least have a basis. For the people actually doing that design and doing the presilicon validation before it is ever manufactured, the pressure is: if you get it wrong, everybody knows about it. Everybody in the company knows you got it wrong. If you get it right, nobody ever really hears about it. It is very tough.
Gregory Kurtzer:
I cheated, I went and looked at Google: 290 million transistors, using Intel's 65 nanometer process.
Krishna Muriki:
Yep.
Brock Taylor:
65 nano process.
Krishna Muriki:
These days we are at single-digit nanometers; 4 nanometer is the latest.
Fernanda Foertter:
65 nanometer was like 14 or 15 years ago.
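For a sense of scale, a quick back-of-the-envelope sketch (an illustration of the classic doubling cadence, not a vendor figure): doubling the 290 million transistor count quoted above every two years for 15 years lands in the tens of billions, which is roughly where the largest recent single-die chips sit.

```python
# Back-of-the-envelope Moore's Law arithmetic, starting from the
# 65 nm era figure quoted above. Illustrative only.
transistors_65nm_era = 290e6   # the count Greg found
years_elapsed = 15             # roughly the gap Fernanda mentions
doubling_period_years = 2.0    # the classic Moore's Law cadence

projected = transistors_65nm_era * 2 ** (years_elapsed / doubling_period_years)
print(f"projected transistors per chip: {projected:.2e}")  # ~5e10
```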
Yield On Wafers [18:59]
Krishna Muriki:
Transistors per wafer, I think. I said I do not remember, but as I think about it more, I did hear that the number of transistors per wafer we are measuring these days is in the trillions. Per wafer, not per chip, it is in the trillions, not in the millions anymore. It is huge.
Brock Taylor:
That is a part of the EDA process I do not know how much of is simulation, but yield on the actual wafers is a massive element that silicon manufacturers and foundries face. I have to think, and again I do not actually know that part very well at all, that it is at least a stage of the pipeline: when you get the designs, what is the projected yield, right? I do not know how much simulation there is. Again, it is a pipeline; it is a whole bunch of applications that go into designing silicon. It is not just one application, which adds a lot of complexity to what you run it on, right? They are different applications, and just as in the broad HPC space, applications have different needs and can be optimized for different parts of an architecture or different systems.
When you have got 10 different applications, the chances you have a system that is optimized for all 10 are pretty small. You are looking at decisions the companies have to make. Do you optimize for the RTL verification or validation? Do you optimize for a different part of the pipeline? Again, are you running this in-house? If you are running it in your own data center, how much of a resource do you have to commit to this? I definitely cannot comment on how much of a resource Intel or AMD dedicate to that, but I can tell you that at least 15 years ago or so, those were the primary workloads run inside their own data centers, and those data centers are literally fortresses, built to keep people out. Because again, it is their core IP, right?
It is what they built the company on, and they are very protective of that. Cloud is a really sensitive topic, I would say, when it comes to silicon, because of the security. Cloud provides a great advantage to silicon design because you can expand based on what you are willing to pay for and how much you need in a given timeframe, but you are putting your IP in a public cloud, or somewhere outside. People get really nervous about that, even though these are very secure environments. It is not their building, right? They do not control the security. There is a human nature to that.
Zane Hamilton:
I have several other questions, but I know, Fernanda, you had something you wanted to add about the chips and design.
Chip Design [22:18]
Fernanda Foertter:
I have thoughts about the future of chip design. I am terrified of chiplets. Anybody that has ever tried to do heterogeneous computing, and development for heterogeneous computing, knows how hard it is. So much of this is going to depend on how it is implemented by the chip manufacturer, the chip company, right? I have worked with two now, Nvidia and this other startup called NextSilicon. So much of that is going to go into the runtime, the actual system runtime, and you are not going to have a lot of control over where your work lands, right? You might have some control, but if you start adding these little tiny pieces, like L1 caches and L2 caches, I do not think you are going to keep control of where it lands in the future.
It is going to decide for you. Then you are playing this game of trying to coax the runtime to put it in the right place, right? Or maybe there will be a programming API or something like that, where you can say, okay, force this to go here, not there. On chiplet design in general, let's take a step back. There are so many moving parts; I was floored once I got a view into it. Because you come from the software world, you are way up here: you have the emulator, you have the actual FPGA simulator, you have the actual person doing the layout with the little clicky-clicky, and it is an entire file and it is huge. Somewhere along the way, these things get out of sync. You have to make sure that your emulators stay in sync with the person actually doing the chip design. In stories I have heard, somebody forgets to reconnect something, and you are six months out, eight months out with this order, and your entire batch is garbage now because somebody forgot to reconnect or fix something after adding a feature last minute. Now you do not have that circuit, and you have to build software on top of it to work around what happened.
Zane Hamilton:
That is terrible.
Fernanda Foertter:
It is insane that we even get anything that works. And that is the process for a regular old monolithic chip, or whatever we are going to call it, right? Imagine that with chiplets on the same package. Insane.
Brock Taylor:
You summed it up better than anybody I have ever seen in a tweet you did about a month ago on this subject. I wish I remembered exactly what you said, but you had it right there. It was just like: you thought it was hard today? Wait for what is coming. It is amazing that these things actually work. I am convinced no one person actually knows how it all goes together, except maybe for Clint. He knows. He knows all.
Zane Hamilton:
This brings up my next two questions; I will go in a slightly odd order. When I was looking at this, and Krishna, you mentioned fabs earlier, I saw that in 2020 it cost about 10 billion dollars to build a fab, and there were only three companies really doing it at the time. Of that 10 billion, they depreciated a hundred dollars every minute, so to make the fab viable you had to bring in $500 in revenue every minute, which made it a very odd business model. The entire supply chain that fed it was built around that model; all the parts were the same no matter what vendor you went to. It was a very controlled and very small industry. But with it being that expensive, to Fernanda's point, the more complicated we get, the more we are going to drive costs up, I am assuming. There are a lot of factors in what makes it expensive; it is not just the fab itself, there is the design, the people, the software. What are all of the things that we are going to have to change, or that will get more complicated, and how is that going to drive the cost up?
The Drivers For Chip Fab Costs [26:23]
Fernanda Foertter:
Michio Kaku was at SC17, I think it was. He said that in the future, chip design will be the easy part; everybody will have their own chip. I think he was right about that. He is sort of a futurist. He is out there, he has awesome hair. I am not into that part of Michio, but I think he was absolutely right about this. The design itself, anybody can basically play with some circuits and design something today. In fact, that is probably more accessible and easier now, because a lot of what you get with the design software is already packaged: you know what is coming from that specific fab and what things you can put in there; we can only do this, and this is the size, and so on.
You sort of already have that framework. I am not saying it is going to be easy, but it is definitely much easier today to create a lot of variations on these things. It is the latter part that is hard. It is trying to create some innovation on top of it, number one. Number two is trying to push the envelope, pushing the fab to create something that is lower powered, right? Power requirements today still remain pretty high; we want them to be low, and the push for low power is strong. Then, what I think makes the whole thing super expensive is the fab process itself, which, speaking as a materials scientist, is super expensive. Everything about that process is super expensive. The materials themselves are super expensive; the liquids, the chemistry, super expensive, right? I do not think we can cheapen that in any way. The design, I think we can, but that latter part I am not sure we can make any better.
Zane Hamilton:
So the more complicated you make a chip, you are not necessarily going to be able to speed up that process. Once a facility is built, it is what it is until it has depreciated and goes away and you build a new one. I was also reading that fabs are cleaner than a hospital operating room, and I am not saying operating rooms are the cleanest places on earth, but that is pretty fascinating. They have to be kept so clean that they recirculate the air in the entire facility every, I think, six seconds. It is so fascinating to me; I kept going down the rabbit hole.
Brock Taylor:
It is a part of the full process. There are many chemical engineers and biomedical engineers in huge demand, especially in Ohio right now where the new fabs are going into construction. Air quality is a constant monitoring job; they are constantly looking at it, because any impurities can sap the profitability of a fab. As well, you are talking about product lines that have to keep running for years, right? As they sell these products into industry, industries have to rely on being able to get replacement parts for years. Again, Greg brought up 65 nanometer; I am fairly certain there are still 65 nanometer products being produced, because there are consumers of them. You have to have all these different places.
That is why you have to have multiple fabs, and you are constantly building the next fab for what is coming out in three years, because it has to have a different process, right? I think the software side of it is as scary as what is going on in the hardware, because more and more, you are just not necessarily going to know every in and out of the thing. I think developers, if they are not losing sleep over it, they will be. There are going to be 80 different architectures to choose from and support, because it is becoming easier for more people to produce specialized silicon: not huge CPUs, but a small chiplet, or something that connects to a CPU and targets one kernel in graph processing, or an accelerator for one type of algorithm, right? Developers are always having to learn how to handle all this stuff. Inevitably, you are going to be relying on a library that does some magic: you literally write some code, yada, yada, yada, get the answer, right? What is going on in between? I am glad I am not developing.
Zane Hamilton:
Glen, I think you had something you wanted to add to what Fernanda was saying.
Chips Designed At The Molecular Level [31:11]
Glen Otero:
I just want to point out Brock's yada yada yada over the algorithm, and that the new name for my mouse is Little Clickity Click. Thanks, Fernanda. I serendipitously learned today, as of this morning, talking to a silicon vendor that will remain anonymous: OpenEye, a company that creates software for designing molecules, the molecular design software used by most, if not all, pharma companies to help them design drugs, was purchased by Cadence not long ago. The reasoning is that they are going to use OpenEye's molecular design software so they can now start designing chips at the molecular level.
They also want to pair it with the finite element analysis and some of the solvers that Cadence already has, to continue to improve the process. To Fernanda's point about trying to innovate and make things more cost efficient as you get into more and more complicated designs, the pressure is such that they are now pulling in drug design, or I should say molecule design, software. That is all I got for EDA. That is everything I know about EDA.
EDA as a Niche Industry [32:52]
Krishna Muriki:
I want to emphasize, Zane, what you said a few minutes back: this whole industry is still niche, without a lot of players. How many fabs are there? UMC, Samsung, Intel, Micron, the big players. I know a lot of small players are coming up in China these days, but it is so niche, and it is so critical that this very limited number of fabs in the world keeps producing chips at the rate they are producing them. We need chips for everything these days, right? Think of the number of chips that go into an automobile; the whole automobile industry got impacted because of the supply chain issues at the fabs. This is becoming so critical, and that is why the CHIPS Act came in: the government has realized this industry has become so niche and so concentrated in a few areas of the world that if those areas are politically sensitive, the whole world economy can be impacted. The other thing I wanted to stress is the impact of anything taking a production line down, even for a short period, for instance because, as Fernanda said, some design engineer forgot to connect two things.
The impact on the production line is millions of dollars. If one line at the fab goes offline for one hour, I heard the loss is something like the cost of three houses in the Midwest; that is what the fab loses when one of its pipelines goes offline for one hour. They have a lot of pipelines, and if the whole fab is out, then yeah, you easily get into millions in lost productivity. It is becoming very critical. The whole ecosystem, the tools, the inspection devices, the products they manufacture, are all very critical. This is the reason the cost is so high: there are very few players, the productivity of those players is so important, and any tools that get into that pipeline are really expensive because of it.
Zane Hamilton:
Absolutely. I saw somewhere that an average fab can spit out 50,000 wafers a day as its run rate, which is astonishing to me. David, I think you had something you wanted to add.
The Cost Curve And Moore's Law [32:52]
Dave Donofrio:
I wanted to chase the cost thread a bit. It seems like there are a couple of ways Moore's Law could go, right? There is the increasing complexity and density of chips: we are getting to molecular scale and other, maybe, acts of desperation, depending on how you look at it. But there is another curve we could latch onto, which is the cost curve, right? Can we make existing chips, which are quite powerful, cheaper? I mean, with a 65 nanometer process you can do amazing things. Look at the technology we have today; will the cost of that come down? We have started to see that in the EDA tools, and I do not think that comes as a surprise to anyone.
If you go to Synopsys or Cadence and say, I want your latest tech from five years ago, it costs a lot less than the latest tech of today, right? I think I can say that without anyone yelling at me too much. Is it possible that this cost curve will be the next big revolution? Maybe it could combine with the trends we are seeing in a lot of specialized devices throughout computing, heterogeneous computing certainly being one that has been mentioned, to create a new revolution in computing.
Zane Hamilton:
Thank you, David. That brings up another interesting point I want to ask about. When you start looking at things that exist today, how can we optimize them or make them better? That really brings to mind: how does HPC actually play in EDA? I think we touched on it, but what does that look like?
How Does HPC Play In EDA? [37:34]
Dave Donofrio:
I will start, because there is a great example that, again, leads to that phrase HPC for HPC, and that is AMD's Milan-X product. Krishna talked about 3D die stacking. AMD's first introduction was Milan-X, which launched, I guess, close to a year and a half ago. They took the standard Milan, the EPYC 7003 series processor, and on each chiplet they took the L3 cache and stacked three of them on top of each other, right? They literally created that processor, and one of the primary targets of Milan-X was EDA: a high-end server built to run, very specifically, high performance computing workloads in a few areas, targeting an industry that builds silicon.
I think Mark Papermaster at AMD just did the keynote at DAC back in July. Prior to that, leading up to the Milan-X launch, I think AMD went to Hot Chips and talked about the design process. They were very open about the fact that they are fellow travelers in EDA, using EDA to produce chips for people to do EDA. It is a really interesting story. With the die stacking, part of this was really modeling the thermals as you put the chips under workload, and how you keep that chip cool. Then it comes to how you make that cost effective, because cache, high bandwidth memory, it is all varying layers of complexity and cost.
The bigger those caches, the more expensive; it is not even a linear cost, it climbs very steeply. The high performance computing, again, is in the various phases of the EDA pipeline themselves, computationally intensive workloads; I believe RTL, register transfer level, or logic validation is a big chunk of that. The Milan-X product from AMD showed a massive jump, something like a 66 to 70% performance improvement. You are talking about either a much shorter time to run the same amount of simulation, or a whole lot more simulation in the same given time. Either one of those scenarios can be very beneficial, but it is one part of the pipeline; different phases get different benefits out of the silicon. You have to understand a whole lot.
That gets right into what HPC is. It is a massive number of complex pieces, hardware and software, coming together. It is understanding how the workloads need to land on the solution that is best for them. It is also the people developing the applications having to handle all these different nuanced architectures. It is an ecosystem problem that requires an ecosystem of partners; it is not a one-stop shop that can give you everything. I think it is a combination of cloud and the big players. A lot of people are able to design pieces of silicon, as Fernanda's tweet points out. There are going to be massive layers of complexity for developers, but at some point you are going to see these things come together, and magic will happen, and it is going to work.
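To make that 66 to 70% figure concrete, here is a quick sketch of the throughput arithmetic (the job time is a made-up illustration, not AMD benchmark data):

```python
# What a ~66% per-job speedup means for a license-bound queue.
# baseline_hours is hypothetical; only the ratio matters.
baseline_hours = 10.0   # time per verification job before
speedup = 1.66          # lower bound of the quoted improvement

faster_hours = baseline_hours / speedup
print(f"per job: {baseline_hours:.1f}h -> {faster_hours:.1f}h")
print(f"jobs per day per license: {24/baseline_hours:.1f} -> {24/faster_hours:.1f}")
# Either the same regression suite finishes sooner, or ~66% more
# simulations land in the same window -- the two scenarios described above.
```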
Zane Hamilton:
Thank you. Krishna?
Krishna Muriki:
It is very clear, obviously, that the computational complexity is there and HPC is needed. I wanted to extend that one more level and say the EDA field is not only attracted to HPC, it is also heavily attracted to using containers. Singularity was a big hit in this field. The primary reason, from what I observed: the fabs and the EDA tool manufacturers, being so critical, are not very agile. They do not adopt the latest and greatest ways of building software and building infrastructure. When I had to support and install these applications on a modern Linux cluster running Red Hat or CentOS, it was a nightmare. It was not easy. Who builds software in 32-bit still? They still use a lot of libraries which are 32-bit and statically compiled.
There is no source code available; I just have to get the binary and make the binaries run in a 64-bit environment. It is not easy. If I go to the system administrator and ask him to install this package and its old libraries, he would look me up and down and say, get outta here. The way this software is built can still use a lot of improvement. I am sure the priorities are in a different place, where they need to ship these products quickly to meet the next innovation happening in the chip industry. Maybe their priorities are different, but that brings challenges for people like us who are trying to support these applications, and containers came in as a savior there. If I need to make my own container with my own OS, with all the packages I need for a particular EDA tool, it is easy for me to do that and take it onto a shared HPC cluster where the admin will not let me even touch the pristine system image. Oh yes, I can bring my own OS with all the packages that I wanted: freedom, flexibility, and getting things done. That attracted this user community to containers very early on when Singularity came out. I wanted to express that point. The adoption of HPC is obvious, and it is also the reason containers were adopted by EDA.
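As a minimal sketch of the bring-your-own-OS pattern Krishna describes, here is one way a legacy 32-bit EDA binary might be wrapped with Apptainer. The base image, the 32-bit package list, the license-server address, and the tool name are all hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Build and run a container around a legacy EDA binary with Apptainer.

A sketch, assuming `apptainer` is installed and the (hypothetical)
tool binary exists inside the image.
"""
import subprocess
import textwrap

# Hypothetical definition file: a familiar base OS plus the 32-bit
# compatibility libraries a statically linked legacy binary expects.
DEF_FILE = textwrap.dedent("""\
    Bootstrap: docker
    From: rockylinux:8

    %post
        # 32-bit runtime libraries (illustrative list)
        dnf install -y glibc.i686 libstdc++.i686 zlib.i686

    %environment
        # FlexLM-style license pointer; placeholder address
        export LM_LICENSE_FILE=27000@license-server.example.com
""")

def build_and_run(tool: str) -> None:
    with open("eda-legacy.def", "w") as f:
        f.write(DEF_FILE)
    # Build once (--fakeroot avoids needing root), then run anywhere
    subprocess.run(["apptainer", "build", "--fakeroot",
                    "eda-legacy.sif", "eda-legacy.def"], check=True)
    subprocess.run(["apptainer", "exec", "eda-legacy.sif", tool], check=True)

if __name__ == "__main__":
    build_and_run("my_eda_tool")  # hypothetical binary name
```

The point is the one Krishna makes: the container carries its own OS and libraries, so the shared cluster's pristine host image never has to change.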
Zane Hamilton:
Absolutely, Greg.
The Cost of Software vs Hardware [44:44]
Gregory Kurtzer:
This discussion has been enlightening to me. I have worked with EDA tangentially, helping on the infrastructure side, and I have heard it said that EDA is one of the few areas inside of computing in which you typically and commonly spend more on software than you do on hardware; as a matter of fact, the number I got was about five times as much on software. There is a huge amount of legacy built into some of this, and that legacy is required to some extent, because things work and nobody wants to change them. This is such a high risk market in terms of the amount of capital going in and out of it; you do not want to risk change, you do not want to risk doing something that may break things.
It is more important to make sure you have something that works, which goes to Krishna's point about containers. In conversations I have had with people doing EDA work, there has been high interest in containers; I can tell you, a hundred percent, most of the consumers and providers have reached out to the vendors of EDA software asking for containers. Some of the reasoning I have heard back usually has to do with the vendor also not wanting to change, or being more concerned about it somehow circumventing licensing; we have to properly do license management and facilitate it through the container to make sure they feel comfortable with that. But also, to Krishna's point, think how much this would help everything if all of these applications were containerized and the vendors could provide a basic standard.
This is the container that is going to work for this; this is the container that is going to work for that. Then you can do so much with it, whether you are using Singularity or Apptainer, whether you are using a Kubernetes environment, whatever you are using. You now have a platform where you can take these applications and easily create building blocks that you can further extend. The EDA market is a really interesting one within computing. Again, I appreciate the discussion, because I learned a lot about why it is that way, things I did not realize. Thank you.
Zane Hamilton:
Greg, on your point: licensing is always going to be an issue when you talk about software, and it is always a difficult problem to solve. If we start looking at a better way of licensing software, or paying for it, I feel like that is something people have not done a lot of. It has typically stayed in that legacy model of, I have X number of things. I think there is an opportunity there. Maybe I am wrong.
Gregory Kurtzer:
Especially as we start thinking more about cloud native computing environments. On a more traditional high performance computing system of static clusters and static nodes, you can manage licenses; we know how to do that. We do not really know how to do that in a cloud native way, especially a federated cloud native way. That is going to need some research and development, and then some, well, some legacy to give people confidence in how it works and how it operates. It will just take time.
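As a minimal sketch of the part we do know how to do, here is license-aware job gating on a static cluster: cap concurrency at the seats you own, no matter how many nodes are free. The seat count and command are hypothetical, and a real deployment would query the vendor's license server (FlexLM and the like) rather than a local counter; the federated, cloud native version Greg describes is exactly what this sketch does not solve.

```python
# Gate job launches on license seats, not on available nodes.
# Seat count and command are placeholders.
import subprocess
import threading

LICENSE_SEATS = 8  # hypothetical entitlement for one tool
_seats = threading.BoundedSemaphore(LICENSE_SEATS)

def run_licensed_job(cmd: list[str]) -> int:
    """Block until a seat frees up, then run the tool."""
    with _seats:  # acquire a seat; released automatically on exit
        return subprocess.run(cmd).returncode

# e.g., from a pool of worker threads dispatching simulations:
# run_licensed_job(["my_simulator", "design_block.v"])  # hypothetical
```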
Compliance Issues In Terms of Public Cloud Adoption [48:17]
Zane Hamilton:
I believe we have some questions from the audience. Art would like to know more about compliance issues around IP in terms of public cloud adoption by large designers like Samsung and TSMC. Maybe that is too tangential or too big to address, but anybody who wants to can take it.
Brock Taylor:
I am not sure about compliance in the sense of what aerospace or automotive have to face. In general, I think there is an element of confidence that what you are running in the cloud is the same as what is known to run on premise, and I think that relates back to what Greg and Krishna were saying about containers. Think of the hypothetical where a silicon issue is found during validation and the fab line goes down or is halted; throw a number on it, and we are probably too low, but think millions a day that you are losing while that fab is down. Then you have to turn around, find a fix, and validate that fix. If you are on premise, you have fixed resources. This is a great case of, hey, we need to do the same amount of validation, but we have days to do it in, not weeks.
Turning around and going to cloud, you can scale out to much higher dimensions, but you also do not want to lose any time spinning up the environment. Having something containerized that moves from place to place fits well, I think, with what I would call compliance for the contract with the actual designers. I am not sure there are compliance issues around the IP in most cases, unless you are getting into silicon that is mission critical, something that is life or death. That is definitely outside my purview.
Zane Hamilton:
Fernanda, I think you had something you wanted to add.
Fernanda Foertter:
I interpreted that question as having to do with export control and some of the IP that is generated in the US. We have seen in the news how much we are trying to keep that kind of technology from making it to China or to other tier three or tier four nations that the US considers hostile. If these folks are coming from Taiwan and they are a global company, right? How do you keep that IP here? Where is it being generated? Those questions are much muddier today. I think ultimately the government will have to be happy with limiting the export of the actual technology itself. It is going to be impossible for us to keep it out of the hands of players like China, because China is a big country: they have the raw materials, they have the supply, they have the smarts, they have an extremely large pool of very smart people who can recreate all of this. It is going to be nearly impossible; if anything, it is probably going to accelerate their excellence and their ability to create their own chips. We have seen that already.
Zane Hamilton:
That is great. Thank you.
Krishna Muriki:
At least in the case of fabs, I have noticed they are really tightly controlled. As a hardware vendor, I notice that any drive that goes in will never come out; they have their own way of destroying it. If we have to rebuild a RAID storage system, and for the rebuild operation we need to hold the data from the live file system temporarily, we take some drives or a storage server chassis in as a temporary backup of the data while the rebuild is happening. We cannot get those spares back out; they have to die inside the fab. The controls are that strong. They would not be looking at using cloud any time soon; I think all the compute that needs to happen stays on premise within the fab. EDA will be slightly different: some EDA companies will have a presence in the cloud, and then all these questions about export control apply.
Gregory Kurtzer:
There is a bigger side of this as well, which is the stigma associated with running on other people's hardware and other people's resources. Cloud providers have done a tremendous job in terms of managing compliance and doing a lot to validate it. But there is an emotional side, which may not be a technical piece, where people do not want to put their most valuable data, algorithms, and components of what they are working on onto resources that other people may be sitting on as well; that control of security is out of their hands. This is much bigger than EDA, but I think the cloud vendors have done a really good job managing it. It is just going to take time for people to get over the emotional concern about running on other people's resources.
Dave Donofrio:
Thank you, Greg. Is there another question?
FPGAs and Altera and Xilinx [54:18]
Brock Taylor:
There is one I see: somebody asked about the future of FPGAs, with two of the major players, Altera and Xilinx, having been acquired by Intel and AMD. I do not think you are going to see support of that ecosystem start to dwindle; I think you may actually see it increase, it is just taking time for that to happen. What I see is that FPGAs are really well positioned, and will stay there, for a lot of exploratory work that can be put into play very quickly. I will be a little careful again with what I can and cannot say, but expect that FPGAs are going to sit closer and closer to CPUs and accelerators. They are going to become a more integral part, and you are going to see, I think, more people employing FPGA elements to augment existing compute resources, as well as just prototyping. You see a lot of FPGAs in prototypes. It still may take a couple of years for that to really start to get mainstream, but you will see more of it, not less.
Dave Donofrio:
Thank you, Brock. Anybody have anything to add to that?
There is certainly hope. The FPGA tools, like every EDA tool, have been the consternation of most people who try to use them. Now that Altera and Xilinx have been swallowed into these large organizations, perhaps the tools can improve: as more people become end users within the company, they may start to demand better tools from the people who develop them internally. That is one way they could potentially improve. I am hopeful that the FPGA ecosystem will continue and improve. Certainly there are the large prototyping systems from Cadence that utilize FPGAs, and there are open source things like FireSim that run on AWS today.
There are a number of large efforts to start using FireSim in a broader sense in some upcoming IARPA programs; they are going to be using FireSim rather heavily, or that is the plan. It will be interesting to see whether some of those solutions can combine the scalability of FPGAs in the cloud with some of the advanced hardware generation and higher-abstraction-level ways to test out new architectures and new designs. There are interesting things happening there.
Zane Hamilton:
Very interesting. Thank you, David. Fernanda, I think you had something you wanted to add about tools.
New Tools [57:33]
Fernanda Foertter:
On the tools front, I just heard that now, seven years after Intel bought Altera, they are rolling out some tools, or at least making oneAPI part of what you use to program the FPGAs. I just do not see the tools really being prioritized for this. It is also a much smaller market: I think Nvidia is at something like 10 billion in revenue, and Xilinx and Altera at something like 1.5 to 2. FPGAs are not the acquirers' primary revenue source and still remain niche. It may be that they acquired these companies just for the talent, because of chiplets, because they want somebody who knows how to design these things and knows this kind of circuit work, or maybe because they can build the simulators. Maybe they got them because they know the complexity of their chips is going to go up, and not necessarily because they want to create newer, better FPGAs.
Dave Donofrio:
Certainly FPGAs are niche, EDA is niche, and the use of FPGAs in EDA is an even smaller niche. Most of the revenue is in signal processing, cell towers, those other types of uses; the way we want to use them, for presilicon validation and architecture exploration, is not really on their roadmap, and that shows up in the tool flow. If you want to make one design and push it out, it does not really matter if it takes a couple of days to synthesize; but if you are trying to iterate on your design, those long synthesis times become a real pain. To, I guess now, AMD's credit, back when they were still Xilinx, there were some open source flows produced by Xilinx to address some of those pain points, like the RapidWright flow they developed, which is quite helpful. But I agree that the movement on the tools is always too slow. It is always too slow.
Zane Hamilton:
Thank you, David. We are actually up on time; that went very quickly. I feel like we just started scratching the surface, and we are already out of time. I will do what I usually do: go around and give everybody one last comment. I am going to start with Fernanda.
Fernanda Foertter:
I do not have any comments.
Zane Hamilton:
I will come back to you at the end. Maybe somebody will have said something that you want to take off on.
Zane Hamilton:
Greg.
Gregory Kurtzer:
I would love to see EDA tools move toward being easier to deal with. I am probably stealing Krishna's punchline here, but I would love to see them be easier to deal with and a little bit more modern. It has gotten, and again this is what I have heard from friends of friends, so bad in some cases that one part of your workflow needs to run on SUSE and another part needs to run on RHEL, and not just RHEL, but RHEL 6.7. Then you have to pull all of these together. If you are not using containers, that is actually a very difficult thing to do. If you are using containers, you still have to come up with a way to pipeline these things together in a cohesive way; if only there were a cloud native computing platform to do that. That is where I think we need to be thinking, and not just for EDA, it is across the entire industry. I would definitely like to see the barriers keep coming down in terms of people being able to make effective use of these tools and applications. In EDA, we are all beholden to the vendors, for good reason, but at the same time I would love to see these vendors take a more active approach to this.
Zane Hamilton:
That is great. Thank you, Greg. Krishna?
Krishna Muriki:
I think I have already pulled everything I know about EDA out of my head. Great discussion. I enjoyed it.
Zane Hamilton:
Thank you, Krishna. David?
Dave Donofrio:
I will say that I am excited to watch the open source EDA community start to grow up, not only for accessibility, but because there is a whole Slack channel with hundreds, maybe even over a thousand, people who are all using the flow, and we can ask questions. That alone is an amazing step forward. You do not have to dance around: do you have this version from Cadence, and can I even talk to you, since we do not have NDAs? All this open source hardware and open source EDA is a very exciting development.
Zane Hamilton:
That is very cool. Thank you, David. Mr. Godlove?
Dave Godlove:
I do not think I have commented at all. I have been really happy to sit in on this conversation, to be a fly on the wall and listen. I am really interested, obviously, in the talk I have heard about containers and containerizing EDA applications. We talked a little bit about containers in the context of moving these workflows out to the cloud, but I think it is also useful to think about how awesome it is that you can take the compute, through containers, and move it to wherever you need it, even if you are running these workflows internally. That can also help a great deal, and I am sure it is helping a great deal within EDA. I have been waiting for an opportunity to make a comment about The Hitchhiker's Guide to the Galaxy and Douglas Adams, and to talk about EDA in the context of the world being designed by Deep Thought to ask the question whose answer is 42, but I have not had the opportunity to drag the conversation down to my level, so I will just go ahead and pass at this point.
Zane Hamilton:
Thank you, Dave. Brock?
Brock Taylor:
Well, so long EDA, and thanks for all the fish, Dave. To your question earlier about how EDA is HPC, I think a great indicator is to look at Ansys, which I think comes more traditionally from the manufacturing side, and they have applications that do EDA; meanwhile Cadence, over the past couple of years, has branched out and acquired companies for CFD. You see the crossovers, where these massive conglomerates of ISVs are all accumulating apps across all of the HPC domains. I think you will see more of that cross effort. It just says EDA is HPC, and HPC relies on EDA.
Zane Hamilton:
That is great. Thank you to our panel, and thank you to the audience for joining us today. I hope you enjoyed it. Thank you, Joshua, it is good to see you. We will see you all next week. Go ahead and like and subscribe; we appreciate you joining. Thank you.