CIQ

Beyond the Breach: First-Hand Accounts of Hacker Attacks

March 2, 2023

Our Research Computing Roundtable will be discussing real-life hacker attacks. Our panelists bring a wealth of knowledge and are happy to answer your questions during the live stream.

Webinar Synopsis:

Speakers:

  • Zane Hamilton, VP of Solutions Engineering, CIQ

  • Gregory Kurtzer, CEO, CIQ

  • Jonathon Anderson, Solutions Architect, CIQ

  • Gary Jung, HPC General Manager, LBNL, UC Berkeley

  • Alan Sill, Managing Director, Texas Tech/NSF CAC


Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors.

Full Webinar Transcript:

Zane Hamilton:

Good morning, good afternoon, and good evening, wherever you are. Thank you for joining us for another CIQ webinar. My name is Zane Hamilton and I work for CIQ. At CIQ, we're focused on powering the next generation of software infrastructure, leveraging the capabilities of cloud, hyperscale, and HPC. From research to the enterprise, our customers rely on us for the ultimate Rocky Linux, Warewulf, and Apptainer support escalation. We provide deep development capabilities and solutions, all delivered in the collaborative spirit of open source. This week we have a really interesting topic: we are going to be talking about hackers, hacking, and experiences with hackers. It's more of a storytelling time, so we're going to bring everyone in and let them share their experiences. Welcome Gary, Greg. Good to see everyone. I'm a little excited, and a little concerned, that we actually have people who have experienced being hacked, or at least having someone attempt to hack them. So I'm going to ask everyone to introduce…

Gregory Kurtzer:

Wait, Zane, I thought this was a webinar to talk about us hacking into other systems. Wait, hold on. What?

Zane Hamilton:

Do you really want to talk about it?

Origins Of The Word Hacker [6:18]

Alan Sill:

That one? Can we do a rare thing and talk about the origins of the word hacker?

Zane Hamilton:

Sure.

Alan Sill:

I mean, people have various opinions about this, but I think most stories trace back to the needlessly clever tricks played by MIT undergraduates on the school and each other. The most famous, I think, was the giant Mickey Mouse ears they attached to the MIT dome, which took a couple of days to get off, or the sign on the side of the computer lab that replaced the terms of use with "you must be this smart to use these computers." They had a chart showing the average Harvard undergraduate, the average MIT undergraduate, and so forth, with an arrow, in the spirit of "you must be this tall to ride."

Gary Jung:

Didn't they put a car on a roof too?

Alan Sill:

Yeah, stuff like that. So a hack was a show-off exercise, and ideally no one got hurt, and you demonstrated some level of technical expertise. Again, just showing off. That evolved into exercises with the shared computing systems, in which the goal was to take control away from the operator for as long as possible without the operator detecting it. And this is how the term hacker entered the computing lexicon. So hacking, as Greg says, is not necessarily a bad thing; it's just that the bad sense has become the first definition. We still have this characteristic that even nefarious hacking requires extreme levels of cleverness, both to carry out and, to the point of this webinar, to detect and avoid. I just wanted to do a level set there, because this gets skipped.

Zane Hamilton:

Thank you. And I do have some stories around that too. I feel like it used to be easier to do this stuff, especially in the university environment. The dorms in particular were a very ripe place for playing pranks on people. And sometimes maybe that spilled over into lab environments and professors. I never personally was involved in any of that, ever. I could, of course, go on with stories, but, Gary, why don't you introduce yourself?

Introductions Of The Panel [9:07]

Gary Jung:

My name's Gary Jung. I'm the scientific computing group lead. I manage the institutional HPC for Berkeley Lab, and I also run the HPC program for UC Berkeley.

Zane Hamilton:

Thank you, Gary. Jonathan, welcome.

Jonathon Anderson:

Hey, Zane.

Zane Hamilton:

Taking notes already?

Jonathon Anderson:

What? No, I'm taking notes on what I want to say. I'm going to forget my stories.

Zane Hamilton:

Excellent.

Jonathon Anderson:

I'm with the solutions architect team here at CIQ, and I have a background in academic high performance computing.

Zane Hamilton:

Thank you. Alan, if you don't mind introducing yourself again.

Alan Sill:

Oh, sorry, I jumped right ahead. So, I’m Alan Sill, I'm from the High Performance Computing center at Texas Tech, and I am also a co-director of a multi-university industry cooperative research center in Cloud and Autonomic computing.

Zane Hamilton:

Thank you, Al. Greg.

Gregory Kurtzer:

Hi, everybody. I am at CIQ, part of Rocky Linux, and I worked with Gary at Berkeley Lab and UC Berkeley for a very long time. We had a lot of, well, a few things, I'll say, that I think we can talk about and share. Unfortunately, though, I can only stay for about 15 minutes or so. So, I'll talk fast when it's my turn.

Zane Hamilton:

Well, since you have to go and you only have 15 minutes, I guess we'll start with you. I was going to start with Gary, but I'm going to switch gears and just start with you, Greg.

Gregory Kurtzer:

Well, there are a couple of stories. Gary, I hope I'm not stealing any of the stories you wanted to share. But there are a couple of stories that really stood out for me during my tenure at Berkeley and LBNL. The first one was about one of the first big clusters that we made, a big first for us at the time. It was a system that we built, I believe, for the chemistry group, but Gary can correct me if I'm misremembering that.

Gary Jung:

No, that's it.

Security At Berkeley Lab With One Time Passwords [11:05]

Gregory Kurtzer:

We built up a system. This was pre-InfiniBand, just to set the tone of how long ago this was. We built up the system, we had it running, and everything was running great. There was no problem; we had a bunch of users running on the system. Then all of a sudden we learned from the security team that it had been hacked and it was taken off the network. Now, our security team at Berkeley Lab was actually quite advanced. Berkeley Lab's infrastructure is such that pretty much any time you plug a system into the Ethernet, you're on the internet, not just the local network. Every system has a live IP on the internet.

And so that creates a number of challenges for a national laboratory trying to remain secure in a situation like that. So we have a very good, very active security group at Berkeley Lab. They wrote their own software: Vern Paxson originally wrote Bro there, and it now lives outside the lab as well, with a company behind it. It's very advanced. It can monitor and spot an intrusion as it's happening and then communicate with the routers and the switches to drop the machine off the network automatically, the moment it happens. So we came to find out that our system was dropped off the network due to a security issue. And it escalated fairly quickly, to the point where the security team was talking with, I'll just call them powers-that-be agencies, about this system.

We couldn't even touch it. Don't touch the hard drives, don't shut anything down; we want as close to a live snapshot of this system as we can get. I'm not going to disclose who was breaking into it or what they were doing. But this changed our security posture within our group, because the mechanism that they used to get in was valid stolen credentials. Through those valid stolen credentials, people can come in and SSH into these systems, and from a monitoring perspective you never know; it's not a hack, it's a legitimate SSH connection, in many cases coming from legitimate systems. It's very difficult to recognize that. So we did a couple of things to change our security posture. The first one was that we used one-time password tokens.

And so everybody would get a prompt, and it was bidirectional: you get the prompt, you plug it in, and it gives you your password. That was incredibly useful for blocking hackers, because it added a couple of layers to that onion, and it made it very difficult for a hacker to just grab credentials from another system which they'd already owned and then use those live, good credentials. So that actually kept us safe for quite a while, while other HPC centers were getting hacked into. In some cases they were shutting down due to zero-day vulnerabilities and didn't want to let any users in, so the whole system would be offline for a little while while they tried to fix a zero-day.

We actually kept our systems up because of just using those one-time passwords. And so I don't want to say we got cocky, but we definitely got comfortable with the security posture and how we were doing things until we weren't.

Security With TTY Injections [14:53]

Gregory Kurtzer:

And this is the second hack that I'm going to talk about, because a lot of times it takes thinking out of the box. Legitimate, good-guy system administrators are not usually thinking about how to break into systems, right? We're usually thinking about how to protect them. At least I never really put myself in the mindset of a hacker really trying to get in with an agenda. And one of our systems got broken into, and they were using the one-time passwords. So the question now is, how the heck did they get in?

And this was one of the most fun things I remember doing with regard to security in the group: all of a sudden we were asked, okay, how would you break into this system? Let's see if we can. Let's see if we can identify methods to break into the system. We came up with two. One of them, which I came up with jointly with a couple of other people, was this notion of TTY injection. TTY injection is basically this: imagine that you're sitting at your system in an SSH prompt and you're logged into maybe three different systems, right? You log into one system, you bounce from there to another system, and you bounce from there to another system.

Imagine if somebody could type on your keyboard without you seeing it; that's what TTY injection basically is. If they get access to any of those systems anywhere in that chain, they have the ability to inject keystrokes into your TTY. Now, those keystrokes are semi-blind: they can't see what they're typing, and they can't see the result of what the outcome was. So you can imagine somebody sitting there in a vi or Emacs session and all of a sudden a whole bunch of shell code pops up on their screen, like, what just happened? That's a possible outcome. But if the victim is sitting at an idle Bash prompt or shell prompt, that might actually do something. That was one idea that we had. The other idea was using SSH in a malicious way, using sessions.
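
For readers who want to see what the mechanism looks like, here is a minimal sketch (not the code Greg mentions) of keystroke injection using the Linux TIOCSTI ioctl, which pushes bytes into a terminal's input queue as if they had been typed. It only works against a TTY you can already open, either your own controlling terminal or, with root privileges, any TTY on that host, and recent kernels can restrict or disable TIOCSTI entirely:

```python
import fcntl
import sys
import termios

def inject(tty_path: str, text: str) -> None:
    """Push characters into a TTY's input queue via TIOCSTI.

    Requires permission to open the TTY and, on Linux, either that it is
    your own controlling terminal or CAP_SYS_ADMIN (root). Kernels 6.2+
    can disable this ioctl altogether (dev.tty.legacy_tiocsti=0).
    """
    with open(tty_path, "w") as tty:
        for ch in text:
            # TIOCSTI takes one byte at a time; the shell on that TTY sees it as typed input.
            fcntl.ioctl(tty, termios.TIOCSTI, ch.encode())

if __name__ == "__main__":
    # Hypothetical usage: python3 tiocsti_demo.py /dev/pts/3 'echo injected'
    inject(sys.argv[1], sys.argv[2] + "\n")
```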

I don't know if anyone has ever heard of or played with sessions in SSH, but you can multiplex SSH sessions with a simple configuration change in the user's ~/.ssh/config file. You can specify that multiple sessions share a Unix domain socket somewhere in the local file system. Then you can just use SSH to connect through that local domain socket, and you're right back at the endpoint without having had to do any authentication at all, one-time password or not. So we figured this was another potential avenue. We were sitting on both of these and we couldn't identify which one it was, but we did everything we could to secure against both. Well, the latter is much easier to secure against than the former; the former is very difficult to secure against because you can't really control it.
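
The multiplexing feature Greg is describing is OpenSSH connection sharing. A minimal sketch of the kind of entry involved in a user's ~/.ssh/config (the host name and socket path are illustrative): once a master connection exists, any later ssh to that host rides the existing Unix domain socket and performs no new authentication.

```
# ~/.ssh/config -- illustrative example of OpenSSH connection multiplexing
Host login.example.org
    ControlMaster auto                      # share one authenticated connection
    ControlPath ~/.ssh/sockets/%r@%h:%p     # Unix domain socket for the shared session
    ControlPersist 10m                      # keep the master open after the first session exits
```

Anyone who can connect to that socket as you on the client machine gets a session on the far end with no password or OTP prompt, so the socket directory needs to be private and the feature deserves the same scrutiny as an unencrypted private key.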

I can't control a TTY that somebody else is SSHing in from. So that was where we landed; we did the best we could to secure as much as we could. And then a forensics report came in on the system, so we knew where it was coming from. It turns out, and this was really cool, they did use TTY injection, and we actually got the source code for that TTY injection because they left it there. I have it stashed away somewhere still to this day, the source code for the hack to do TTY injection, because I thought it was so cool. You can probably find it online at this point. But that one's very hard to protect against. I hope I'm not giving anyone ideas here, but it's a very hard thing to protect against.

The way that we decided to start protecting against it was to instrument our own SSH. We hooked our SSH servers into Bro, so Bro, as an intrusion detection system, could actually have visibility into what's going on within the SSH sessions, which normally, as you know, you can't see. As a result, we were able to catch other people doing TTY injection by matching against malicious-looking scripts. You can identify that in a number of different ways, but this was really the counter to something like TTY injection. It's a hard one to solve. And I'm bringing it up here because I want people to know about this. I've been pushing this notion that people can break in through existing connections that you have and do things that you may not have thought were possible.

And I want all the system administrators out there to know that this is real. People can do that. So you have to be thinking even a little bit out of the box about how you protect your systems, beyond the standard ways most people think about protecting them. Of course, always update and run SELinux. Oh, and in HPC, how many of us disable SELinux? I think most people do. I don't anymore; I'll just leave it at that. SELinux can be very useful for dissuading a number of these sorts of attacks. Containers can also be useful, but in HPC most users are not always running in containers.
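
For anyone who has been reflexively disabling SELinux on cluster nodes, a quick sketch of checking and re-enabling it on a Rocky/RHEL-family system (file and tool names as commonly shipped):

```
getenforce                        # Enforcing, Permissive, or Disabled
sudo setenforce 1                 # switch to enforcing until the next reboot
sudo vi /etc/selinux/config       # set SELINUX=enforcing to make it persistent
sudo ausearch -m AVC -ts recent   # review recent denials before enforcing on a busy node
# If SELinux was fully disabled, a filesystem relabel (touch /.autorelabel) and a reboot
# are needed before enforcing will work cleanly.
```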

Social Engineering Hacking Devices [20:42]

Zane Hamilton:

Greg, on the TTY injection thing, one of the things that I have seen before, and this was the early two thousands, is a lot of social engineering. I know people who were able to janitor their way into data centers and start hooking up USB-powered devices that would sit between the keyboard and the machine. They also had Bluetooth and Wi-Fi enabled, so you could sit outside the building, connect to this thing, and watch what was being typed, but you could also lock the keyboard, take over, and start injecting whatever you wanted. It used to be really scary. I think now there are ways around that, and people are a little better about knowing what's going on, but when you talk about the social engineering side of this, it can get pretty interesting pretty fast with the devices you can make at home.

RootKits Used For Hacking [21:30]

Gregory Kurtzer:

So yeah, you're exactly right. There's one other thing. I'm sorry, Alan, I think I interrupted you. There's one other thing I wanted to mention real quick on this. Once we figured out that the system was hacked, that first system that I mentioned, we found what's called a rootkit on there. Rootkits are really interesting because it's like living in the Matrix: you can bend the rules of the universe in such a way that you do a ps and you don't see the processes running, you do an ls on the files and everything looks funky, like it's not really there. And what was really funny at some point is, while working with the security team, I grabbed some of the files off of that system, tarred them up, compressed them into a tar.gz, sent them over to my system, and emailed them to somebody else so they could see what those files were doing.

So for example, ps and init: I took the modified ps and init binaries, which you can only tell are modified once you've booted from a separate medium or compared against a backup copy or something. I tarred them up with some other files and emailed them to someone. Then I got a notification back in my email saying that my attachment had a known virus in it, a software virus. Now Gary, I think you mentioned you thought it was a Windows virus. If I'm remembering correctly, I think it was a Linux virus.

Gary Jung:

Oh, I might have misremembered.

Gregory Kurtzer:

Linux viruses are incredibly rare, but this had a known Linux virus. So it was not only hacked; the hacker also had a Linux virus when he compiled this binary and put it on this system. At least that's what we're assuming. The ps binary actually had an ELF virus in it. And we found out about it because I emailed it to someone else. I don't remember what antivirus we were using at Berkeley Lab at the time, but that antivirus earned some credibility with me, because it not only decompressed and untarred the attachment, it validated that there was a virus in there and then blocked me from sending it.

Zane Hamilton:

Nice. Alan, you were going to say something?

Stolen Credentials Used For Hacking [23:58]

Alan Sill:

I was going to say, I put it in the chat that the most frequent mode of compromise was SSH, just straight-out stolen credentials. Since the era that Greg refers to, almost all HPC centers and all the national labs and so forth have some form of two-factor authentication, sometimes also a gateway or other intermediate layer that can throttle such attacks. I just want to point out that it's not all just bad people doing stuff to us. Letting your SSH keys be stolen, or using them without a passphrase in the first place, those are things under your control. People routinely publish their secret keys in their GitHub repositories, so self-created modes of attack are still a huge part of the problem.
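
Two of the self-inflicted problems Alan lists, passphrase-less keys and keys that wander, are cheap to fix. A small sketch (file names illustrative):

```
# Generate a modern key with a passphrase; -a raises the KDF work factor on the private key
ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519_hpc

# Add a passphrase to an existing key without regenerating it
ssh-keygen -p -f ~/.ssh/id_ed25519_hpc

# Keep private keys readable only by you on shared systems, and out of repositories entirely
chmod 700 ~/.ssh && chmod 600 ~/.ssh/id_ed25519_hpc
```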

Zane Hamilton:

Thank you, Alan. Gary.

Products To Protect From Hackers [25:10]

Gary Jung:

I just wanted to add a couple of things for people. The intrusion detection system that we have was originally called Bro; now people can find it online as Zeek. And there's a commercial company called Corelight, which has it all packaged up in a box so you can buy it. It's used at a lot of other institutions now since it's easy to access, but you used to have to be a wizard to be able to use it. At Berkeley Lab we do have our own hacked version to do what we call a reactive firewall, as Greg mentioned. It's a good product now; you can just buy it off the shelf.

Zane Hamilton:

Thank you.

Corelight For HPC And Cyber Security [25:58]

Gary Jung:

One story about Corelight in an HPC context: a lot of institutions are connecting their clusters up to the Science DMZ, which is at least a hundred gigabit. Several years ago it was a problem trying to figure out how you were going to monitor that with an IDS; what is going to be fast enough to do that? It's still a difficult problem, because now people are doing 400 gigabit. As a story, I can just say I was part of the conversation to help think about how you would break that down. Back in 2005 at Supercomputing, I invited two people that I knew, but they didn't know each other.

That was David Skinner from NERSC and Craig Leres from our cybersecurity group. We were just sitting there talking; David does HPC and Craig does the cybersecurity. One of the things they had always done to make the IDS faster was to keep hacking on the kernel; they were using FreeBSD so they could make it as fast as they could. I said, have you thought about using a cluster to analyze the data? And they were like, that's an interesting idea. Some versions of Bro, now Zeek, were later able to do the IDS for the Science DMZ at a hundred gigabit, using a version of that idea to break down the traffic so they could do it. That was my story about being in the right place just to bring a couple of people together.

Zane Hamilton:

Thank you, Gary. All right, Jonathan, you were writing stuff down. I'm interested to hear if you had to write it down, how many stories are we talking about? Thank you Greg. It was good to see you.

Gregory Kurtzer:

Sorry, I do have to drop. I'm going to watch the recording of this to get the rest of the conversation. And Gary, you should also talk about the Cuckoo’s Egg.

Gary Jung:

Oh yeah. Okay. That makes me sound old.

Gregory Kurtzer:

Bye everybody.

Insufficient Security For HPC Users [28:31]

Jonathon Anderson:

My first story, the first thing that comes to mind, is no surprise to anyone here, because we were all there at the same time. It was the moment when the then-current security practices among HPC users were discovered to be insufficient against a certain set of people interested in gaining access to those systems. There was a common habit among HPC research users of just having an SSH key pair and copying the private key up to the login node; then it was very convenient to SSH around between different systems. But there was a thing people didn't really think about, because they weren't computing professionals, and no one was really looking at it this way, at least. I was a junior admin at the time when this was all happening.

So this is foggy and from a younger person's perspective, but the problem was that someone would get onto one system, compromise one account, and now not only do you have credentials that are probably shared with other sites, but because of the way SSH records the other systems you've connected to in the known_hosts file, you also have a list of all the sites where that credential might be valid. If there's a local vulnerability, you can use your entry onto that one system to escalate and see what's in other users' home directories, their SSH known_hosts files, and their private keys. It just spread really easily through HPC centers. That was my experience anyway, and it was when the HPC community at large, at least the open HPC community in the national lab space and universities and things like that, really started deploying two-factor.
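
Two small mitigations for exactly the pattern Jonathon describes are now built into OpenSSH: hashing known_hosts so a compromised account doesn't hand over a ready-made target list, and ProxyJump so private keys never need to be copied to login nodes in the first place. A sketch (host names illustrative):

```
# Hash existing known_hosts entries, and hash new ones by default
ssh-keygen -H -f ~/.ssh/known_hosts

# ~/.ssh/config
Host *
    HashKnownHosts yes

# Reach an inner cluster node through the login node without a key ever leaving your laptop
Host compute01
    ProxyJump login.hpc.example.edu
```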

Like Greg said, that locked all of that down. What I think about when I reminisce on that story is just how important communication and community are in all of these things, because it's always a battle of keeping the state of the practice in monitoring up with the state of the practice in intrusion, and knowing what kinds of attacks are coming. I think back to the very first thing I observed, at the university where I was studying, that I could consider a hack. In the computer science department there was a departmental Solaris server that was just there, and we could get accounts on it. None of us had a lot of Unix experience, so it was a place to just do Unixy things: programming, homework, that kind of thing.

One of my friends discovered that the NFS server on that system was just open in the clear with no root squash. He mounted the NFS share from his desktop in his dorm and used that access to change our professor's command prompt to something that emulated the appearance of a DOS prompt. We all thought that was funny, a goof on the professor. I've got a couple of other stories, but that's really what I'm thinking about: how it's always a community effort, both in protection, and also because you're fighting against a community of people who are discovering things. Once that information is out there, you have to react to it. You have to guard against it. (The export-option detail is sketched below.)
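
The hole in that story is an NFS export without root squashing. On the server, the difference is one option in /etc/exports; with root_squash (the default on modern Linux NFS servers), a remote root user is mapped to an unprivileged account instead of being trusted as root. A sketch with illustrative paths and networks:

```
# /etc/exports -- illustrative entries
/export/home    10.0.0.0/24(rw,root_squash)     # remote root is mapped to an unprivileged user (default)
/export/scratch 10.0.0.0/24(rw,no_root_squash)  # remote root is trusted as root: avoid this
```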

Zane Hamilton:

Absolutely. I spent a lot of time, especially in the earlier part of my career, in dot-coms. We saw a lot of attempts on those large dot-coms: people trying to get in, people trying SQL injection attacks, all kinds of stuff. DDoS was very popular as well. We had a site that we were running, a very specific tool for people who worked for the company to come and bid, but we're talking tens of thousands of people who would hit the system at the same time. In the middle of a bid cycle, we started getting weird errors that showed up as unauthorized authentication attempts. Very odd; it was not at the right layer. The logs were filling up very fast, and then all of our HTTP sessions were running out; requests would just hit the server and hang and never go anywhere else.

We assumed we were being DDoSed, and we started setting up sorry servers, trying to redirect traffic, trying to block IP addresses. Then users started calling in complaining that they were being sent somewhere else. That made us wonder, and it took us several hours, and I'm sure Joshua, if you're listening to this, you remember those several hours, to figure out what was going on. We started looking into the app server, got the vendor online, started digging in, and all it was was a certificate inside the app server that had expired. It was self-signed by the vendor and they had forgotten about it, and it was throwing a very bizarre error. We had effectively DDoSed ourselves with our own user community while thinking we were being hacked. We were looking in all the wrong places, we had a lot of people looking at things, and we were having a very interesting time, concerned that we were being hacked. It wasn't a hack; it was just a vendor forgetting to update a certificate.

Alan Sill:

Yeah, we called that a self denial of service. 

Zane Hamilton:

It was awesome. It was awesome. You have to have stories, Alan.

HPC Centers Routine Hackings [33:44]

Alan Sill:

Yeah. Well, Greg alluded to this era, and I think Jonathan mentioned it also, when HPC centers were essentially routinely getting hacked. Primarily, as I mentioned earlier, it was through SSH: users losing track of their SSH key pairs, leaving them readable so that other people could get them, and just not following basic sanitation like putting passphrases on them. This was why, in the early days of the grid, we created probably the most universally disliked login mechanism around. It's now routine to encounter these things, but we introduced individual personal X.509 certificates that couldn't be used directly. To use the grid, you had to have one of these, and you had to get it by showing a government-issued photo ID to someone at your institution who would then certify you, and then you'd get this certificate.

It's just like the certificate that turns on the little lock icon when you connect to your bank. Again, these are things people are familiar with now, but this was 20 years ago and nobody wanted them. They just wanted to use their username and password, right? So we said no, and by the way, you can't use this certificate directly. It can only be used by formal request, which requires a passphrase. We had control over that level of the software: you couldn't get a grid proxy without entering a passphrase. And you got the proxy from a membership server; you had to be a member of some organization, some astronomy collaboration or high-energy physics or biology. You got this thing called a grid proxy; it was an extended-attribute X.509 certificate.

By the way, this is now the most popular framework for cloud-based jobs. It's called SPIFFE, it has an implementation layer called SPIRE, and it's a project of the Cloud Native Computing Foundation. It runs a hell of a lot of infrastructure. The idea then and now is the same: you get a proxy, which can act on your behalf, but it's limited in various ways. In the grid it was limited to running on servers with which your sponsoring organization had an agreement, and only after the server looked up your proxy in a local database to see that you were actually allowed to hold it. And it's a time-limited thing. These days I could probably hack this in about half an hour, because those proxies, once issued, would have lifetimes of up to a day. Now you get things that are just evanescent.
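
For readers who never used the grid tooling Alan describes, a sketch of the classic Globus GSI commands for requesting and inspecting a time-limited proxy (assuming the Globus client tools are installed):

```
grid-proxy-init -valid 12:00   # prompts for your certificate passphrase, issues a 12-hour proxy
grid-proxy-info                # shows the proxy subject, strength, and remaining lifetime
grid-proxy-destroy             # throw the proxy away when you're done
```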

They're only useful for a little while. And that's just one of many types of security that now go into clouds. The problem we had with this was user adoption: people really didn't like anything that got in the way of that username and password. Only now, 20 years later, are we finally starting to see the major players introduce tools that people will tolerate, which can in principle allow us to get rid of the username-and-password paradigm.

How To Guard Against Hacking In HPC [37:34]

I do have a couple of resources I wanted to share. I just put in a link to trustedci.org; it's now actually part of a family of funded NSF projects aimed at people who are providers of cyberinfrastructure. And let's see if I can link to another resource. A couple of years ago there was a National Institute of Standards and Technology working group on high performance computing security, I think with some funding from the CHIPS and Science Act. It's been restarted, and they have an initial public draft at a link that I just put in the chat, which is open for public comment until April 7th. It's a very thin document right now, and it really could benefit from a substantial amount of input, but it looks to be, and I'm not involved, I've just noticed it, a serious attempt to start building a reference manual for HPC-specific security. We know that this is a special domain, because the usual symptom of having been hacked, your CPU running all the time, well, that's just success in HPC, right? So how do we guard against that?

Zane Hamilton:

Very true. Thank you, Alan. I have questions that we'll come back to for Gary and Alan for sure, probably Jonathan too. But I know, Gary, that Greg asked you to tell the story.

Early Days Of System Monitoring Fiction From Facts [39:07]

Gary Jung:

My story dates way back, but it's probably well known by, well, at least older people. The Cuckoo's Egg, for people who have read that book, is a book about a hacker at Berkeley Lab, and it happened in the mid-eighties. I was actually there. The people mentioned in the book: David Cleveland, who was a systems administrator, I worked with him, Paul Murray, Lloyd Bena, I knew these people, and Cliff Stoll, who wrote the book. It was a very interesting time. It was pre-internet, and the guy came in through a dialup service called Tymnet. It's just a story; there's nothing to learn out of this other than it was an interesting time in the way we had to monitor the lines. We'd use these DEC LA120 teletypewriters, which printed on fanfold paper.

We had boxes of paper, and this is how we did early monitoring: it was all on teletypes and fanfold paper and boxes of paper. That's how we monitored the system. Anyway, it was a very early time, because a lot of things like security measures on systems were really not well known yet. For example, password crackers: that was when we were first introduced to password crackers. It wasn't a simple thing that you just grab and download and run. I mean, guys would show up with reels of nine-track tape with all the passwords on them, and it was a whole thing to set up so that you could run a password cracker back then.

So it was just an interesting time. We originally detected the problem because we ran a CDC 7600 and a 6600 for the computational workload and a set of PDP-11/70s for the text processing, but we also ran our own homegrown accounting system, and that's where the discrepancy showed up. The hacker did not know about our homegrown accounting system, and we noticed that there were some differences between the Unix system's accounting and what we saw. My other interesting story: I worked with a couple of people in that book, and then there was another book that came out later called Takedown. In that one the person tracking the hacker was Tsutomu Shimomura, but one of the systems administrators who ran the ISP over at the Well in Sausalito, Pei Chen, I worked with her too for several years. So my story is just that I happened to work with people who were in these books.

Zane Hamilton:

Very nice. Thank you, Gary. All right, Jonathan. I know since you were writing before, you have to have more than what you did. Come on. Next story.

Password Crackers [42:23]

Jonathon Anderson:

So, I mean, Gary was just talking about password crackers, and my next story involves a little bit of that. We were an HPC shop, but it was not unusual for us to have prototype and proof-of-concept systems lying around where we were trying to stand up a service, and then maybe we got busy and left it there. We had one such server; I don't even remember what it was supposed to be doing. It was probably a perfSONAR box or something like that; perfSONAR is a network monitoring and performance analysis tool. It had been sitting out there for a while, and then we got a notification from our IT department, upstream from us, that the port that server was connected to had been shut down because it was sending spam.

We, of course, knew that we were not sending spam, so something bad had happened. We took that system offline; we said, yeah, that's fine, just unplug it from the network, we'll log into it at the console and investigate what happened, because we wanted to understand where we went wrong. We looked at the log, and it was the worst-case scenario, where you see hundreds or thousands of failed attempts and then a success against the root account, which is not a good thing. Clearly someone had the credentials for it; it wasn't just getting in through a vulnerability. I took the shadow file off that system. I said, we are going to figure out what credential that account was using, because if it's shared with other systems, it's compromised there too.

I didn't know much about password cracking at the time. I found this utility that I think most people will have heard of, but I hadn't used before, called Jack the Ripper. I just set it on that hash, and it took all of a second to come back and tell us that the password for that system was "install," which was the single-word password that our default provisioning system was setting, intended to be changed to something else with configuration management. But since this was a prototype system that had never reached the stage of configuration management, it had been left with this terrible one-word password for a long time. And this just spoke to the saying I hear people repeat: nothing is more permanent than a temporary solution. That was the situation here. Something was being used but was never considered production, so the server just hung out, working, and no one did anything with it. We needed to have good password practices earlier in our process than we thought we did. Even our prototypes needed to have a good security posture.
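
For context, the check Jonathon describes takes a couple of commands with John the Ripper (the wordlist path shown is a common default and will vary by distribution):

```
# Combine passwd and shadow into the format John expects, then try a wordlist
unshadow /etc/passwd /etc/shadow > hashes.txt
john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt
john --show hashes.txt    # prints any cracked accounts; a password like "install" falls instantly
```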

Zane Hamilton:

So it's interesting you say Jack the Ripper. I ran across that one in school, in college, mainly when you start looking at how people attach their Windows systems to the dorm or any other school network. Back then, it seemed like by default everything was a Windows share; you could get on everybody's system without any passwords. It was very easy to pull down the password file from Windows, run Jack the Ripper, and have everybody's credentials and do everything they do, which led to other things: you could put stuff in their shared folders or their startup programs, and whenever they rebooted you suddenly had full control of everything. I'm assuming a lot of that has gone away, but is that stuff you guys still see people doing? Very simple, dumb things that get plugged into networks. Especially at a university, a lot of people coming in just know their machine at home. They don't know a lot else, especially around security, and that just leaves stuff wide open for everybody on the network to look at. Do you still see that?

Security On Shared Networks [46:08]

Alan Sill:

Well, yeah, I mean, I think, again, it goes back to user behavior. I was at a meeting yesterday where Glenn Lockwood was giving a talk about storage. He is now at Microsoft after many years in and out of startups and the national lab systems. He says the most common vector they see on Azure these days is that people just set the permissions wide open. Now, he also went out of his way to point out the degree of containment that cloud vendors, including Microsoft, put around their systems. So hopefully having chmod 777 doesn't have the same effect if you're in some little cloud universe that nobody knows about, security by obscurity.

But yeah, again, it's these user behaviors. I mentioned publishing keys in GitHub repositories; now of course there's Dependabot and various automated ways of perusing your CI/CD infrastructure to make sure that you're not doing these things. Someone earlier in the chat mentioned sharing of credentials. That of course still goes on. You give people allocations and quotas, and PhD supervisors give their passwords to their students, and how do you stop that kind of thing? Of course we do stop it when we find it, but people just need to know. I think the thrust of your question is, are there easy steps people should take right away, sort of immediately, before the end of this webinar? I'm just going to echo all the normal advice: reusing passwords is bad, don't do it; put passphrases on your SSH keys. Just brush your teeth. What am I supposed to say?

Jonathon Anderson:

Use a password manager.

Alan Sill:

Oh, that's an interesting topic, since two, well, three of the major password managers have been breached in the past year. LastPass had yet another breach announced just last week, from an employee who had all the secrets on a home machine, which got hacked. So yeah, I'm not sure I can recommend password managers.

Jonathon Anderson:

Oh, that's why I self-host my password managers.

Zane Hamilton:

I was going to say, yeah, online password managers make me very nervous as well. It's just handing somebody else the keys. The one that I prefer stays on my machine, on my NAS at home, encrypted multiple times.

Alan Sill:

So, things I tell my staff: each device they use must have a separate private key that never leaves that device. If you need to access the systems from multiple devices, then enter multiple keys in our centralized repository. Then if you lose your phone or iPad or Nintendo Switch or wherever else you've stored the damn thing, we can drop just that one key and not lock you out completely and take away your birthday. Well, we still might take away your birthday. I think there are simple things that people can and should do, and most of them reveal themselves with just a little bit of thought.

Use Of A Password Manager [50:08]

Zane Hamilton:

Yeah. One of the things we always liked to do, coming from a consultative background, was, as soon as we installed a system, to change the configuration so that you could only attempt to log in three times before it locked you out for an hour. We were pretty picky: not five minutes, but a full hour. You were out for an hour, that's your problem, but it made sure you couldn't keep hammering the login. We also made sure there was a password criterion of 12-plus characters at the time; now 18 is the recommended number. I can't remember 18 consecutive characters, so I have to use a password manager, or I would lose them all the time. Simple stuff like that.
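
OpenSSH itself doesn't implement lockouts, so a policy like the one Zane describes is usually enforced alongside sshd. One common way is fail2ban; a minimal sketch of a jail matching the three-tries, one-hour policy:

```
# /etc/fail2ban/jail.local -- illustrative
[sshd]
enabled  = true
maxretry = 3        # failed attempts before a ban
findtime = 600      # counted within a 10-minute window
bantime  = 3600     # banned for one hour
```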

Alan Sill:

Yeah. Gary, I wanted to see if you've seen this problem. You're at a university with multiple campuses, right? We have a health sciences center just across the freeway, separated from us by what an architect on our faculty once referred to, disgustedly, as adequate parking. I can't quite capture the level of disdain he had for adequate parking and how it had accidentally separated the campuses. We have had researchers trying to use our HPC system, and it took me three months to get to the point where someone from our health sciences center could get through the various separate networks that divide the institutions. An external person I could have given an account with an external user ID and had logged onto our system within a day, but the two Active Directory systems were fighting with each other, they're separate domains, and we couldn't make the connections.

Gary Jung:

Boy, we have not had that. At least, I have not had to deal with that. We did put together a secure enclave for UC Berkeley so we could handle sensitive data, PII and even HIPAA data, and the people doing health sciences could utilize that platform, but they are all using the same IDM system. So, it sounds interesting; I don't know what to say.

Alan Sill:

Yeah, it was. I'm trying to avoid using Dilbert phrases these days, but the whole Department of Information Prevention came up.

Gary Jung:

Yeah, yeah. 

Zane Hamilton:

Thank you, Gary. Jonathan, you have "Drop Table Participants" in your title.

Credential Discovery [53:05]

Jonathon Anderson:

I think Alan mentioned SQL injections, and I was reminded of the classic xkcd Bobby Drop Tables frame and felt like I needed to pay homage to it. I do have one more credential-discovery thing; that's a fun story. I don't know if you guys have ever encountered an Excel spreadsheet that has a password on it. It does do some little bit of encryption inside of the Excel format, but then it clearly has the credential, or a hash of it, in the file. I had a spreadsheet from a vendor that I really wanted to edit, but it was protected. So I started digging into how the encryption works and how the passphrase is stored, or how the password is checked. I think it was something like an eight-bit value, and I decided to just try that entire space. I didn't need to know what the actual password was; I just needed something that produces the same hash, that eight-bit value. I wrote up a little bit of Python, and that also took very little time to discover. I don't know what the password was on that spreadsheet, but I sure did find one that was valid for it.
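
The trick works because the legacy Office sheet-protection scheme stores only a tiny verifier of the password, so any colliding string unlocks the sheet. A sketch in Python using the widely documented legacy verifier algorithm (modern workbook encryption is a different, much stronger scheme; the example password is hypothetical):

```python
from itertools import product
import string

def legacy_sheet_hash(password: str) -> int:
    """Legacy Office sheet-protection verifier: a small 16-bit value, as widely documented."""
    verifier = 0x0000
    for byte in reversed(password.encode("ascii")):
        high_bit = (verifier >> 14) & 0x01
        verifier = ((verifier << 1) & 0x7FFF) | high_bit   # 15-bit rotate left
        verifier ^= byte
    return verifier ^ len(password) ^ 0xCE4B

def find_collision(target: int, alphabet: str = string.ascii_lowercase, max_len: int = 4):
    """Brute-force a short string with the same verifier; the space is tiny, so this is fast."""
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            candidate = "".join(combo)
            if legacy_sheet_hash(candidate) == target:
                return candidate
    return None

if __name__ == "__main__":
    target = legacy_sheet_hash("S3cretVendorPassword!")   # hypothetical original password
    print(find_collision(target))                          # prints some short equivalent password
```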

Zane Hamilton:

I think they have updated it, but the encryption is still not very good. It's better than eight-bit, but yes, it used to be terribly weak. Very easy to get into those.

Well, guys, I appreciate your time, and if you have anything else you'd like to talk about or any other stories, we'd love to hear them. And for the people watching, if you have any questions or comments, we'd love to hear your stories as well.

Jonathon Anderson:

I should probably correct at least one thing that I said wrong. I've been told by a friend here on the back channel that it was in fact John the Ripper, not Jack the Ripper. That is the password cracking tool.

Hacking Tools [55:02]

Zane Hamilton:

For some reason I was thinking LoJack, but that was a completely different hacking tool that I think went away a long time ago. There were several really fun ones. NetBus was fantastic for playing pranks on people at work; I don't know if you guys ever played with NetBus. It was the late '90s, early 2000s. You could open and close people's CD-ROM drives, flip their screen upside down, lock their keyboard, lock their mouse, play music, and turn the volume all the way up, and they couldn't do anything about it except unplug the machine. It was a great tool, but it was one of those very simple ones: if you were on the network, and Windows by default shared everything, you could easily put the executable in their startup script, walk by and hit the power button, and you'd have full control of their machine an hour later. It was fantastic. So, very good. Alan?

Alan Sill:

Well, hopefully this is helpful to someone. I do encourage people to go back and find that link in the chat to the NIST document and contribute. It's only open for public comment for a few weeks, and it's better to have your input in ahead of time rather than just viewing the whole thing with dismay when it comes down the pipe.

Zane Hamilton:

Absolutely. You did send that over, right? Alan, did I see it pop up?

Alan Sill:

I put it in the regular chat. I'll put it in the private chat and you can include it in the notes. 

Zane Hamilton:

Excellent. Thank you very much. We do have a comment from Dave: does using modem AT commands to call in votes for which Monday Night Football games should be televised count as hacking? Maybe, maybe not.

Alan Sill:

We didn't talk a lot about privacy, but, oh, I have noticed things. Some of my earliest memories of DECnet as a graduate student were of fellow graduate students complaining that their advisors were using it to check that they were working late at night.

Zane Hamilton:

It's valid.

Alan Sill:

I've seen a change in user behaviors. There has been a change, somehow, in the general culture around security. Like most clusters, we had open file permissions by default, except for sensitive files like the SSH ones. This was just part of the culture of running an HPC cluster for years. And about, I don't know, four or five years ago, we started noticing people getting their new accounts and sending in a security warning that somebody else could see their area. It took us a while to get used to the fact that people have been educated to think that their little section of cyberspace is for their eyes only. So we actually had to change our default account setups. And then of course now we get all the complaints that "I can't share my files with my advisor" and so forth. But that's a different story.
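
For cluster operators, the change Alan describes usually comes down to the account-creation defaults. A sketch of the common knobs on a Linux system (paths and values as typically shipped; sites differ, and group-shared project space needs its own policy):

```
# /etc/login.defs -- defaults applied when accounts and home directories are created
UMASK     077        # new files are private to the owner by default
HOME_MODE 0700       # newly created home directories are not world-readable

# Tighten an existing home directory
chmod 700 /home/alice
```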

Creation Of Formal Security Groups [57:57]

Zane Hamilton:

I have a question for you on that, Alan and Gary. From your perspective and in your environments, when did you actually start seeing formal security groups created? Because from my perspective, in the early 2000s you might run across a company that had one, but most of them did not. And a lot of times, if you had one, it was the physical security team for the building that was also being tasked with the security of IT, which was not what they were good at, so they had to develop that over time. It was left to the admins to figure that out. But when did you guys start seeing actual security groups created?

Alan Sill:

I mentioned the grid; one of my first jobs when I started the project here, I was still in physics at the time, was to put on some nice clothes, make a PowerPoint presentation, and just try to bury them as deep in X.509 jargon as I could. And they sat there with their eyes wide open, I'm not exaggerating, and said, this all sounds good to us. That certainly wouldn't fly these days, and as I mentioned, grid security then would not pass muster now. But there was already a security group 20 years ago for me to go talk to. I think it was usually in the telecommunications or networking group; they just sort of inherited this task by default.

Zane Hamilton:

Interesting. Gary, what did you see or what do you remember?

Gary Jung:

You know, we used to have something we called a computer protection program manager, and security was mostly handled by the systems administrators. We had a network advisory group through the '90s. At some point there was a position called the CPPM, as I mentioned, for computer protection program manager. That was a part-time position until around the early 2000s. I think then, with the expansion of the internet, it became a full-time job and eventually a whole security group. It was just interesting to see the progression; we just had to invest more and more in it so that people took it really seriously.

Zane Hamilton:

I'm always fascinated when I go into companies that have large security groups. There's one in particular, a very large retailer, where you walk in and it's like walking into a bunker. First of all, it is underground. And when you walk in the room, they announce to everyone that they're no longer green, they're red, and they change the color of the lights in the room. The boards go black. If you're not a part of that team, you don't get to see anything, so when you get to go in, it's a special event. It's very different from what I remember early on, and it's very cool. Well, if we don't have anything else, guys, I don't see anything else new in the comments. I really appreciate your time. Thanks for coming and sharing your stories; really appreciate it. Looking forward to next week, so like and subscribe, and we will see you guys later. Thanks, Gary. Thanks, Alan. Thanks, Jonathan.