Tune in to this episode of Ask A CISO to learn:
- What is threat hunting?
- How to tell the difference between legitimate vs illegitimate access?
- Differences between Digital Forensics, Threat Hunting, and Incident Response
- The Threat Hunting and Incident Response process
- Is Windows still the most targeted system by threat actors?
- Is there really a shortage of cybersecurity talent?
- Should your organization have an organic security team or employ managed services for Threat Hunting and DFIR?
About The Guest: Harlan Carvey
Harlan Carvey is the Senior Incident Responder in R&D at Huntress, a company that provides endpoint protection, detection, and response capabilities to Small and Midsized Businesses (SMBs).
Prior to this, Harlan was the Cyber Defense Forensics and Triage Global Head at Ernst & Young in the United States.
Harlan is also an accomplished public speaker and prolific published author with nine books under his belt, including the first book of its kind regarding analysis of the Windows registry.
About The Host: Paul Hadjy
Paul Hadjy is co-founder and CEO of Horangi Cyber Security.
Paul leads a team of cybersecurity specialists who create software to solve challenging cybersecurity problems. Horangi brings world-class solutions to provide clients in the Asian market with the right, actionable data to make critical cybersecurity decisions.
Prior to Horangi, Paul worked at Palantir Technologies, where he was instrumental in expanding Palantir’s footprint in the Asia Pacific.
He worked across Singapore, Korea, and New Zealand to build Palantir's business in both the commercial and government space and grow its regional teams.
He has over a decade of experience and expertise in Anti-Money Laundering, Insider Threats, Cyber Security, Government, and Commercial Banking.
Hello, and welcome to another episode of the Ask A CISO podcast. My name is Jeremy Snyder. I'm the founder and CEO of FireTail. I'll be hosting today's episode.
We are delighted to be joined today by Harlan Carvey. Harlan is the Senior Incident Responder in R&D at Huntress, a company that provides endpoint protection, detection, and response capabilities to small and midsized businesses.
Prior to this, Harlan was the Cyber Defense Forensics and Triage Global Head at Ernst & Young in the United States.
Harlan is also an accomplished public speaker and prolific published author with nine books under his belt, including the first book of its kind regarding analysis of the Windows registry. Boy, I certainly remember dealing with the Windows registry in my practitioner days.
So Harlan, thank you so much for taking the time to join us today.
Thanks for having me, Jeremy.
It's a real pleasure.
I wanted to focus on a couple things from your background that really jumped out to me as somebody who is a former practitioner and is now more on kind of the software vendor side.
And I guess, first, to kind of frame the conversation for a lot of our listeners: what does threat hunting actually mean?
Well, there's a couple of different ways of looking at it. You know, there's a lot of folks in the industry who are sticklers for definitions, so they're looking for, like, an academic definition of what threat hunting is. As a practitioner, I've used threat hunting basically when I've gone on site.
I spent four years at SecureWorks. 100% of the customers called for assistance, and I was sent to assist. None of them had EDR capabilities.
So the first thing we had to do was really get a scope of what the threat actor had done. So we would collect some indicators, some basic information and then start pushing out the EDR technology that we were using at the time, which was called Red Cloak.
I don't think SecureWorks uses that any longer. It has been a while.
But from that perspective, threat hunting was going out looking for those indicators or other indicators of malicious activity. That doesn't always mean that the malicious activity that we're going to find, or that we do find is directly associated with that particular incident.
However, looking for known indicators or other indicators of potential malicious activity is generally the way that I've approached some of the threat hunting. And that's in infrastructures that you're unfamiliar with. As an incident responder for the past two decades, I've never gone into an infrastructure that I knew ahead of time. It was new every single time.
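The indicator sweep Harlan describes might look something like this in miniature. All of the indicator values and the triage record below are hypothetical examples standing in for real collection output; the point is simply matching known indicators against data gathered from each endpoint.

```python
# Rough sketch of an indicator sweep across collected triage data.
# Indicator values and the host record are hypothetical examples.
KNOWN_INDICATORS = {
    "filenames": {"psexec.exe", "rclone.exe"},
    "md5_hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
}

def sweep_host(host_record, indicators):
    """Return (indicator_type, value) pairs found in one host's triage record."""
    hits = []
    for f in host_record.get("files", []):
        if f["name"].lower() in indicators["filenames"]:
            hits.append(("filename", f["name"]))
        if f.get("md5", "").lower() in indicators["md5_hashes"]:
            hits.append(("md5", f["md5"]))
    return hits

# Hypothetical triage record for one endpoint.
record = {
    "host": "WS-042",
    "files": [
        {"name": "notepad.exe", "md5": "0" * 32},
        {"name": "PsExec.exe", "md5": "d41d8cd98f00b204e9800998ecf8427e"},
    ],
}
print(sweep_host(record, KNOWN_INDICATORS))
```

In practice the sweep runs against whatever the EDR agent or triage script collected, and a hit only starts the conversation: as Harlan notes, a match doesn't always mean the activity belongs to the incident at hand.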
So I didn't have any insight into configuration or culture until I showed up, so...
And, and so
Sorry, go ahead.
Uh, it's okay.
No, I was just gonna say that sounds kind of like an initial reconnaissance exercise to build out a map of the network, maybe an inventory of the assets that are present, combined with some digital forensics activities on some of those assets.
Is that kind of a fair classification of the starting point?
Basically, it was less about developing an asset inventory, because, you know, for instance, we went on-site once to an infrastructure that had 150,000 endpoints across the globe, quite literally. You know, United States-based, South America, Europe, and I think even some in Asia at the time, but a total of a little over 150,000 endpoints.
So developing an asset inventory and network map wasn't something we were really gonna be able to do. The basic approach we took at the time was to develop a scope under those circumstances. And that basically meant trying to determine what systems the threat actor had touched, what systems they landed on.
I think the ultimate total we had was eight systems out of 150,000, with two that we actually identified as requiring a much closer look. But keep in mind that the Red Cloak technology we were using was different.
I think there's ... All EDR technologies have strengths and weaknesses depending on, you know, your perspective, depending on if you're actually using the full functionality or not.
Red Cloak was a little bit different in the sense that it would begin monitoring from the point of installation going forward, but it would also do a historical check.
We had a good number of rules written that would go out and collect historical information from the systems, you know, so we could collect information from the registry, from files, from Windows event logs. It was primarily Windows-based technology. So we could actually do a historical look, which is not something you can necessarily do with most EDR tools today.
And ... do you find that that ratio of, you know, cuz that's eight machines out of 300,000 that had some kind of, I don't know if IOC is the right phrase, but some kind of signature or something that you identified and then, two, that you actually felt like you needed to really go deeper on.
That seems like a needle in a haystack.
Is that kind of typical for a threat hunting activity?
Well, the total number of systems was about half of what you quoted. It was 150,000.
Oh, okay. Sorry. I misunderstood.
Yeah, that's fine, but that's not really the issue.
The fact of the matter is, up to that point, we had generally seen the threat actor actually land on 1% or less of the systems. You know, for instance, the number of nexus systems is generally small. Very often there's the initial system that they gain access to.
And depending on the level of access and what type of access, whether it's just command line or if it's GUI-based access, they may not need to particularly go further than that.
Or they may shift to another system.
A lot of times what people refer to as lateral movement is simply getting a directory listing from other systems or doing searches or things like that.
So something like mounting a share off of a file server and being able to access files or being able to read directories off of that.
So not really moving to the system to the point where commands are executing on that system's CPU, but just getting a directory listing or searching files on that system, like you said, through a share of some kind.
Well, that's gotta make it a little bit tougher from a forensic standpoint to kind of attribute actions to, let's say, the primary user of that machine versus the threat actor who may have remote access of some kind.
How can you tell, let's say, which file access was done legitimately by Jeremy, whose system is compromised, versus by some threat actor who's resident on the system? I assume you've gotta go layers deeper and look at, I don't know, some heuristic analysis of the access. How do you think about that?
Oh, it's generally not that difficult.
So a lot of times it's very easy to tell.
I mean, look at timing.
You know, if Jeremy comes in like a normal, you know, say corporate user, 8:30 to 9, logs in, checks his email, maybe does a little bit of web browsing.
I know some folks outside the industry who are on sort of the marketing side of things. I have a friend who's a realtor, and she knows that the best time to post stuff is generally Tuesday and Thursday mornings around 10 AM, because that's when most people in corporate environments are off browsing the web looking for stuff.
So you use this information and the general understanding that you have of timing. So Jeremy, you know, comes in, logs in in the morning, 8:30, 9 o'clock, whatever the case may be. Maybe has meetings, breaks for lunch. Generally goes home.
And of course, things are very different with respect to remote work now and the work from home that we've been doing for a little over two and a half years. But to be quite honest, if you really pay close attention, a lot of people, because of the work-life balance, are pretty much following the same routines, you know?
So when you see a system that's got activity at like 10:30 at night, midnight, 2:00 AM, you know?
Or look at the files they're accessing. How often does, does Jeremy go in and run commands from the command line? Right?
A lot of times it's really, really easy to tell the difference between the legitimate use of Jeremy's account and the illegitimate use of Jeremy's account.
It's really not that hard.
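The timing heuristic Harlan describes can be sketched very simply. The logon events and the 08:00-19:00 working window below are hypothetical assumptions, not output from any real EDR product; a real baseline would be learned per user rather than hard-coded.

```python
from datetime import datetime

# Sketch: flag logon events that fall outside a user's normal working
# window. Events and the 08:00-19:00 window are hypothetical assumptions.
WORK_START_HOUR, WORK_END_HOUR = 8, 19

def flag_off_hours(events):
    """Return the subset of logon events outside the working window."""
    flagged = []
    for event in events:
        hour = datetime.fromisoformat(event["time"]).hour
        if not (WORK_START_HOUR <= hour < WORK_END_HOUR):
            flagged.append(event)
    return flagged

events = [
    {"user": "jeremy", "time": "2022-06-14T08:47:00"},  # normal morning logon
    {"user": "jeremy", "time": "2022-06-15T02:13:00"},  # 2 AM, worth a look
]
print(flag_off_hours(events))
```

Timing is only one signal; as the conversation notes, command-line usage and the files being accessed get combined with it before anything is attributed to a threat actor.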
Yeah, that makes sense.
And so, from the perspective of trying to help people who may be sticklers for those definitions understand: would you say there's an overlap between, you know, digital forensics, incident response, and threat hunting? All of that kind of falls into, maybe broadly, the same bucket with some minor distinctions between them?
Well, not really so much.
I fully agree with you that over the years there's been a merging of basic functionalities.
You know, when I started in the industry, it was: go on site and image everything. Completely, fully image everything. And then from there, you know, basically back up a moving truck. Oh, you've got 3,000 servers? We're gonna image 3,000 servers, you know?
And then, interestingly enough, we moved into this interesting idea of triaging systems. You know, I can remember being in a data center and having the administrator at the DameWare console, just having them open and close the CD-ROM tray of the server so I could find it. You know, you've got all the doors on the racks, you know,
Yeah. Rows of blinking lights, but you don't know... and, but no labels,
Right. No labels, and you're just looking for the systems, you know?
And then from there, you know, you could use a CD or a USB device and collect information from the systems. So instead of imaging the entire system, we're collecting triage information. Then we move into this idea of an enterprise-wide approach where we're pushing out EDR technology, whether it's Red Cloak, Carbon Black, or whoever you've partnered with.
So you've got something you're pushing out, and then you're able to scope the incident and reduce the overall number of systems you actually have to interact with. And even from there, using some sort of technology, or I guess even a batch file if you need to, you do some sort of triage collection: gather information back that you're gonna later take and do that quote-unquote forensic analysis on as part of your incident response.
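A minimal triage-collection script of the kind mentioned above might look like this. The artifact paths are typical Windows locations chosen purely for illustration; note that live registry hives and some event logs are locked on a running system and normally need raw-read tooling rather than a plain copy.

```python
import os
import shutil

# Sketch of a minimal triage-collection script -- the kind of thing a batch
# file or small agent might do. Artifact paths are illustrative only.
ARTIFACTS = [
    r"C:\Windows\System32\winevt\Logs\Security.evtx",
    r"C:\Windows\Prefetch",
]

def collect(artifact_paths, dest):
    """Copy each artifact that exists into dest; report what was collected."""
    os.makedirs(dest, exist_ok=True)
    collected, missing = [], []
    for path in artifact_paths:
        if os.path.isfile(path):
            shutil.copy2(path, dest)          # preserves timestamps
            collected.append(path)
        elif os.path.isdir(path):
            target = os.path.join(dest, os.path.basename(path))
            shutil.copytree(path, target, dirs_exist_ok=True)
            collected.append(path)
        else:
            missing.append(path)
    return collected, missing
```

Real collectors (KAPE, velociraptor-style agents, or a vendor's EDR) do considerably more, but the shape is the same: enumerate known artifact locations, copy what's there, and ship it back for analysis.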
And then once you start moving into the enterprise-wide approach, that's when we start kind of stepping over the line into the SOC.
So, back in the days when we had completely disparate functionality, I mean, I can remember going to organizations that had a SOC or a NOC, and in one particular case, going through the NOC, I heard a system dialing out to AOL, if you remember that sound.
So that was not a secure SOC per se.
No, not so much.
But the point is that we've had very siloed, very distinct functionality and definitions that have since merged.
Nowadays, you have SOC analysts who go through their professional growth and start out as strictly SOC analysts, maybe get into some reverse engineering. But then, because they're dealing with SIEMs and with EDR telemetry, they start moving into sort of that digital forensics approach, because you have to have some of that background, you know, and then even moving into incident response.
So you get SOC analysts who progress through their growth track and eventually move into incident response, which is actually a less technical aspect when you really think about it.
Another thing that I've seen over the years is the separation of DF and IR. It used to be a Texas Ranger approach. You know, somebody would call and say, hey, I've got an incident, I need somebody to come on-site. Somebody like me would be sent, or you might have another organization that would send several people, depending on the size and the requirements.
But they would go on site and they would perform the DFIR contract or perform the engagement. So part of that incident response, aside from the purely technical aspect, is I've gotta communicate with the customer. I've gotta say, I need access to this. I need to get these things. I need to better understand this. And then I need to take what information I have, I need to perform my technical analysis and then I need to communicate this back to the customer in either a verbal or written form.
In many cases, it was both: initially verbal, and then some kind of written form. Well, those two functionalities, I've seen over at least the last five to eight years, have been separated. There are organizations out there, DFIR consulting firms, that have separated the digital forensic analysts from the incident response.
So you have these titles, like Incident Manager, Incident Commander, and they're not necessarily technical. Those individuals might spend more of their time interfacing between the forensic analysts and the customers in sort of a project manager kind of role.
Yeah. Got it.
So it's interesting, because I think that communications aspect is really important, 'cause I've gotta imagine that part of this whole process, when you are, let's say, talking to the customer who has been breached, obviously they're having a bad experience. They're going through something difficult. And while you probably don't want to blame them, we know that so much of, let's say, business email compromise is often initiated by phishing, and so there probably was some human error somewhere along the line.
Well, first of all, do you try to, let's say, conduct an interview process to figure out where this incident started from, or do you typically rely on the technical indicators to gather that?
And then second, as part of the recovery effort, I've gotta imagine that communicating back to them, hey, here are things to be aware of, here are maybe some best practices that you should incorporate into how you're operating, that's gotta be part of it, right?
Of course, of course.
So, the first part of your question: yeah, it's a combination. You want to collect as much information as you can prior to going on site. And then once you get there, very often what I would do is go back through the information again, because there's a time lag, especially if you're flying.
You know, yes, there have been times where I got a call at, say, 5:30 on a Friday and was on a plane and in a data center by 11:30 that night in a completely different state. There have been instances like that, but you have to understand that, from the point that they actually decide to reach out and contact somebody, like you said, they're having a really, really bad day.
You know, probably one of the worst days of their lives. And what you need to understand is that the information they have at the moment, and their understanding of it, is going to change, not just through the incorporation of additional information, but as they're processing through it.
So for instance, you and I are sitting here, you know, we're just talking. There's no stress; there's nobody getting ready to beat you up in your office, there's nobody ready to beat me up in my office. So there's not a lot of stress, and we're easily able to incorporate new information into our thought processes. Okay?
When somebody's going through an incident response, that's immensely difficult.
Yeah. Super stressful. Lots of pressure.
So that actually will happen over time. So taking the information you have and, once you get on site, going through the information again, or verifying it, is a great first step. Going back and forth between the technical information and the non-technical interviews is very important, but you also learn a lot of tricks along the way.
You know, upon college graduation I was commissioned in the Marine Corps as a communications officer, so I have a lot of experience providing technical services to, shall we say, non-technical individuals.
Yeah. If you wanna, if you wanna imagine people banging rocks together. Yeah. That's exactly it. Right?
So one of the things you learn through experience, and also through engaging with others, one of the tips I learned, is: when somebody comes up to me and is extremely emphatic about something, whatever they're emphatic about is probably not the case. Okay?
So an example: I arrived on site for an engagement. I arrived at the building, went into the room, and didn't even have a chance to introduce myself before an admin came up to me. He just walked right up to me and said, okay, here's the deal: we do not use communal admin accounts. He was very emphatic about it. Guess what I found out?
Yeah. Shared admin access.
Oh yeah, yeah. Yeah. There was one admin account called "admin". And the bad guy found out about it. Okay.
So generally speaking, if somebody's extremely emphatic about something, in the back of your mind, guess what? Just keep that back there. So there's that technical aspect.
And to the second half of your question: generally, by the time you get to the end, and this is kind of the approach I take, as I'm going through the engagement and developing information, I'm interacting with my point of contact and the folks I'm working with. You know, if you think about it from a military perspective, generally an incident responder is less of a Texas Ranger and more of a Green Beret, because you're going on-site as an individual or a small team.
And you're working with the local folks: the local IT staff, any security staff that they have, contractors that they have. So you're working with them, trying to develop their trust, not always successfully, but trying to work with them and develop an understanding of where the data is and what went on. Obviously, like you mentioned, not pointing blame, because generally what happens as you work through this process is that people come to understand. They understand, yes, this was phishing.
They understand that these things happened because as you're getting to the point of delivering that final report, you've already gone through this process where you've actually had access to data and you've shared things with them. You've shared individual things.
Like, for instance, I did one engagement where, and this ties back to what you mentioned earlier about separating threat actor access from legitimate administrator or user activity, we found where an administrator had gone into systems, actually found one of the malicious services, and partially remediated it. I say partially because they might have deleted the file but not the registry entry, or the other way around.
And so, as you're developing this information over time, the folks that you're working with and engaging with are actually developing an understanding of not just how this initiated and how it progressed, but of other activities that went on along the way. Without pointing it out or blaming anybody, you're actually able to show them: hey, look, somebody saw something and tried to remediate it on their own, but didn't tell anybody else.
They didn't communicate it to anybody else. They just went over here and did this thing. But even though they administer the system, they don't really understand how a Windows system works, so they only partially remediated it. And yes, it's non-functional, and it may not have lit off the antivirus or any other tools that we use, but there's something here that is an indication that something bad happened and somebody tried to fix it.
All of this is part of the communication along the way, because what you're trying to do is you're trying to help them get back to a normal operating environment, which is what they want, right?
So you're trying to get them there, but you're trying to help them understand, hey, look, this is the thing that originated this issue. And here's some other things that we could probably do better at the next time. So that's not a surprise ...
It's really interesting.
by the time you get a report.
Yeah. It's really interesting.
I mean, there's so many things firing in my head as you're talking through that, and I can remember my days as a practitioner, when I ran, you know, starting with a Windows NT 4 network with NT 4 workstations everywhere, and then, you know, up through Windows 2000 and AD and so on.
And it was a little bit of that before I really transitioned out of the practitioner role around 2010. But we were very Windows-heavy at all the companies I worked at over time. You know, recently, at a lot of the corporations that I've worked for, the shift has been more towards Mac and Linux, even down to, you know, end-user computing devices.
And I guess one question that comes to mind is, from your experience, do you think Windows is still kind of the target system of choice for threat actors?
Yeah. Why not?
Everybody, everybody that they're targeting has got it. And when you really think about it, don't think about the targets themselves. Also keep in mind folks like you and me. Practitioners.
So when you think about it, if I go after a Windows system, but I have the capability to move to Linux or macOS, what's that going to do with respect to the response? If I move to a Mac, and I'm comfortable doing it as a threat actor, are the local IT staff and the incident responders, whether they're organic or consultants, gonna be comfortable moving to that environment?
That's something you have to think about, you know, especially from a response perspective. It may not be directly in the minds of the individuals, you know, the threat actors. I can't speak for what they're thinking, but I can speak for what I see as an incident responder or managing an overall incident response.
And many, many times where there has been like a Mac system, you'll see a slowdown. People aren't responding as fast because they're not familiar. And I'm not saying this is always the case. It's just, just my aperture, my optics.
Yeah, they may not have the tools or the tactics to go, you know, investigate properly or to remediate properly.
Right. You did mention Linux. Well, what about Windows Subsystem for Linux? I've seen it in environments recently where you would not think that it was something that was popular, per se.
But if the organization has, say, for instance, EDR telemetry, a very easy way is to just search for the execution of wsl.exe, and I've been kind of surprised, you know? It's like, wait a minute, okay, so the individual endpoints are not completely locked down. Right.
But there are people out there who, whether just out of interest, like, hey, how does this work? Or, hey, I found instructions on how to install this. Suddenly they have an Ubuntu system right there on their Windows system. And yeah. So what does that do to response? How does that impact a response?
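The hunt Harlan describes, searching telemetry for wsl.exe executions, can be sketched like this. The telemetry rows are hypothetical; a real hunt would be written in the query language of whatever EDR product is in place.

```python
# Sketch: sweep process-execution telemetry for WSL launches.
# Telemetry records here are hypothetical examples.
SUSPECT_IMAGES = {"wsl.exe", "wslhost.exe"}

def find_wsl_launches(telemetry):
    """Return telemetry rows whose image filename matches a suspect name."""
    return [
        row for row in telemetry
        if row["image"].rsplit("\\", 1)[-1].lower() in SUSPECT_IMAGES
    ]

telemetry = [
    {"host": "WS-01", "image": r"C:\Windows\System32\wsl.exe"},
    {"host": "WS-02", "image": r"C:\Windows\System32\notepad.exe"},
]
print(find_wsl_launches(telemetry))
```

A hit isn't automatically malicious, of course; as the conversation notes, it may just be a curious user, but it's a cheap way to find out whether an unexpected Linux environment is sitting on an endpoint.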
Yeah, it's really an interesting point.
And I think that kind of speaks to one of the challenges that I know the broader cybersecurity industry has, which is, you know, a lack of talent, right? Or a shortage of talent. I shouldn't say lack of talent, I should say shortage.
What makes you say that?
Well, it's a good question. What does make me say that? Probably, you know, the number of media articles that are out there saying, hey, there are X million open positions and we can't find people to fill them.
And where does that come from?
And ... great question. I don't really know. I've not done the deeper research into kind of the secondary sources.
I'm not trying to put you on the spot at all.
No, but it's a fair, it's a fair question to ask.
So what, what's kind of your, what's your take, I mean, where, where do you think the industry is and, and kind of where, what are you driving at here?
Well, a lot of this, from my perspective, and, you know, I could have a full-time job chasing after this stuff, but from my perspective, limited as it may be, a lot of this "reporting" of shortages of talent comes from circular reporting on a couple of surveys from a few years ago, like five years ago.
So what they did is they went out, and I read them at the time, and they interviewed hiring managers. So it's a survey; it's not technical data, because one of the things they're doing is making a very broad set of assumptions when they're surveying the hiring managers, like: okay, are you capable of writing a cogent job description?
And even just this morning, I saw somebody on social media say, hey, why are you posting entry-level positions that require a CISSP when the CISSP requires five years of experience? That's not entry-level, you know?
And, you know, I spent half of 2020 unemployed because of various economic factors. And one of the things I ran into was an individual who had produced a particular technology, and he found a job description that said you must have four years of experience in this technology. Well, he had only released it 18 months prior. So even he, applying for the job, wasn't qualified, 'cause he didn't have four years of experience.
You know, so the question becomes: when you look at the reporting, where does that report come from, and where does that information come from? And then you trace it back. Well, it's a survey. And are we really gonna get into the question of whether it's a statistically relevant survey?
It's like, how do you know? And basically the people that ran the survey said, well, this is the data we've got, and we don't have any other data. It's like, okay, I get that. Okay.
But it's a survey of individuals. And, you know, I transitioned out of the military in 1997, and something I've seen throughout my career is that many of the people I've interviewed with didn't know I was coming for an interview until I was dropped off at their cubicle. Okay?
So, a hiring manager shows up, introduces themselves, takes me over to somebody's cubicle, drops me off, and hands over my resume. That's the first time that person has seen my resume. Now, does that person have any training in how to conduct an interview? Well, very often not.
So when we talk about a shortage of talent, what are we really saying? And it's kind of an open-ended question. Yeah. It's kind of rhetorical , you know?
Yeah. Yeah. So, so great point.
I guess where I was going was a little bit more towards the end-customer focus on this, because one of the things I have heard from a number of, let's say, fellow CEOs is that they do struggle to find people. They're having trouble competing on offers, or losing candidates to higher-paid positions. And that's, you know, neither here nor there. That can happen. Part of the question, though, becomes: if you are starting a business, or, let's say, your business is growing and you're either facing, or feel like you're facing, an increased or stronger threat landscape where you are potentially going to be targeted ...
Is it the right approach to think about bringing on a team or to think about a managed security service?
And how do you think about kind of advising customers on making that decision about, Hey, you should hire in-house or you should, you know, work with a third party?
I personally tend to think there's a lot of value in third-party service providers who do managed security offerings, because (a) they have, you know, people who are real pros: hardened, battle-tested, have done this day to day, and are really experts in their field.
And second, you know, they're really devoted to it. Whereas I know from my own time as a practitioner that at a lot of companies, IT and security get kind of co-mingled, and you're often pulled in one direction or the other. And you can often be kind of in the middle of that tension between, we need productivity versus we need security.
And you might feel like you're making compromises on, let's say, the secure posture of systems that you're standing up, in the name of: yeah, but I have that contract developer who needs to be able to SSH in remotely to deploy code.
And there's always that kind of inherent tension. So I see a lot of value on that side. But I guess, overall, how do you advise customers to think about making that evaluation between, you know, in-house versus third-party?
Well, I kind of look at it through the aperture of my background.
So as an incident responder, I've shown up at sites where they have nobody that really understands security, and that's not a hundred percent of the customers. I mean, some people have a general understanding of security; they're just not familiar with the specifics of what we call digital forensics and incident response, and that's fine.
You know, that's why I'm there. I do believe that there is value in expertise, but I also believe that, very much like the military, we need to have people close to the systems to be able to respond immediately, to be able to recognize that there's a problem and understand how to respond.
You know, I remember when I went through military training, we had first aid kits on our equipment. And I remember when I first checked into military training, I checked out my equipment from supply and when I turned it in six months later, I don't think I ever opened the first aid kit. I had no idea what was in there. I had no idea if it was just like a block of wax or, you know, somebody had stuffed it with tissues or whatever or candy bars. I had no idea.
I'd never opened it. But later on, I began seeing that subsequent classes that went through the training, especially after 2003, when you had a lot of trauma, explosive trauma, gunshot trauma, that kind of thing, got much more focus on immediate response, because when somebody gets shot or hurt in a war environment, in a military type of environment, you can't wait to get them assistance. And there's really no difference with what we're seeing within environments that are compromised.
So why not have somebody that can initially triage and understand what's going on and interpret it correctly? There's a difference with a scratch, you know. You can tear your sleeve on something that'll go through and cut your arm, and it doesn't take much to square that away; you can take care of that yourself.
You know, so if Jeremy gets hurt to that level, he can always patch it up and put a bandaid on it himself. It's no biggie.
But there's a level where you're gonna need, well, a little bit more care because of what happens. So you're gonna need what the military refers to as buddy care. Your buddy's gonna grab your first aid kit and he's gonna put a bandage on, because you're not able to access it, or he's gonna put a tourniquet on. That's something we've seen a great deal of training in, especially since 2003. Or, you know, if you're at a platoon level, you're gonna call a corpsman. But all this capability is organic.
So you're gonna have your local IT staff that has training. You might have some security-specific staff, or like you said, some that straddle between the two, and then you're gonna have a small set of expertise. The big guns, the trauma surgeons, may be retained by larger companies, they may be retained at the corporate offices. But it's being able to respond immediately, to detect and respond and understand what's going on.
The number of cases that I've seen over 20, 22 years of incident response where something, whether it was antivirus or, more recently, some sort of EDR technology, had actually been alerting, but nobody interpreted it correctly. You know, I've worked for EDR companies that have notified customers through email. Is the person that gets that email even there?
You know, is it a communal inbox where nobody's checking it? Or does it go to an individual account, and that person is either on PTO or they left the company
Or change jobs.
Or change jobs within the company itself. You know?
So being able to identify, understand, and respond to something locally is absolutely critical, because that's how you get that preservation of information.
Yeah, that preservation of information, that's something that stuck with me throughout our conversation, because I remember back in my day, and I didn't work for large, large companies, the top-sized company I was at, you know, running IT or working in the IT and security teams, was probably in the three, four hundreds of people and endpoints.
You know, the mindset that a lot of people had in those days was, okay, we run standard corporate builds, we buy pretty much standard corporate machines. If we think there's a problem with a system, whether it's, you know, virus, compromise, poor performance, whatever, one of the very first steps was wipe the system, reload the corporate image from scratch. Now, obviously, I'm gonna be destroying evidence there that is useful in an incident response.
Do you still see that, you know, MO or do you still see that tactic being deployed?
Oh yeah, yeah, yeah.
It's because of the business needs, and like you kind of alluded to earlier, you know, and I actually heard this term years and years ago, like '97, '98-ish: security breaks stuff. People want to do things and somebody from security just shakes their head, and that's not communication.
You know, I remember
Security's the team of no.
I remember going to a customer site and doing some consulting, and the network engineers at the customer site were really upset because the security guy had said no SNMP. And he's like, well, we need SNMP to manage the network. Not just servers, not just systems, but devices.
You know, I worked with SNMP and developed code to work with SNMP data as part of my Master's thesis for my graduate program, so I was kind of familiar with it. And I was like, well, how about just blocking it from the internet?
Like, not allowing it in through the routers and firewalls. Would that be cool? And everybody was happy, you know. But it's like, why does security have to break stuff?
Now, look, there's security-related stuff that has no business value whatsoever. Like, can you think of any business function where Windows needs to maintain credentials in memory in plain text?
Okay. It's like zero,
Zero. Okay. So guess what?
Yeah, let's say no to that. Okay. And if somebody needs to do it for some purpose, you know, maybe make it a business process to have justification for it. Otherwise, if we ever see that happen, if we ever see a system changed to enable that functionality, guess what? It's kind of a bad thing.
So let's use means to recognize that. Let's use means to detect it, and let's respond to it. And, and when we say response, let's say whatever detection mechanism we use, let's have it automatically isolate the system. So instead of waiting for Jeremy to get over there and check on the system, maybe we just isolate it on the network because, you know, to be really honest, it's what we should probably do.
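Harlan doesn't name the specific mechanism, but the "credentials in memory in plain text" example maps to the Windows WDigest authentication provider. A minimal hardening-and-audit sketch, assuming Windows 8.1/Server 2012 R2 or later (where plaintext caching is already off by default) and the standard `reg.exe` tool:

```shell
:: Sketch, not a definitive playbook: enforce the secure WDigest default.
:: UseLogonCredential=1 re-enables plaintext credential caching in LSASS,
:: so we pin it to 0 and treat any change as a detection opportunity.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest" ^
    /v UseLogonCredential /t REG_DWORD /d 0 /f

:: Audit check: a value of 0x1 here is the "guess what, bad thing" signal
:: Harlan describes, and is worth an automatic alert or host isolation.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest" ^
    /v UseLogonCredential
```

In practice you'd push the value via GPO and have your EDR watch that registry key for modification, rather than running raw `reg` commands per host.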
Now, there are some things, and we've seen this recently from Microsoft, for example, where we generally think that something would be bad, that it would not be a legitimate business function, you know, running macros in Office files downloaded from the internet. But guess what?
As Microsoft found out, they got a lot of complaints. But that doesn't mean an organization that doesn't have that as a business function can't just make that the default setting. Hey, guess what, anything downloaded from the internet, Office files, we're not running macros. Period. End of story.
We have no legitimate business reason for doing so, so let's just not do it. Let's set it up so you technically can't, and imagine the impact that it'll have. Now, the same thing is true with the reaction. Right?
What we've seen is the reaction from the bad guys is to go to archive files and ISO and IMG files and shortcuts. Well, guess what?
Right. But do you have a legitimate business reason to automatically mount an ISO file?
Do we have that? Or do we have a legitimate business reason to automatically run anything, like, you plug in a USB device and it just auto-runs? Do we have a legitimate business reason for doing so? And if we do, let's take a look at that and maybe put it on an isolated system, so that we know we've got, like you said, 3,000 systems in the organization, and all of this is default settings.
We're not gonna enable this functionality. And oh, by the way, on the one system where we do need it, which is this system over here, identified by name and IP address and user, we're allowing that, but we're gonna monitor it.
And we'll maybe like network segment it off.
Even better, you know?
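The "turn it off by default" ideas discussed above can be sketched as registry settings, assuming Windows endpoints and Office 2016 or later (the `16.0` policy hive). This is an illustration, not a complete baseline; in a real fleet you'd deploy these via Group Policy or MDM rather than per-host `reg` commands:

```shell
:: 1) Block macros in Office files carrying the Mark-of-the-Web (i.e.
::    downloaded from the internet). This policy is per application;
::    Word is shown, with sibling keys for Excel, PowerPoint, etc.
reg add "HKCU\Software\Policies\Microsoft\Office\16.0\Word\Security" ^
    /v blockcontentexecutionfrominternet /t REG_DWORD /d 1 /f

:: 2) Disable double-click mounting of ISO images by hiding Explorer's
::    "mount" verb; scripted/programmatic mounting still works on the
::    one monitored system that genuinely needs it.
reg add "HKCR\Windows.IsoFile\shell\mount" ^
    /v ProgrammaticAccessOnly /t REG_SZ /f

:: 3) Disable AutoRun for all drive types, including plugged-in USB
::    devices (0xFF sets the bit for every drive type).
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer" ^
    /v NoDriveTypeAutoRun /t REG_DWORD /d 255 /f
```

Each of these maps to a point in the conversation: no macros from the internet, no automatic ISO mounting, no USB autorun, all off by default unless a documented business justification exists.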
Yeah. Yeah. That's a great set of, kind of, default recommendations for us. And unfortunately I think we're running out of time for today's episode, but Harlan, it has been a real pleasure chatting with you today and learning about threat hunting, learning about incident response. I think our audience will really enjoy this episode.
Thank you so much for taking the time to join us on the Ask A CISO podcast.