In this episode of Modern Cyber, Jeremy is joined by Gemma Moore, Director of Cyberis, to dive into the world of red teaming and penetration testing. Gemma, an award-winning ethical hacker, explains the key differences between the two approaches and how organizations can use them to strengthen their security posture. They discuss the challenges of testing modern cloud-based environments, the ethical considerations of social engineering, and the importance of fostering collaboration between red and blue teams. Tune in for insights into how companies can identify and mitigate real attack pathways before adversaries do.
About Gemma Moore
Gemma Moore is a highly-experienced Red Teamer, Penetration Tester, and Technical Security Consultant. Her expertise lies in network and web application penetration testing, with an emphasis on adversary simulation and simulated attack strategies. She is a founding director of the information security consultancy, Cyberis.
Cyberis Blog: cyberis.com/blog
Gemma's LinkedIn: https://www.linkedin.com/in/gemma-moore-9839921/
Penetration Testing: A Guide for Business and IT Managers: https://shop.bcs.org/store/221/detail/workgroup?id=3-221-9781780174082
Alright. Welcome back to another episode of Modern Cyber. As usual, I am your host, Jeremy, and I'm delighted today to be joined by Gemma Moore, director of Cyberis. Gemma is a highly experienced red teamer, penetration tester, and technical security consultant. Her expertise lies in network and web application penetration testing with an emphasis on adversary simulation and simulated attack strategies.
She's a founding director of the information security consultancy Cyberis, and she's been in this career for over twenty years at this point, or maybe nearly, excuse me if I get that wrong. She holds CREST certifications in infrastructure, applications, and simulated attack. She was a contributing author to the BCS book Penetration Testing: A Guide for Business and IT Managers. We'll have that book linked from the show notes.
And Gemma was named Best Ethical Hacker in the 2018 Security Serious Unsung Heroes Awards, has been honored by SC Magazine as one of its 50 most influential women in cybersecurity, and by IT Security Guru Magazine as one of its most inspiring women in cyber. Gemma regularly contributes articles to the Cyberis blog, which we will also have linked from the show notes, as well as her LinkedIn. Gemma, thank you so much for taking the time to join us today on Modern Cyber. Thanks for having me, Jeremy.
That was a long list. It feels too long. Twenty years is a long time. It is. And, you know, I think many of our guests on the show have similarly long lists, and I'm really pleased to have them join us on the show because I think we can learn so much from everybody and their experience.
And I think the domain of cybersecurity is so broad that there is enough room to hear from so many voices, including yours, because one of the things you've obviously got is deep experience and specialty in penetration testing and red teaming. And I guess just to set the stage for today's conversation, in a nutshell, call it the sixty second description, how would you describe red teaming and penetration testing? So with penetration testing, you've got your scope of work, which is a system, an application, a network. And your job when you're doing a penetration test is to test that system, network, or application and find all the technical vulnerabilities you can, find out what's wrong, give recommendations on how to fix it, and basically find the problems before an adversary does so that you can fix things.
Red team is a bit different. So penetration testing is focused on technology and vulnerabilities in technology. Red teaming tests the whole organization, and that means you're going against the live organization as it's operating. You're working against people, process, and technology.
You're looking at how, you know, various systems that you might have tested in isolation work together. Most importantly, probably, you're also testing whether you can detect an intrusion in progress and whether you can respond to it. So it's a lot broader, if you like, than a penetration test. It's a lot less focused on technology and more focused on finding an attack pathway that leads you from an initial point of compromise all the way to achieving some objective, like stealing some data or deploying ransomware at the end.
They're sort of cousins, I would say, in a way, penetration testing and red teaming, but there's, you know, different reasons for doing each one. Would you consider red teaming almost a superset, and penetration testing maybe one arrow in that quiver, so to speak, where a penetration test might be part of the scope of a red teaming exercise? Or do you really think of them as related but more like cousins? I think they're very separate things, because your reasons for doing them are completely different. So if you're building a new system, you want to do a penetration test to check that you haven't got any vulnerabilities in there that someone is going to exploit, and they might not be very serious in isolation. But you want to make sure that you've got all of the things that you might be able to find ticked off and fixed before you release, for example, your application.
And, normally, it's part of a compliance process. So you've got, you know, your ISO 27001 compliance process or whatever your risk management framework is, and part of your risk management is that you will do a penetration test of anything that you release before you release it, so that you know you've got at least a baseline of assurance that something you've developed or released is safe. Red teaming is a little bit different, because you've got that emphasis on detection and response, and because you are looking for an attack pathway, you're not trying to get full coverage. So in a penetration test, you're trying to find all the things that are wrong with the system. With a red team, you're just trying to find the pathway.
The pathway from, you know, get your initial foothold all the way to steal the customer database or whatever it is, in the most expedient way possible without getting caught and, you know, without being detected. It's interesting because you learn an awful lot from a red team that you wouldn't learn from penetration testing, about how things fit together and how processes work and where little visibility gaps are. So it's a really interesting process, but they do very different things for you as an organization. Yeah.
Yeah. I really like that explanation, and that makes a ton of sense to me. It's almost like looking at one thing in isolation for all of its problems, and then looking at the holistic organization for, like you said, this pathway through. And I think in cybersecurity, and especially cloud security, which is the domain that I came from primarily, most organizations, when they start to look at their cloud security posture, end up finding that they have hundreds, if not thousands, or even tens of thousands of misconfigurations. I've actually seen that in some customer environments.
But one of the challenges for them is that a lot of these misconfigurations have almost no impact, because they're minor misconfigurations, let's call them at the periphery, or at the end of a network navigation path or something like that, where they don't have any impact. But in cloud security, one of the real challenges is figuring out, okay, you've got this crown jewel data. Maybe that's a database, maybe that's a set of files, maybe that's some asset or whatnot. And what you're really looking for is the attack path from the outside, through the cloud infrastructure, applications, identity, network, etcetera, to the data, and that's really the thing you care about the most. So it sounds like red teaming will help you identify that, where penetration testing might help you identify, at the edge of that, what are all the things wrong with that application entry point.
Yeah. And that's exactly right. Because a penetration test with cloud services often ends up being a bit of an audit on config, because that's how the world is now. And you will end up with a massive list of: these are all the things that are not in line with best practice that you should probably think about fixing. Whereas in a red team, where you've actually got an exploitable attack path that uses these vulnerabilities, it can really help you prioritize what is actually the biggest problem here.
Is it, you know, that we need to start templating our cloud configurations, for example, rather than letting them roll out ad hoc? Is that the way we fix this scattergun problem? Or is it that we've got a problem in the network segmentation? Is that the thing we need to focus on? And there's all sorts of ways that it can help you prioritize, because it helps highlight exactly where you've got blind spots and where you've got big exposures. You know, maybe one team is responsible for one silo in the cloud and one team's responsible for another, and nobody thinks they're responsible for the bit that joins them together, and that might be the problem.
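To give a concrete flavor of that "audit on config" point, here is a minimal sketch of the kind of check such an audit might automate, in Python with boto3. The specific check, flagging S3 buckets with missing or partial public-access blocks, is an illustrative assumption, not a description of any particular firm's tooling, and it assumes AWS credentials are already configured.

```python
# A minimal sketch of the "config audit" flavor of cloud penetration testing:
# flag S3 buckets with a missing or partial public-access block. Illustrative
# only; real engagements cover far broader ground (IAM, networking, logging).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"[!] {name}: public access block only partially enabled: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"[!] {name}: no public access block configured at all")
        else:
            raise
```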
And so it can show you all sorts of things you didn't know before. Well, along those lines, one of the things that comes to my mind is that you're going through that exercise, you're examining the organization for what it is and for where its flaws are. And it strikes me that as you do that and you start exposing these flaws, people might not be too happy with what comes out of your work. So I'm just wondering, you know, there's that old book, How to Win Friends and Influence People.
It sounds like this is a good way to lose friends and alienate people. How do you manage that process without upsetting the organization? It's a challenge, because when we're talking about red teaming, we're talking about looking at process and looking at people as well as the technology. Now, when it's the technology that's wrong, that's not emotional, because the technology is the technology.
Oh, there's a computer bug. Okay. Well, if you patch it, fine. But when it's, do you know what? You haven't thought about this process properly.
You've got problems there. You know, someone has sat down and engineered that process, and they may be precious about it, they may be upset about it. And even with penetration testing, if you come across vulnerabilities where somebody has made a conscious decision to put something in place that is vulnerable, you can get resistance to that. But there's other ways that you can upset people when you're running red teams that you might not necessarily immediately think of, and actually managing that process is part of the reason why running a red team can be quite difficult but also very rewarding. So when I said, you know, in the definition of red team, detection and response is a really big part of that. And it's not that unusual for us to be able to get our foothold, go through, grab whatever it was we were after, and get out, with substantial parts of what we've done going completely undetected. That's not an unusual situation. And if you put yourself in the shoes of what we call the blue team, the response teams, that can be quite a challenging thing for them to take on board, because quite often they've got the feeling that they've failed or that they are being unfairly tested.
And these types of cultural things come up, and they actually don't make you much better at what you're doing. But the key to keeping the blue team on side is making sure that you turn any red team you do into an exercise that is productive for them, and that means, at least afterwards... So the red team is normally covert. Sometimes we involve the blue team; we call that a purple team. It gets very confusing with the color spectrum. But, you know, when we're working covertly, the really important thing to do after you've done your red team bit is to make sure you have a proper debrief with the blue team, because the blue team can learn a lot from a red team, just like the red team can learn a lot from a blue team. You don't get good at playing football on your own; you need someone to play against.
So it's really important that the blue team understand that we are there to help improve their ability to respond. That doesn't mean they've done anything wrong. It's an opportunity for the blue team to actually make points that they've probably been making to their managers the whole time. You know, the blue team probably know what they can't see. They probably know which controls don't work.
They probably know that there are bits of the network that they are, you know, blind in, or cloud controls they've got no logging for that they want logging for. But, you know, it probably costs them money per log entry, and nobody's willing to take on the cost. Things like this are the challenges that the blue team have. Yeah. And often, the results of a red team, where we can prove that we were able to, you know, steal the crown jewels because the blue team couldn't stop us, because they couldn't see what we were doing, because they've got no control across that bit of the network or that cloud portion or something like that, that's a really good business case for the blue team to go up to management and say: this is what's gonna happen if an adversary gets in. This is the budget we need to prevent it happening.
These are the controls we need you to buy so that we can respond effectively, because we now know that if we do not do these things, things that the blue team probably already know about and want to do, we can't respond. But it's a really good basis for a business case, which is helpful. Well, that brings me to exactly my next question that I was thinking about, which is, okay. So you've done, I don't know, a gazillion of these, let's just say. Right?
But, you know, many, many. Right? And so along those lines, what's the best practice for presenting the results of what you found in a productive manner that is not going to enrage people, upset people, etcetera? Because I totally hear what you're saying. Likely, the blue team kinda knows we've got some deficiencies on, I don't know, logging, segmentation, whatever the case may be.
They've been yelling about it, but there's always, as there always is in cybersecurity, that kind of cost benefit trade off question and the prioritization question. So what have you seen as kind of thematic? Because I know it's gonna be different, and it depends from organization to organization. But what are, like, some common themes in presenting this effectively? So culture, as you suggest, is really, really important for this.
And in some cases, you'll have a culture where, when faced with this type of result, people understand that there are financial impacts. And sometimes you've got a situation where upper management doesn't necessarily see the financial impact. Now, the role that I take on these days in red teams is the red team manager, if you like. And my job is effectively to be that translation layer between what's happening technically and senior management and the control group within the organization that we're working with. One of the most satisfying parts of my job is taking what, to the executive or the board, looks like a load of technical gibberish, because it's, you know, this vulnerability, this vulnerability, this vulnerability.
Even through to things like, you know, if you're in an on-premise environment, domain admin. That means nothing to a member of the board. Like, if you're technical, you know what it means to have domain admin in a Windows network. Right? You can do what you want.
But if you are an exec, that means absolutely nothing. And part of the job is, so what? So, you know, you say you can do this, that, and the other. The job that we have, to make this effective, to actually make sure we get something actionable out of it, is to translate from this huge pile of technical findings and individual parts of the attack pathways for an executive audience. What are the things you're going to worry about?
How much money is it likely to cost you if this goes wrong and you haven't fixed it? And, you know, what are the quick wins, and what are the things you need to plan strategically? And it's setting out that information in a way that they can understand, because it's in terms of, you know, compliance failures. Yeah. Who are you gonna have to report to when this customer database gets lost, and what's that gonna mean in terms of PR and press and things like that? And, you know, what's it gonna cost you? But I'm curious.
Right? Along the lines of that so what, have you seen... because one of the things that came to my mind as you were saying that, you said so what, and I was like, okay. Now let me put myself in the manager's position. And you come to me and you say, well, you know, we've got this thing, and it is, I don't know, domain admin.
And that could lead to actually everybody in the organization being locked out of their email, because domain admin gets compromised, and the first thing that the attacker does is just go and switch off email access for everybody, whatever. And the manager hears this and says, are we so poorly configured that that kind of risk could happen to us? And actually gets upset with the blue team as a result of that, because they find that the existing organizational controls are, you know, suboptimal, so to speak.
Yeah. So things like that do happen, and to some extent it's something that can't be fixed without changing the culture from the top down. I'm a big believer that if you want to be secure, if you wanna have good resilience overall, you really do need a positive, cooperative, helpful culture in your organization, because good security and good responsiveness really do depend, when it comes down to it, on internal relationships between people: people working together, people having the same goals, pulling in the same direction, and understanding each other's points of view. You see that over and over again in incident response; the companies that do well detecting us are the companies that have that kind of culture.
But you will get people sometimes, as you describe, who end up throwing blame places. Well, that doesn't help. It's not productive. While it may happen, often we're able to intervene as part of that process when we're managing the red team. We say, actually, you know, this isn't necessarily a point problem.
This is likely a result of, you know, operational drift, shadow IT, people having business requirements that drift over time. And in that kind of organization, the emphasis has almost always been on productivity and people being able to work, because, you know, you're running a business. You can't make things so secure that people cannot work. And these decisions at an operational level have consequences, and those consequences often reduce security. And until you look at it holistically, you may not understand how a point decision in one department impacts the security of the rest of an organization.
And, you know, we can make a case that effectively tries to take out the sting from these types of discussions. You know, even though we're the red team, we're never not on the side of the blue team. We are there because we want to make the blue team better.
We are there because we want an organization to be able to respond better. That's the point of us coming in to do this. It's to try and improve resilience overall. We're not there just to make a point and then run off.
You know? Well, along those lines, okay. So we've talked about, let's say, managing the management reception of what comes out of this exercise. What about managing the process of collaborating with the blue team afterwards and saying, look. This is where we were successful.
This is what we were able to do, what we were able to find. Do you then think about providing them direct recommendations? Or is it more like, hey, here's what we were able to do, and then let them brainstorm internally and figure out how they could have done a better job of detection or prevention? What's, like, a best practice or a good learning there? So it's a bit of both, is the answer. So, obviously, there's always a formal report from a red team, and we've got technical vulnerabilities, and we've got technical recommendations.
And that's our view of what is the best recommendation, the thing that fixes this. But that's not always practical to implement. So, network segmentation. This is something that's a problem in a lot of on-premise infrastructures, and it's really difficult to fix, because in any organization of a particular size that has been around for any length of time, you end up with these sprawling networks. And often, nobody really knows what needs to talk to what. So actually segmenting the network properly is a really big problem. Big challenge.
One that is not easy to fix. So then the conversation with the blue team becomes, okay. So you can't do the thing that will prevent this. So can you do things to make this harder? Can you isolate, for example, the critical data silos or things that are really important?
Can you prioritize those and shunt them off somewhere that's different? And if you can't do that, you know, what's the next best thing? So can you put controls in place so you're monitoring all those perimeters, so that you know what's crossing perimeters, and you can start monitoring anomalous traffic and things like that? And this process is iterative. You know, we've got our best practice recommendation, but that's not the only thing that you can do to reduce the risk, and that sort of debriefing process with the blue team helps us have a brainstorm with them and understand exactly, in their environment, with their constraints, what's practical to do.
Because there's what you can do now, which is a quick fix. There's what you can do maybe in three months' time, your sort of medium term fix. And there's what you can plan to do long term, which might mean, you know, rearchitecting your whole organization. And reducing risk now is about reducing the thing that is the biggest problem right now, the thing that's gonna get you in the most trouble right now. Yeah.
Prioritization. Yeah. It's prioritization. And we can help the blue team with that, and we do help the blue team with that. And that's probably where most of the value comes from: saying, right.
These are the things you've got wrong. Yes, it's a massive list. Yes, there's lots to do, but do that first. And, you know, it's things like phishing-resistant multifactor authentication. That's a big one.
You know, lots of multifactor authentication is not resistant to phishing attacks, and therefore, certainly in a cloud-based organization, you're one phishing attack away from someone being able to grab a bunch of data. Bad situation to be in. But if you know who's got access to data and you can put phishing-resistant MFA in, it's pretty quick to do these days, at least for a small set of users that are important.
And you can plan the rollout for everyone else, but you really, really make it a lot harder for an adversary to grab that data, first of all. And, yeah, it's finding the quick wins and making sure everyone's on board. But ultimately, our experience with the blue team is that sometimes when we start, the blue team's resistant, or they think they're being tested, or they're a bit twitchy about what we've got to say to them. But almost invariably, by the end, they're actually excited by what we've done. They're interested in how we've done it. They're getting involved in the process of thinking about what we've done and what the tactics were, and how they can start thinking like an adversary as well. Because that's what really helps the blue team: understanding how an adversary operates, which may be different to how they think when they're thinking about, you know, building systems or configuring systems, rather than someone breaking them. Sometimes you just haven't made the right assumptions about what an adversary will do.
And thinking like an adversary really helps you. Yeah. Along those lines, there's something you said in there that I'm really curious about. So let's talk about phishing-resistant 2FA or MFA. Right?
Mhmm. I think so many people by now are used to getting a text message with a six digit code. Yeah. You know, that was kind of the best practice for a number of years there, until people realized that, oh, crap, SIM swapping is a real problem, and it does happen.
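As a rough illustration of why those six digit codes are considered phishable while origin-bound credentials are not, here is a toy sketch in Python. The names and the HMAC stand-in for a real signature are hypothetical simplifications; actual FIDO2/WebAuthn uses asymmetric keys, but the origin-binding idea it demonstrates is the same.

```python
# Toy contrast: a relayable one-time code vs. an origin-bound assertion.
# Simplified for illustration; real WebAuthn signs with asymmetric keys.
import hmac, hashlib

SECRET = b"shared-authenticator-secret"  # hypothetical shared secret

def otp_login(code_typed_by_user: str, expected_code: str) -> bool:
    # The server only sees the code. A phishing page can ask the victim
    # for it and relay it in real time; the server can't tell the difference.
    return hmac.compare_digest(code_typed_by_user, expected_code)

def webauthn_style_login(origin_seen_by_browser: str, signature: bytes,
                         challenge: bytes) -> bool:
    # A WebAuthn-style assertion covers the *origin* as well as the
    # challenge. If the victim is on a phishing site, the browser signs
    # that origin, and verification against the real origin fails.
    expected = hmac.new(SECRET, b"https://real-bank.example" + challenge,
                        hashlib.sha256).digest()
    actual = hmac.new(SECRET, origin_seen_by_browser.encode() + challenge,
                      hashlib.sha256).digest()
    return hmac.compare_digest(expected, actual) and \
        hmac.compare_digest(actual, signature)

challenge = b"server-nonce"
# The phishing site relays the victim's OTP unchanged: login succeeds.
print(otp_login("123456", "123456"))  # True; the relay works
# The victim's authenticator signs the phishing origin: login fails.
phished = hmac.new(SECRET, b"https://evil-site.example" + challenge,
                   hashlib.sha256).digest()
print(webauthn_style_login("https://evil-site.example", phished, challenge))  # False
# The genuine origin still verifies fine.
legit = hmac.new(SECRET, b"https://real-bank.example" + challenge,
                 hashlib.sha256).digest()
print(webauthn_style_login("https://real-bank.example", legit, challenge))  # True
```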
But SIM swapping to me is the kind of thing where I almost wonder whether it is in an ethical gray area when you go into a red teaming exercise. You know? Or maybe it's across the line. You tell me. And I'm kind of curious what are some of the ethical, let's say, boundaries to red teaming, and how do you manage that for an organization where, for instance, part of red teaming might be social engineering, part of red teaming might be checking physical security to gain access to a building or, you know, to plug a cable into a network port in a visitor's conference room or something like that.
How do you manage that overall, and what are some of the lessons learned around that? It's a good question. I mean, the truth is that when we're red teaming, because we are doing it legally and we're doing it ethically, we are bound by constraints that an adversary is not bound by. And that means that we can't do everything an adversary can do, because an adversary will happily break the law, and we can't, we won't. You know, SIM swapping is sort of the tip of the iceberg with multifactor authentication; it's probably not something we'd do covertly.
It might be something that we'd look at and simulate with the consent of somebody and say, you know, technically, is this possible? But a lot of the time, we have to, as I say, separate ethically the part where we're testing the organization and the part where we're testing an individual. So you mentioned social engineering, and this is somewhere where you can really screw up and really upset people. So social engineering is something that's inevitably part of most red teams, and by social engineering we normally mean some form of phishing attack against some kind of employee.
And, you know, it might be a sort of email phishing attack, it might be an instant message phishing attack, it might be something like that. But that type of social engineering is normally what ends up being involved in a red team, because there's sort of initial access mechanisms that we can't use. So, you know, we can't, for example, go and compromise a website that people will go to. A lot of adversaries do this; we can't do that without, you know, someone agreeing to host malware for us, which is very unlikely to happen.
You know, that's not something we can do. So, you know, we have to limit these things, and we have to simulate them. When it comes to individuals, so the social engineering that we do with individuals, adversaries will do things like build up relationships via personal social media with people. They will go on LinkedIn, they will masquerade as people who are well known in the industry, make friends, and try and, you know, create connections and stuff like that, before leveraging those connections to get their malware installed by this individual, who, you know, thinks they are talking to a real person and has formed a friendship. Ethically, that's very dodgy.
Firstly, well, legally, it's dodgy to start posing as somebody on social media. It's ethically very dodgy to start crossing that boundary between someone's personal life and their working life, and we wouldn't target anyone on personal social media as a result. And therefore, we're limited to things like, you know, trying to send a Teams message to someone, you know, an instant message. That might be the thing that we do, to their work address, and we might make it about work. We wouldn't make it about personal stuff.
So you've got this sort of personal private boundary we try to be very respectful of, because, yeah, we all know if you go and do a bunch of open source research on somebody, you can find out stuff about them: about their lives, about their relationships, their likes, their dislikes. And the sort of ethical question in my mind really is, is it appropriate to use any of that information in an exercise against their employer? And my answer is no.
It's not acceptable to do that. An adversary will have no qualms doing that. Yes. So everyone should be aware of that exposure, because an adversary will do it. But ultimately, it's not just that it's not ethical; it's not even practical to do it.
It's not useful to do it, because I know, and I'm sure you know, and probably most people in cybersecurity know: if you have the right information about somebody, you have the right sort of relationship with them, and you have the right hook to use, you can convince anyone to open a file. You can convince anyone to answer a phone call. You can convince anyone to do something that you want them to do.
Visit a website. I would be naive, standing here even as a twenty year veteran of red teaming, to say nobody could phish me, because I'm sure if they did it right, if they put enough effort in, they could do it. And so it proves nothing. If I fail the first time and I fail the second time, eventually, if I'm determined enough, I will be able to convince someone to run some malware or open a link or something like that. What's important is, can you detect it when it happens? And have you got the technical controls, the right layers of technical controls, so that, you know, it's really, really difficult for an adversary to do?
You know, can you prevent the download of the initial stage? Can you prevent the execution of the next-stage payload? Can you prevent persistence? Can you detect persistence? Can you detect C2?
Can you prevent C2? These layers of technical controls are the things that are important, because if you rely on people... there's a real temptation to blame people for being human, I think, in this instance. Yeah. You know, we're moving into a world where AI deepfake generation is so good.
I do not think we are going to be able to rely on humans. Even in this conversation here, you would not be able to rely on me being human. I wouldn't be able to rely on you being human. We're at a point where, you know, you can't believe your eyes and ears anymore. You cannot expect humans, after millions of years of evolution where we rely on this type of communication, to be so on guard in every interaction that they are incapable of being fooled.
You can't rely on it. And that means you need other controls. And that means technical controls are where you need to focus. You can't blame people for being human. We are human.
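To make one of those layered controls concrete, here is a toy sketch of a classic C2 detection heuristic: many implants beacon home on a near-fixed interval, while human-driven traffic is bursty. The thresholds and the synthetic timestamps are illustrative assumptions, not production values or anyone's actual detection logic.

```python
# Toy beaconing detector: flag hosts whose outbound connection intervals
# are suspiciously regular (low coefficient of variation). Illustrative
# thresholds only; real detections layer many more signals.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float],
                         max_jitter_ratio: float = 0.1,
                         min_events: int = 10) -> bool:
    if len(timestamps) < min_events:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg <= 0:
        return False
    # Low ratio of spread to mean means "metronome-like" traffic.
    return pstdev(intervals) / avg < max_jitter_ratio

# Synthetic data: an implant calling home every ~60s with small jitter,
# versus bursty human browsing.
beacon = [i * 60 + (i % 3) for i in range(20)]
human = [0, 2, 3, 40, 41, 300, 301, 302, 900, 1800, 1801, 2500]
print(looks_like_beaconing(beacon))  # True
print(looks_like_beaconing(human))   # False
```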
Yeah. Yeah. And we've had a couple of guests on the show talk about some of the real challenges and shortcomings, and especially that point that you said about blaming people for making human mistakes. I mean, look. I'm like you.
I'm a twenty-plus-year veteran. I think I may be getting closer to thirty, painful as it is to admit. And I know in the last, like, six months, I had two experiences where I came very close to falling for something, and I, you know, kinda kicked myself afterwards. One was a website where I almost went through with the transaction, and the other was a piece of mail that showed up at my office that looked totally legitimate, until there was one small thing that I found on there. And, you know, I'm somebody who likes to think of myself as extremely security conscious.
You know, we're a cybersecurity company, and we are also a company that holds ourselves to very high standards in terms of SOC 2 and ISO 27001 and GDPR practices and whatnot. And we go through all the security training, and we go through all of this. And at the end of the day, to your point, we're all human. And if something is well enough crafted and if it is convincing enough, you're a human, and you're going to react in a way that you think is rational and correct for the situation. I have a follow-up question around social engineering.
You mentioned a couple of use cases, and there's one that you didn't mention that I'm kind of curious where it falls in your mind, which is, you know, calling up an IT support department and claiming to be a new employee who needs a password reset, or needs access to, you know, your email for the first time, or a new username and password provisioned, those types of things. I hear about that one pretty regularly. Where does that fall in your ethical scope around red teaming? We do those things quite regularly, but it's always a case of being careful and working with the control group, so we know, you know, who we are going to be calling, who we are going to pretend to be when we call. And there is, you know, a framework in place so that if this gets flagged as an incident and a response gets kicked off, there is somebody at the top of that escalation chain who knows that what we're doing is a simulation and can stop things escalating out of control, because it's a simulation and there's not an actual incident going on.
But, yeah, you know, we will ring the IT help desk, and we will pretend to be people and try and get passwords reset, at least where we have permission to do this from, you know, the control group that is managing the operation, the engagement. Yeah. And the thing that's most important is that, you know, in that situation, what we're doing is, yes, we are pretending to be someone else, but we are using, effectively, corporate services.
We are talking to people about their jobs. What we don't do is try and work out who's working on the IT service desk and anything about them as a person. And sometimes vice versa as well. We may call up, if it's in scope, employees of a company, if we can find numbers, and masquerade as the IT help desk, and see if that will get people to install software.
But, you know, we're really careful when we're reporting about this and how we actually pitch this, because when it gets to the point that there is something in writing about what's happened, we don't want it to be about, you know, Sarah down the road who fell for someone ringing. We want it to be about, okay, you've got these technical controls that didn't stop us from doing this, because, you know, people are gonna fall for things. And your control assumption has to be that, at some point, someone is going to fall for a social engineering attack. Your layer of defense cannot be a person.
It's got to be in other controls, in process or in technology. Because, yeah, you know, if you're relying on people and blaming people for being people, you've already lost.
Yeah. Yeah. I'm curious. Do blue teams usually know when a red team exercise is going on? Not always.
In fact, often they don't. Normally, someone right at the top, the last person that will be escalated to, knows a red team's going on, and they don't tell anyone. And the reason that they know is this uncontrolled escalation thing, because if you're doing a simulation, what you don't want is an incident response by the blue team escalating to the point where, you know, someone outside of the organization is gonna be notified. You know, if they start talking about notifying the Information Commissioner's Office or something because they think a breach has occurred, someone needs to stop that happening.
Bring everything back and say, this is a simulation. It's fine. Nothing's been breached. That type of thing. Yeah.
So someone at the top normally knows, but most of the blue team don't know. Sometimes they know there's a red team, but they don't know when. Sometimes, say, we're sort of collaborating. They will know something's going on. And rather than, you know, being covert about it, we'll be sharing information as we go along and saying, we've done this, we've done this, have you seen it, have you got any alerts, have you got any alarms? And that's when it becomes more purple.
But it's a bespoke process, so, you know, we adjust our approach to meet the objectives of the organization. And when you are trying to work out in a realistic way whether you can detect and respond to an incident, not telling the blue team is the best way to get that objective covered off. Because if they don't know it's coming, they react as they normally would. Whereas if they know it's coming, quite often there's some bias in how the blue team will react. They'll be like, it's a simulation.
We won't bother analyzing that properly. Or, it's a simulation, there's no real danger, we can chill out a bit. You know?
Yeah. Fair enough. Beyond red teaming, are there other exercises or other types of activities that you engage in that help organizations uncover where they might have, let's say, weaknesses in their response plans? Like, do you run, let's say, simulated incidents or simulated attacks, or, let's say, assume-breach scenarios and, you know, responses and tabletop exercises around that? And do you find those to be as useful, less useful, or useful in different ways?
How do you think about all of that? We do all sorts, and I think useful in different ways is what you've got there. So, like I say, the tabletop exercise is a good one. We often run simulated incidents where we get everybody in a room. It's often off the back of a red team, actually. We'll simulate that something has happened that might have been plausible in the scenario of the red team, and we'll say, right.
You've had a notification that, for example, this chunk of data is on the dark web. Nobody's seen it go, but this chunk of data is on the dark web. That's where you start, and then you're like, okay. So how do we identify, is that our data? You know?
Do we know where the incident response plan is? Does everyone know their role in this response? Who's gonna communicate with whom? And, you know, all that type of stuff. Yeah.
Make sure everyone's well practiced. And it goes beyond just the blue team as well, because something that happens in an incident that, you know, not everyone's immediately aware of is all the other stakeholders in the business. So you've got people from legal, you've got people from marketing, you've got people from finance, often, and they all get pulled in. And quite often they don't talk to each other very much. Yep.
So actually doing those tabletops is a really good way of getting people to know each other, and, like, literally know each other, so they know the person they're gonna be talking to when an emergency happens. And that's often really invaluable. And the other things that come out of tabletops that are really interesting to discuss are things like: you pull everyone into a room, and then we simulate a ransomware outbreak, and the first thing we do is take everyone's laptops away and say, right, you have no laptops, because they've been ransomed. And often, they've never tried to run their incident response plan without their online files. And that is something that you don't necessarily think about immediately when you think ransomware attack.
It's something that, unless someone points it out, you don't realize: you're not gonna have access to this. So what's your plan when your laptops are dead? Or even, you know, not even a cyber incident. The thing that happened with CrowdStrike, you know, that bricked everything. But again, yeah.
Same sort of thing. How do you respond when nothing works? And lots of times that's not been practiced. So that's really useful stuff.
And another thing that's useful, differently, is attack path mapping. So obviously with red teaming, we're looking for a single attack path. If you work with an organization to map attack paths, rather than the one that we look at in a red team, you can come up with hundreds. You can then see, okay, if this pathway were executed, would we see it? Have we got controls in the way that would prevent it? And, effectively, you can look at your defenses and prioritize them in terms of all the attack paths you've mapped and how they might impact, you know, the end goal that an adversary might want to reach.
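As a rough sketch of how attack path mapping can be treated as a graph problem, here is a minimal example in Python using networkx. The nodes, techniques, and control annotations are entirely hypothetical; real mappings come from architecture reviews, interviews, and test results, not a hard-coded graph.

```python
# A minimal sketch of attack path mapping as a graph problem.
import networkx as nx

g = nx.DiGraph()
# Each edge is a technique an adversary could use, annotated with whether
# a detective control currently covers it (assumed values for illustration).
g.add_edge("phish_user", "workstation", technique="payload execution", detected=False)
g.add_edge("workstation", "file_server", technique="SMB lateral movement", detected=True)
g.add_edge("workstation", "saas_admin", technique="token theft", detected=False)
g.add_edge("file_server", "customer_db", technique="credential reuse", detected=True)
g.add_edge("saas_admin", "customer_db", technique="API data export", detected=False)

# Enumerate every pathway from initial foothold to the crown jewels, and
# prioritize the ones that cross no detective control at all.
for path in nx.all_simple_paths(g, "phish_user", "customer_db"):
    hops = list(zip(path, path[1:]))
    undetected = all(not g.edges[u, v]["detected"] for u, v in hops)
    marker = "BLIND SPOT" if undetected else "covered"
    print(f"[{marker}] " + " -> ".join(path))
```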
So, yeah, there's loads of different services that kind of impact resilience in different ways. But the beauty of a lot of cybersecurity, I think, is that you can customize a lot of stuff. You just need to know what you need to achieve. If you know what you need to get out of it, there's a way to create a package of work that will meet those needs and give you something valuable and actionable at the end of it.
If you don't know what you want when you go and commission a pen test or a red team or, you know, an adversary simulation or something like that, if you don't have a good idea of what you want to get out of it, you're not gonna get the best value from it. But, I mean, that on its own opens a whole can of worms that I don't think we have time to get into today. But to your point, a lot of organizations don't know what they wanna get out of it. They don't know what an acceptable level of risk for them is.
And, you know, that balance between the amount of, let's say, controls and investment and, let's say, restrictions and limitations that they put in place, versus the security outcome that they hope to gain out of it, is very challenging for a lot of organizations. And like I said, I don't know that we can get into that today, but I totally get where you're coming from. And I do think, to your point, what it sounds like, if I'm hearing you right, is that what you'll get out of these sets of exercises is a pretty good picture of how things are today and what some of the changes you could make would be, and then, you know, you drive towards that and start to build a decision process around that. And I think, you know, one of the things you said about that is it's very customizable. I actually tend to think of the cyber landscape as being one where the technology landscape shifts so quickly within most organizations.
And I tend to work pretty much with digital organizations. I don't tend to work with a lot of physical organizations or physical goods organizations. But most digital organizations are changing the technology that they use so frequently that you have to reassess and reassess, and it does lead to very interesting topics to go learn about and research. And so, you know, never a dull day in the cyber world. I'm curious, just, you know, kind of one follow-up question to wrap up this episode and to wrap up our conversation on red teaming.
Any stories you can share? Have you ever been into a red team scenario where you really didn't find anything? You came out of it going, oh my gosh, this organization is amazing, we weren't able to do anything. Or anywhere you just found something really unexpected and quirky that you're able to talk about, you know, respecting customer confidentiality, obviously.
So this isn't necessarily a specific story, but this is something that we've observed. You say you work with digital organizations. Yeah. So, red teaming when you've got a fully digital, zero-trust organization, where what you've got is not, you know, an infrastructure-as-a-service setup or anything like that. I'm talking about an organization that is a set of software-as-a-service components bolted together by APIs, hosted by various providers with, you know, different terms and conditions and different ownership of assets and what have you. It's really, really complicated to do red teaming in any kind of traditional way. But the thing that we come across over and over again, which is really interesting, is that you do not have anywhere near the capability that you have in traditional architectures to actually detect or respond to anything. When you sort of delve right down into it, you may not be able to identify who's got an active session on a particular system.
You may not have any mechanism to terminate a session when you know it is compromised. You know, that session may exist until expiration. You have no control over it. You may have no logs. You may have no logs and no access to logs.
And, yeah, with these limitations, often the driver for this type of build is cost and responsiveness and productivity, and those security requirements haven't even come into it, and they never do. And then you're in a situation where you're one phishing attack away from disaster. Which, again, comes back to this: we demonstrated it with an organization. We did one phishing attack and landed, by coincidence, on the account of someone that had access to everything, including all the customer data.
Every SaaS tool, and all the data in every SaaS platform they were using. Yeah. We just picked the right person, and we were lucky. It was the first one. And when they did identify it and tried to shut us out, although they had a script that they thought terminated all the sessions, it turns out it didn't actually terminate the sessions that had already been established. It would have stopped us logging in again.
But those sessions were active, active for a full twenty-four hours after that point. They had no way to shut them down. And so we had twenty-four hours, knowing that they had caught us, to download all the data that we wanted. And that's a problem for a lot of organizations, a lot of digital organizations, and it's not one with an easy solution.
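A minimal sketch of that failure mode, assuming self-contained signed session tokens: if validation is purely stateless, a stolen session stays valid until it expires, and "terminate the session" is only possible if the service keeps server-side revocation state. All names and the twenty-four-hour lifetime here are illustrative, not any real provider's design.

```python
# Why "log the attacker out" can be impossible: a stateless token check
# has nothing to revoke. Illustrative names and lifetimes throughout.
import hmac, hashlib, json, time, base64

KEY = b"service-signing-key"
REVOKED: set[str] = set()  # server-side state; without it, revocation fails

def issue(user: str, ttl: int = 24 * 3600) -> str:
    body = json.dumps({"sub": user, "exp": time.time() + ttl, "jti": f"{user}-1"})
    sig = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body.encode()).decode() + "." + sig

def is_valid(token: str) -> bool:
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    if not hmac.compare_digest(sig, hmac.new(KEY, body, hashlib.sha256).hexdigest()):
        return False
    claims = json.loads(body)
    if time.time() > claims["exp"]:
        return False
    # The crucial line: a purely stateless check would stop before this,
    # and the stolen session would live for the full twenty-four hours.
    return claims["jti"] not in REVOKED

token = issue("compromised-user")
print(is_valid(token))             # True: the attacker's session works
REVOKED.add("compromised-user-1")  # incident response revokes the token ID
print(is_valid(token))             # False, but only because we kept state
```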
It's not indeed. And, I mean, to your point, you're so reliant on what the third party providers give you access to. And many of them, for instance, don't have APIs to pull logs off of those systems or, to your point, to initiate the termination of a session. You know, even if you had, let's say, a valid set of admin credentials to a third party SaaS platform, there may be no way for you to go in organizationally and say, log Gemma out, or stop Gemma from downloading data. So you're very dependent on those third party systems.
And it's one of those things where, as somebody who runs a SaaS company, I can tell you SOC 2 doesn't cover this. GDPR doesn't cover this. ISO 27001 doesn't cover this. So we've had our systems tested, and, you know, I think we may have the same kind of capability challenges around this space as many of the other SaaS providers that everybody's using on a day to day basis. And I think very few of them actually have the ability.
You know, maybe Microsoft and Google, with 365 and Google Workspace, have this ability, but very few that I know of do. So that's an interesting challenge and an interesting observation. Well, Gemma, this has been a fascinating conversation. I've really appreciated it. We have kind of come up against time for today's episode.
To close things out, if you wanna share a little bit about where people can find your work, if they wanna get in touch with you, what's the best place for them to go check? Yep. So Cyberis.com is our website. I often post on the blog there. So, there's lots of my waffling about red teaming on the blog there if anyone's interested.
And you can also find me on LinkedIn. Fantastic. And we'll have Cyberis and the blog and the LinkedIn profile, as well as Gemma's book that she coauthored, linked from today's show notes. If you have anybody that you'd like to see on the show, we do have some guest slots opening up in the coming months, so please feel free to refer them over to us here at Modern Cyber. Gemma Moore, thank you so much for taking the time to join us today.
Thanks, Jeremy. It's been a pleasure. For me as well. Bye bye.