
Navigating New-Age Crisis Situations with a People-First Lens [Security Sandbox Podcast]

McKenna Brown


In today's episode, host Amanda Fennell chats with Nic Reys, partner at Control Risks, who heads up their global cyber and online threat intelligence practice, and with Relativity's intel-focused security architect, Darian Lewis, on how best to navigate the cyber crises going on in our world.

Transcript

Amanda Fennell: Welcome to Security Sandbox. I'm Amanda Fennell, chief security officer and chief information officer at Relativity, where we help the legal and compliance world solve complex data problems securely. And that takes a lot of creativity. One of the best things about a sandbox is you can explore and try anything. When good tech meets well-trained, empowered employees, your business is more secure. This season, we're exploring ways to elevate the strongest link in your security chain—people—through creative use of technology, process and training. Grab your shovel and let's dig in. In today's episode, our sandbox heads to the pressure chamber for a riveting conversation with Nic Reys, partner at Control Risks, who heads up their global cyber and online threat intelligence practice, and with Relativity's favorite intel-focused security architect and recurring guest, Darian Lewis, on how best to navigate the cyber crises that are going on in our world. How does threat intel play a pivotal role in your crisis program? What new threats should your team prioritize? So dust off your playbooks, grab a Red Bull, and let's dive in.

Welcome, gentlemen. All right. So we have a lot of threat intel on the call today that is ready to throw some weight around. I am going to start with a great intel-focused question. How do you stay adaptable to react to the unexpected? Which seems to be happening more and more these days. I don't think we expected a pandemic or Ukraine or lots of things. Maybe we did. I'm talking to intel people. It's probably a bad point. You probably both expected it, but how do you stay adaptable?

Nicolas Reys: It's a good question. I'm sure Darian will have a lot to say about that. I think it's largely about doing your horizon scanning properly. I think the biggest difficulty that we see today—and particularly in the threat intel space where maybe historically we had less to focus on because the attack surfaces that we were dealing with were smaller, the number of actors was smaller, and the impact was felt less broadly—I think the fact that interconnectivity across the enterprise and across countries has just grown so massively, it's made the job a lot—maybe not harder. It's a lot more interesting, I'd say. But it's definitely a bigger piece of the puzzle that we have to tackle. I think the how is really about challenging—I think as an intel analyst and in an intel function, you need to have dissenting voices around the table, and you need to be able to sort of challenge what may be a recurring bias that we all have. You know, if you've been in the field for a long time, we kind of feel like we've seen a lot of the same vectors and we've seen a lot of the same ways actors come around. And then something just changes the nature of the game, and we all turn around like, how could we not have seen this coming? One of the things that I certainly like to do with my team is kind of ask—particularly the people that may be newer to the field—to really come up with some blue-sky thinking around, what is going to happen in the next two years that you don't think we've been talking about? And then to sort of systematically go back through these—you need to have the time to do that. It's not a luxury that everybody has. And I think that, at its heart, should be one of the objectives of a strategic intel function in particular: to really push the dial very, very far so that the guys who are on the front lines and the people who are dealing with keeping the lights on don't have to worry or think about that and can action what comes out of this kind of thought process. It's nebulous, but, you know...

AF: Well, I mean, speaking of a dissenting voice on the call, Darian, what are your thoughts on this one? How are we staying adaptable?

Darian Lewis: Well, I mean, we're having to monitor far more than we ever have in the past. So we have to look in all the ugly places, the deep and dark web places, for chatter about all the stuff going on—products, services, as well as our vendors—and once you have a vendor list of over 100, it's just a nightmare to keep on top of everything. We have all the vulnerability stuff going on right now in the world, and I think even some of the things that we do as security professionals have been contributing to the problem. The rapid rise in bug bounty programs, for example, is bringing vulnerabilities to the surface a lot sooner. And they're all coming with proof-of-concept code, so it's easier and easier for the bad guys to implement very quickly. So even though we're doing the right things for the right reasons, it still has consequences. And because we're a software company, we also have to look through our own code repos to make sure there's no accidental disclosure of anything, no private information, no keys, the usual stuff, in there. So a lot of it is keeping your ear to the ground.
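
For readers who want to picture that last check, here is a minimal, illustrative sketch of sweeping a local clone of a repository for strings that look like accidentally committed secrets. The patterns and function names are hypothetical examples for this post, not Relativity's actual tooling; production programs typically rely on purpose-built secret scanners with verified detectors, wired into pre-commit hooks or CI.

```python
# Minimal, illustrative sketch: sweep a local repo clone for strings that
# look like accidentally committed secrets. Patterns are examples only;
# real programs typically use dedicated secret scanners.
import re
from pathlib import Path

# A few common-looking credential shapes (illustrative, not exhaustive).
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)\b(?:api|secret)[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{20,}['\"]"
    ),
}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (file path, line number, pattern name) for each suspicious match."""
    findings = []
    for path in Path(root).rglob("*"):
        # Skip directories and anything inside .git metadata.
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for file_path, lineno, name in scan_repo("."):
        print(f"{file_path}:{lineno}: possible {name}")
```

Running a sweep like this in CI means a leaked key gets flagged before it ever lands in a shared branch, which is the cheap end of the "keeping your ear to the ground" work Darian describes.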

AF: But you brought up the landscape word. I think we're going to have a buzzer every time somebody says landscape on this call. But I will say, for today's...

DL: Is it a drinking word?

AF: It could be. It could be, Darian, if you need it to be. It's going to be coffee central. I guess in today's current threat landscape, there are a lot of different actors and vectors. Nic, you mentioned vectors earlier. They're changing, and they're doing things such that we can't necessarily use their tactics to determine who it is for attribution and so on. How would we prioritize this in a crisis program? Like, I'm not, you know, necessarily based in Ukraine, so it's not my No. 1 prioritization, but that doesn't mean I'm not thinking about how it would affect people I have who are contractors there or nearby. So that prioritization factor—how does that come in?

NR: The prioritization piece should be—and it's always a bit challenging 'cause we're an industry that's been brought up on standards, and it's still standards that drive a lot of the thinking in risk management and in tech, and they are really important. But prioritization shouldn't be a generic map; it should be very bespoke to our own sort of infrastructure or assets or business. I think, you know, for us, yeah, we're not operating in Ukraine, but we may very well have critical suppliers or clients that are in Ukraine or connected to Ukraine in one way or another. So I think this sort of—if you look at the current threat landscape, I think what worries me the most and what really will, I think, keep all of us up at night in the next sort of three to 10 years is going to be the connectivity that we have and the ingress and egress points that we have across our networks that we just don't know about. And it's that supply chain vector. We saw this, you know, over the past few years: more and more supply chain or software update poisoning as a way in. It's way more difficult to deal with. It's also almost undetectable because it exploits the fundamental model of trust that we've built with our suppliers and our providers. There are things that are coming in the mitigation space that are going to be really useful against that. But, you know, will we ever do away with phishing? No, it'll remain there. But that's ingrained in our memory, both as security professionals and I think even in the user base. You know, people get it more and more. It's that thing that we can't point to. We're never going to tell our users and we're never going to tell our enterprise, don't update your hardware, don't update your software. We're going to have to deal with that in a different way. And I think that's, from a vector perspective, that's certainly what worries me deeply. From an actor perspective, I mean, take your pick. It's remarkable the number of players that are out there. And you've got the ones making the headlines. But if you think about any state intelligence agency today, all of them have programs. And if they're not developing programs, they're buying them. And it's offensive cyber. It's disruptive capabilities. It's espionage capabilities. The question is, based on who you are, who's going to come after you, or who's going to come after something that's in your chain that you need to be worried about?

AF: But—so it's the role of threat intel to forewarn us in these situations, right? And we spend a lot on threat intel, maybe to different degrees at different places, but certainly we do here, with our focus on it and making sure it's implemented in process and everything. Why can't we get ahead of things still? Why aren't we more predictive?

NR: Yeah. It's my job as well, so I probably should have a clear answer to that. I think that the challenge is, I don't know, Darian, how you feel about this, but I've always felt that threat intel in cyber has often been given a bad rap 'cause we sort of assumed that it would be predictive almost in a quantitative way. Like, we're going to know exactly what's going to happen, when and where. It is fundamentally behavioral science. You know, we're looking at the psychology of who's behind this. We're looking at and predicting human behavior. If we could do that, I think we would be doing very different things with our lives. But I think the reason why we can't today—and we actually do on many occasions; we just hear about the ones where we don't get in front of it—is because of that increasing complexity. And if landscape is a buzzword, I'm sure complexity should probably be also a buzzword. But, you know, we have the fact that there are more and more players out there. There's more and more tech out there. There's more and more exploitation. And Darian's point on the pace of vulnerability identification is a really good one. But look at the—you know, 20 years ago when we were looking at, I don't know, the targeting of hardware, you had two or three big producers that you had to worry about. Now you have hundreds of them. The rise of the OSINT—of the open-source sort of industry—has been tremendous from a usability and a flexibility perspective, but it just opens more doorways. So I think, you know, it's kind of like law enforcement. You only hear about it when it doesn't really work. Part of it is it does work, and we do get ahead of a lot of things. And then for the things that we don't get ahead of, the role of intel should really be about reacting quickly enough and being able to, when it does happen, contextualize and provide the actionability to people who are going to have to deal with the actual problem.

AF: Come on, buddy. How could you not have a lot to say after that one? You got cued up.

DL: Oh, I know. I have...

AF: Oh, no, he's ready. He's like a...

DL: I've got a couple of things to say. So I'm going to start with the first question about the current threat landscape and things to focus on. So vulnerability management right now is paramount. Those supply chain attacks have been the primary mechanism for exploiting vulnerabilities because they get a massive payout for a small amount of work. Compromising multiple organizations is a much more appealing way of going about things than trying to do it one at a time. People also tend to pay a lot less attention to cybercrime than they do to nation-state actors, right? So we always focus on these complicated, ridiculous things that are in our head that we think are the actual problem, when really it's 500 little ankle biters coming at you from different angles, trying to make a quick buck, that end up stinging you in the end. The evidence just points to an attack being far more likely to be cybercrime than, you know, a nation-state actor, which has really specific goals in mind, and rarely is it just to make money. It's usually disruptive or cyber espionage. As far as the role to forewarn, I mean, we're usually ahead by a couple of days. Now, a lot of that has been compressed tremendously by the things that have happened in the vulnerability space in particular. You used to find a vulnerability, and then you'd have weeks to months to kind of deal with the issue. Now you've got minutes, because they're scanning for it five minutes after it's released to the public. And within an hour, they're trying to exploit it. And having that PoC code available really makes that possible. And it's painful.
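
To make that compressed disclosure-to-exploitation window concrete, here is a minimal sketch of the kind of automated watch it implies: pull CVEs published in the last 24 hours and flag any that mention products you care about. It assumes the public NVD 2.0 REST API; the parameter names, response fields, and rate limits should be checked against the current NVD documentation, and the watchlist keywords are purely illustrative.

```python
# Minimal, illustrative sketch: pull CVEs published in the last 24 hours
# from the public NVD 2.0 API and flag any whose description mentions a
# product on a (purely illustrative) watchlist. Check parameter names,
# response fields, and rate limits against current NVD documentation.
from datetime import datetime, timedelta, timezone

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
WATCHLIST = ["exchange", "openssl", "log4j"]  # hypothetical keywords

def recent_cves(hours: int = 24) -> list[dict]:
    """Fetch CVE records published within the last `hours` from NVD."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    params = {
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

def triage(items: list[dict]) -> list[str]:
    """Return CVE IDs whose English description mentions a watchlist keyword."""
    hits = []
    for item in items:
        cve = item.get("cve", {})
        descriptions = " ".join(
            d.get("value", "")
            for d in cve.get("descriptions", [])
            if d.get("lang") == "en"
        ).lower()
        if any(keyword in descriptions for keyword in WATCHLIST):
            hits.append(cve.get("id", "UNKNOWN"))
    return hits

if __name__ == "__main__":
    print(triage(recent_cves()))
```

Matching keywords in descriptions is deliberately crude; real pipelines match on vendor and product identifiers (CPEs) and push hits straight into ticketing. But even this level of automation buys back some of the minutes Darian is talking about.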

AF: I get asked a lot—how do you know your threat intel program is good? And my answer is because Darian says so. But if we can't...

DL: That's not a good answer.

AF: I know it's not. But if you can't measure it because we're not necessarily getting in front of things as much as we would all love, what is the measurement of a good program?

NR: I think, for me, the times where I've turned around, either in the work that we do or in what I've seen other organizations do, and been like, wow, that's impressive, it isn't about, you know, did you know beforehand? Because I think, to Darian's point, there are a lot of things that we do know beforehand, and it's great. And if you ever find an intel function or provider that knows everything 100 percent of the time beforehand, then, you know, I'll shut up. But if you don't, I think there are two metrics. The first one is, how quickly, once you issue a request for intel when something happens, do you get actionable info out of your intel function? So how quickly can you—to Darian's point—if there's a new vulnerability out there, how quickly do I know: is it being exploited? Do we have the right remediation in place? Are we happy with things? And then the second element to that is, how comfortable are you with the breadth of coverage of that intel function? And I think that's one of the big challenges in our field of cyberthreat intel: you know, we are getting questions that are about geopolitics. We're getting questions that are about regulations. We're getting questions that are about, you know, with the sanctions issues on the back of the situation in Ukraine right now, we've had intel functions being asked, hey, can we keep using this? We're not lawyers. So it's about the ability of that intel function to respond quickly. And fundamentally, as an intel analyst, your job is to be really good at identifying sources of information and contextualizing it and disseminating that to your audience. And if you can do that with enough breadth that you can cover the major questions that are going to come your way, I think you're in really good shape.

AF: Go for it, Darian.

DL: You need to understand what's going on at multiple different levels at the same time. So at the geopolitical level, where you know who the actors are, you have to keep your ear to the ground on that. You also have to be looking in the space that you're in to see what's attacking everybody else, so that you know how they do it. The only thing I would add to what he has said is that I don't think of threat intelligence as a bolt-on to the outside of your organization; it really has to have roots that reach into every aspect of your organization, from crisis management and communication to compliance people to vulnerability management, because we're a software dev company, right down to the developers, plus all the cyber stuff on top of that. And so you have to be an excellent communicator, and you have to be able to have those communications with everyone from people who don't know what a computer is to people who are going to talk to you about which machine language instructions are actually causing the problem. And you have to be able to just move between those people seamlessly, even in the same conversation sometimes. And so those are—I think those are kind of my big points, right? So if your threat intel program isn't integrated and you're treating it as, here's a group of people who sit in a room by themselves and dish out some candy and reports to you every once in a while, you're not doing it right. And so the way I kind of measure our success and how we're doing is: do people come to you for the answers? And when you give them the answers, are they the right ones? And how often are we getting it right? So I think we're doing a fairly good job there.

AF: I think there's a way you train your teams to do this, though, without having to outright train them. Like, you can put people through training all day and say, this is important and do this. But there are a couple of things that make people feel the pain of, like, you'd better be doing this. One of them is that, whether we had a team of 10 people back in the day or 70 now, every time something got escalated as a potential incident, I would ask the same two questions. Darian knows them well. I think he has them tattooed on his arm. No. 1: What is attribution? He hates that question because there's no way you know that when you first escalate an incident, but I still ask it. And No. 2: Was it targeted? You don't know either of those things immediately, but the fact that I always ask them as a CISO scares the crap out of everybody, so they immediately turn around and talk to intel. So then they learned, we're not going to go to Amanda until we talk to intel, which makes them go there first. That's the great one. My personal favorite—we had somebody who was an analyst early on in their career and was trying to learn threat intel, and they came back and couldn't figure something out with an IP. They couldn't tell me attribution. They couldn't tell me anything about it. And they said there was no information out there. And Darian will remember this fondly...

DL: Google.

AF: I said, give me a moment. And I went and did something on my own, looked some stuff up, and then correlated it with something that I knew was potentially happening. And I came back and said, that's our pen testers. You really should have found that easier and faster. It's really obvious. They're literally in the building next door. Like, you could have found that if you had just put a little bit of thought behind, like, who's behind it and stuff. But the whole point was that I asked the questions—attribution, and is it targeted? And even if you can't answer those yet, you should at least be asking them. And it started to train the team. So it was super, super helpful.

NR: I mean, it's probably the most terrifying moment as an intel person, when you say, no, we can't find any information about this, and you get—wait a second.

AF: Give me a moment.

NR: Yeah. Give me a moment to go and Google. But I think, Amanda, it's a really good—these two questions, I think, are the embodiment of the challenge that we've often seen in crisis response and incident response in cyber. Because it's a technical discipline and it's usually led from a purely technical standpoint, it tends—people tend to think about it in a binary way. And you see this all the time. And the benefit of intel is, it's not binary. It's not black and white. We may not have the answer yet, but we'll get it eventually. Or when we look at attribution, I mean, we saw this with the rise of sort of dual extortion by criminals, which is fundamentally, hey, we're doing data leaks as much as we're going to extort you through ransomware, and we don't know who's behind it. And it's just about the thought-provoking nature of having these questions out there and challenging the people who are very convinced they know exactly what's happening.

AF: You mentioned earlier responding whenever we see something come in, you know? So in light of recent cyber crises, do you think it's better to move fast with some information or move slow with all of the information? To act or not act? And how could threat intel play a part in that?

NR: Yeah, I don't think you have a choice anymore. I'm based in Europe, but this is starting to happen everywhere. We now have regulations that dictate how quickly we actually have to move in some areas where, you know, you've got 72 hours to notify of a potential breach of PII, or you're going to run the risk of this sort of spreading out of control. I think the notion of “let me wait for the perfect amount of information” is just unrealistic and will leave you static. I think the inverse to that, rushing to make a communication, rushing to escalate, rushing without at least a sufficient degree of information to say “this is bad, but here are the measures that we're taking,” is equally bad. There was a breach in the U.K. years ago which I still think is a really good example of what not to do when a crisis happens. It was a telecommunications firm that had a breach. They didn't know the extent of it. The CEO was on the 8 o'clock news that evening saying, we've been breached. We don't know how many customers are impacted. We don't actually know what happened. But we'll keep you updated. And the panic was just like—that's the thing you don't do. But I think—sorry, Darian.

DL: I want to ask you something about that. So how do you deal with the flail? Because as soon as it hits the news...

NR: Oh, yeah.

DL: ...Everyone from the board to the admins is going to be asking you questions. So how do you deal, or how would you recommend people deal, with the flail that inevitably happens, where there's this drive to do something and it's going to be insane? It's going to be something like, block every IP, and I'm just waiting for someone to recommend it so that I can go, “oh, please don't do that.” So what are your recommendations there?

NR: Yeah, I think in this scenario where, say, for instance, it's out there before you even had the chance to interact, proactive comms is critical, and I think it's very much about—it's the same way as we think about it from a technical perspective. The first thing you need to understand is what the impact is. And you'll usually have a tiering. Once it hits the news, is this going to be reputational? Am I going to get regulators behind me? Am I potentially going to have, you know, my clients leaving me? And then sort of proactively, or reactively in that case, reaching out to these folks, saying, this is how much we know and here are the two or three measures that we're taking. I mean, blocking all IPs—there were certain government departments that, for instance, just cut the internet off when there was a breach: let's revert back to pen and paper. You can do that when you're a government. I don't think many private sector companies can actually do that. But I think the idea is, if it's hit the news, be candid.

DL: So how does that change—because there are two different situations that we see right now, right? One is that it's, like, a vulnerability and, you know, it gets announced publicly and everything's going through great channels. And so you kind of see it coming. You've got some time to get your thoughts together. And then there's the ugly side of things where your crap shows up on a ransomware, you know, ransom site. And now the whole world knows you've been had. So how do you deal with those two situations? Because, I mean, we're good with our crisis management. We have templates for everything. We know who to tell what to. But how do you approach, and how would you recommend other people approach, those two situations, which are vastly different?

NR: Yeah, I think—and both of them are potentially equally damaging if you don't do it the right way. Because I think your customers or your stakeholders can turn around like, everyone knew about this weeks ago. You've had weeks to deal with this. Take the Exchange issues that we had a couple of years back: you know, if you weren't the first impacted, you've had time to patch. We see this a lot with legacy stuff where it's like, come on, guys, it's been years. You know this is sitting there. How on Earth do you let it still be breached or hit? I think in that scenario, it's really about sort of when the news breaks—and this is where intel is in the driving seat. Intel should be saying, guys, this is out there, it's coming towards us. Action it. And the role of an intel analyst is also to pester everybody and continue following up and saying to a CISO or to a CSO, to whomever, have we patched this? Because this is still happening. It's still coming. And I think if you don't do that, that's when you're going to have an issue, because once it hits you, it's really about: you haven't done the bare minimum of what's expected of a security function, which is to patch and deal with what is a known known. We know it's a vulnerability. We know it's being exploited. We have the time to react to it. There are scenarios where, maybe because it's legacy high-availability servers that you can't really turn off, then you need to contain and you need to put the mitigation measures in place. And if you can't, you better have a really good crisis communication plan, because that's going to hurt. The other question that you had, Darian, and the other scenario of—we saw this a lot with AWS buckets sort of being left open and look at all the stuff that's out there. The dark web scenario of a researcher with a good Twitter account sort of blasting your name out and saying, oh, we found data about you, that one's really about rapid response. And I think intel's role in that scenario is exactly the two questions that Amanda’s put—are we the only ones hit by this? Can we hide amongst the bushes in the forest? And who's behind this? Because what is actually at stake here is—it's not potentially just about the data breach, because the data breach happened. It's out there. What else is on the network? What is actually sitting there? Is there still something open? And if we do anything, it might keep going. And ultimately, how do we communicate about this? But I think in both these scenarios, once you're hit, a lot of the battle is communication.

AF: It's such a segue to one of the questions I was going to ask specifically, which is about the communication gaps that take place during a crisis and how to get in front of that or deal with it. And I like what you said earlier about the—if it hits the news, be candid and be clear on what's being done. I've done some analysis before of the major breaches and the companies that were able to rebound from them. And it essentially comes down to: were they quick and clear? Those were the two major things. Is that the only advice you would give about a communication gap that takes place, or are there any other ones?

NR: I think a lot of it is whether people are ready for it. Like, who's actually going on the news? You know, in some companies the CISO should probably be the one talking, but you've got a lot of CEOs that are going to be, well, you know, should I be talking? I think the biggest gap that we have is in preparedness. I think we're really good usually, especially security departments, at doing the crisis management exercises that bring our guys and gals into really challenging situations. But do we ever exercise what it's like to be in front of a camera when CNN calls and says, “hey, what's happened here?” That's a major gap. I think the other gap is knowing what the impact of the communication is going to be, because I think it's also really difficult to define who you're communicating to. Are you talking to your shareholders? Are you talking to your clients? Are you talking to your employees when you're doing this? And all of these require different types of comms. So a lot of communication plans are, this is the narrative, but actually, there might be three or four different narratives in that communication. We often forget about the employees, but, you know, we've seen a number of cases where it's the employees, by not understanding what was going on, that made the situation worse, because they themselves went on Twitter and said, oh, well, nothing's working. I think, that's public, but that's how the Sony breach was first reported. It was an employee on Reddit that sort of went, guys, I have this on my screen. And, you know, part of this was timeliness, but also, you need to tell everyone. And it's a different message at times.

AF: Oh, Lazarus, who doesn't like to talk about that one?

NR: (Laughter).

AF: For the intel community, it's like the holy grail of...

NR: Oh, yeah.

AF: … Conversations. But since I've got you, let me use this opportunity for something else I came across in my analysis of post-breach activity, what companies were doing, and shamelessly exploit you for my job security. Should people look to fire security officers when there's a breach, or is there more to it? Because I've seen some companies that just go right for that, and I've always thought, that's odd. They don't do this alone, you know?

NR: It's a good scapegoat if you've got nothing else to do. But yeah, I think we've had a number of prominent CISOs and CSOs that went under because of a breach or crisis. And I think the short answer is no: if that is the official answer to a crisis, it's the wrong one. There are instances...

AF: I was going to say, you missed something. If that was your scapegoat, you may have missed something, but yeah.

NR: (Laughter) Yeah, exactly. And it's also, I mean, to your point, they're not alone, certainly. And it's very rare that it is specifically the CISO's fault or the CSO's fault. You know, do you fire your—you know, I can't even think of a parallel because it just seems completely the wrong decision. I'm sure there are people who can point to, well, in this case, it was the CISO that did something really wrong. But fundamentally, I think the question should increasingly become, you know, when you look at the postmortem after these breaches, 90 percent of the time it's either a lack of resources provided to security, whether human resources or technology resources, to actually be able to handle the entire enterprise; or it's the wrong culture. And I think if the answer is the culture is wrong, then there is a discussion about the leadership of the culture. But it certainly should not be the kneejerk reaction of, OK, well, we've been breached. Hey, public, the answer is we got rid of our CISO. We're fine now.

AF: Well, there's—oh, go ahead, Darian.

DL: I was just going to say, it's just never been a true statement. You're not OK, just like you can't say, well, you know, we've got this compliance document; we must be secure. You don't fire your insurance company because the hurricane came through and ripped your roof off—not their fault.

AF: I live in New Orleans. That could not be accurate. I might fire—(laughter). It's hurricane season.

DL: You might fire them for not paying, but you're not going to fire them because of the storm. And so...

AF: Yes. Yeah, yeah.

DL: ...It's just—it's a crazy response. I mean, what is the life cycle of a CISO now—two years?

AF: It was 18 months when I started five years ago. I don't know where I'm at, so I'm a ticking time bomb waiting to go probably. But I'm super happy, so we're staying with it. I'm having such a hard time covering the fact that I had so much more in crisis that I wanted us to talk through. So there may be an Episode 2 of this one. But I do have a few more questions I definitely want to make sure I get to. One of them—I'm going to give Nic an opportunity to sanitize the heck out of something. Can you give a recent crisis scenario that you have dealt with and how the response went?

NR: Yeah. I mean, we can talk about—to Darian's point, it's the 500 little ants and it's the criminal threat that's really the most...

DL: Ankle biters. Ankle biters.

NR: Ankle biter, that's right. That's right. Sorry, Darian. You know, we were recently deployed on one of the major ransomware groups, sort of combined ransomware and alleged data leak. And what I think was interesting—so this was a retail company operating in Europe primarily, or headquartered in Europe, but with global operations. And, you know, one of the really interesting questions that we had was they received the ransom note from that group in the typical fashion of, here's an email to infosec management at, you know, the contact that you find on your website. It used to be on WHOIS records. It's not anymore, but—and the email alleged a data breach. I think this is one of the—again, one of the big questions that we're getting more and more is—there's also a lot of pretense out there, and there are a lot of groups that will say they have data when they don't have data. And how do you deal with that uncertainty? In the work that we did with the organization's security function, there was an operation at two levels. There was an immediate technical investigation: hey, can we actually determine whether or not something had been taken? And, as Darian will know much better than I can, executives tend to think that that's an easy question to answer. It ain't always. It rarely is. So they were getting quite a bit of pressure from the C-suite of, right, when can you tell us? Is it true? Is it not true? And then the second big question was, do we negotiate? Do we go in and interact? The ransomware component is interesting. And, Darian, I don't know what you're thinking nowadays, but more and more of the organizations we deal with, with the significant exception of industrial control system or OT networks, are much better at dealing with ransomware as an operational issue, sort of—there's better mitigation. There's better tech for containment. There's better remediation, and hopefully they have better backups these days. I don't know anymore.

DL: Are you sitting down?

NR: Yeah (laughter).

DL: 'Cause the bad news I have for you is a statistic I think I posted just this week: two-thirds of companies that were hit with ransomware last year paid.

NR: Yeah, so this is...

DL: Sixty-seven percent.

NR: So the big thing that I'm unclear about is, do people pay because of the threat of data extortion, or do they pay to recoup operations? 'Cause I can't fathom that we're still in a place where, you know, we don't have enough resilience to say, OK, well, we can contain this. But you're probably right. I mean, it's still a major hurdle. And the second big thing is the data leak extortion. And between the time—I mean, you're under pressure because, obviously, you've got a crisis. And this particular organization also had a regulatory obligation to sort of go there. And they had a 48-hour deadline from the alleged extortionists before the data would be released on the group's own dark web site. And what was really telling was the level of chaos in that crisis management team (CMT). A lot of our job was sort of trying to get everybody aligned: hey, OK, GC, yes, there are clearly some regulatory implications there, but we have a shareholder implication. We have an operational implication in engaging in the negotiations. We had one of the executives who was very clear that we should openly interact with this group and say, yes, we've been hit, we understand, and we would like to pay. We had another one who said, no, we shouldn't talk to them at all about it. And so I think a lot of this—your playbooks can prepare you for a lot. Your crisis exercising can prepare you for a lot. But the reality is, if you don't sit these people down and confront them with these situations very regularly and push them and challenge them, it's going to be very difficult on the day.

AF: So there's—OK, there's another thing here that I have to ask 'cause you didn't mention it specifically, but it'll be my last question, I promise. How do you really focus on leveraging some type of tech that would help you with the people and the process portion of the crisis? Where's the tech help there?

NR: I mean, there's a huge tech component, obviously, in the operational response to the crisis. I think in terms of the crisis management, there is a lot of technology that's helpful in terms of coordination of the response. And I think surfacing information at regular intervals is really, really critical. During CMTs, we often see a fairly isolated team of executives who are sort of almost hermetic to any outside perturbations. But as we know, the technical forensics or the intel itself may need to get into that crisis room, and so having the right communication platforms matters. I think the other element that we often find quite surprising is—you know, technically, your network’s compromised, we're all working from home, and we still exchange a lot of info via email. We exchange a lot on potentially compromised networks. So having technology to provide redundancy in communication, and potentially segmenting communication, can be really, really useful. I think the role of tech is that it should ease the decision-making process. It's exactly like intel. It's not about press a button and we know what we're going to do, or press a button and then everything's contained. Tech should enable it, whether it's, who do we communicate to? How do we communicate it? There are tons of mass communication tools out there that do a really good job. But it's not going to replace the message we put in it.

AF: Nic, it has been a pleasure having you. I'm so excited we got to spend more time on this, especially on the air for once.

NR: Thank you so much for having me, guys.

AF: Darian, I couldn't imagine having this conversation without you. Thanks for joining once again.

DL: Oh, thanks, Amanda. I really enjoy these talks. I wish they lasted four hours, and then we could get to everything.

(LAUGHTER)

AF: Thanks for digging into these topics with us today. We hope you got some valuable insights from the episode. Please share your comments. Give us a rating. We'd love to hear from you.



McKenna Brown is a member of the marketing team at Relativity, specializing in content development.
