Kirstin Burke:
Welcome! Our topic this month is an interesting one. We’re talking about what a lot of other people are talking about right now, which is AI. And I think you’ll find in this segment that we look at the world a little bit differently. The title for this is Human and AI Synergies, and if you read the description, you’ll see that we use a different word in that acronym. The way we look at the world, AI really is augmented intelligence, not artificial. And I think that framing shapes a lot of what comes after, right? Where are those synergies? Where are they not? So where I’d like to start, Shahin, is: why not artificial? Why do we think of AI as augmented?
Shahin Pirooz:
I’m a stickler for definitions, so first off, let’s start with that. This has been a long-time battle for me, starting back before we had generative AI, when we were still dealing with very early, nascent machine learning and deep learning models that really were just if-then-else paths.
And that’s when the world started labeling things as AI in the sense of artificial intelligence. The problem I have with that term is that it implies an intelligence has been artificially created. The reality is that these systems aren’t intelligent. They’re following rule sets. They take the information that comes at them and walk through decision trees to figure out how to respond to a thing.
We can argue that that’s all our brains do, but our brains interpret the inputs coming at us. We process them based on our experiences: the things we’ve seen that match that pattern, the behaviors we’ve had in relation to those patterns, how we reacted to them, whether the experience was positive or negative. To get to the point where we can call something actual artificial intelligence, it has to be able to take that kind of human experience and apply it to the knowledge it’s processing, and I think that’s missing in what we today call artificial intelligence.
I like the term augmented intelligence because these systems are designed to augment what we do. They help make us better. They accelerate what we do and let us do things faster. But they’re an augmentation, not a replacement or displacement of human intelligence.
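[Editor’s note: the if-then-else rule processing Shahin describes can be sketched in a few lines. This is an illustrative toy, not any vendor’s actual engine; the rule names and thresholds are invented.]

```python
# A "detection" of the kind described above: a fixed decision tree over
# inputs. It follows rules; there is no intelligence, artificial or otherwise.
# All field names and thresholds here are hypothetical.

def classify_email(sender_reputation: float, has_attachment: bool,
                   url_on_blocklist: bool) -> str:
    """Walk a fixed if-then-else path and return a verdict."""
    if url_on_blocklist:
        return "block"            # known-bad URL: hard rule
    if has_attachment and sender_reputation < 0.3:
        return "quarantine"       # risky combination: another hard rule
    if sender_reputation < 0.5:
        return "flag_for_review"  # uncertain: hand off to a human
    return "allow"

print(classify_email(0.2, True, False))  # quarantine
```

However many branches you add, the system only ever does what the tree already encodes, which is the heart of the augmented-versus-artificial distinction above.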
Kirstin:
Well, and I think you see that across all of the industries and organizations that are adding AI or machine-like features to their products and tool sets, right? Take a sales app that helps a salesperson collate information from a CRM and translate it into action items. You don’t replace the salesperson that way, because you still need the person with the relationships, the intellect, and the context to make that information meaningful. The tool has taken a bunch of their work and put it into a package that makes what they need to do easier, faster, more efficient.
Let’s talk about how we apply that to cybersecurity, because we’re seeing a lot and hearing a lot. There are a lot of tool vendors out there now, same as in any industry, talking about how they’ve AI-enabled what they’re doing. Where do you see the benefits and challenges of where we are right now?
Shahin:
The short answer is it’s not very different from the sales example you gave. One of the most promising AI-enabled systems I’ve seen recently is Purple AI, which SentinelOne just released in early beta, or early testing, if you will. It was very promising on paper and very promising in demos. But when you actually get into it, you realize it’s a great generative AI tool using multiple large language models on the back end. It allows an analyst to interact with it in a typical question-and-answer model, just like ChatGPT, and quickly get to threat data.
So the idea is that it can accelerate threat intelligence, threat investigations, threat hunting. But in reality, it shines a light on that whole prompt-engineering trend that’s popping up all over LinkedIn: take our prompt engineering class, learn how to prompt ChatGPT better than anybody else, get the answers you want. It’s like going back in time to when Yahoo was built on a university website. I remember it was something like “Tilly Yahoo.” We were so excited about it. We would ask questions and get wacky results until we learned how to ask the right questions. And until you learned the way Yahoo-
Kirstin:
The language it spoke.
Shahin:
Exactly, the way it wanted to be asked. Then Google came out and crushed the world of search, and same thing: you had to become a Google prompt master to get the proper search results you wanted. The order you asked things in mattered. Fast forward to generative AI today, and it’s the same kind of leap, from a text-based web, to Google searching images, text, everything, to now having human-like interactions with a search bot. That’s effectively what ChatGPT has become, with more capabilities to do things for you based on the knowledge that’s on the internet.
So the same thing applies when we’re talking about all these tools in the context of security. We have to learn, and teach, how to prompt. At what point do we start turning our very experienced, seasoned analysts from this white-collar role of savvy security people who know how to find threats and anomalies into prompt engineers who are simply prompting an AI to do the threat hunt for them? I don’t think it ever really becomes that, because of what’s missing. I recently read a book on this, Supercommunicators. Great book, go check it out. It talks about the evolution of human interaction and communication, what makes a great communicator and what doesn’t, and how over centuries and millennia we’ve learned to read facial expressions: the little ticks people have, which way their eyes look, all the small things that carry part of the communication. How is a computer prompt going to read any of that in an interaction?

Now take that and apply it to data. There are ticks in data, too. Those ticks are, we talked about this maybe last year, the intercyber dust, the noise in the network that we try to find threats and anomalies in.
And those ticks are what analysts look for: things that are just maybe something. We don’t know for sure, but it could be. So we dig deep and hunt and look: is there something else there? What made me feel uncomfortable about this thing? Is it possible to train an AI to find those ticks? Yes, but we would get so many false positives that the threat hunting-
Kirstin:
There’s still someone who would have to-
Shahin:
Evaluate the results, yeah. If you look at what AIs, or machine learning, or whatever you want to call this thing, augmented, artificial, machine learning, are ultimately good for, at the end of the day it’s the ability to apply rule sets to data and information and turn the bits and bytes into something meaningful that we can take advantage of. What it’s really useful for is pattern recognition and pattern implementation, and by that I mean automation. What’s the problem with pattern recognition? In security, we moved away from pattern recognition because it’s really easy for a bad actor to avoid detection if they know what pattern we’re looking for.
Examples of this: we moved away from an endpoint security model of definitions and signatures, and an email security model of definitions, signatures, specific images, specific URLs, all of that. The bad actors said, I know what they’re looking for, so let’s stop doing those things. We’re going to create a completely fileless attack, so there’s no file to check for malware. Now we have to shift to behaviors. A lot of security tools, and I’ll say incorrectly, shifted to a pattern of behavior detection: as long as the bad actor does A, B, C, D, E, F, G, that’s a bad thing, we’re going to stop it. But what happens if they do C, D, A, B, G? Is that bad? It’s still all the bad parts. And that’s where a human comes in. The human says, all of the moving parts happened, they just didn’t happen in the normal order. Are they doing something they shouldn’t be doing? Let me go investigate. It’s that curiosity that’s missing. The human curiosity that says, when I said this to Kirstin, instead of smiling she crossed her eyes and furrowed her brows, and I don’t think she interpreted it the way I intended. That is a hundred percent missing in these models and these augmented intelligences.
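[Editor’s note: the evasion gap Shahin describes, an exact-sequence detector missing the same steps executed out of order, can be sketched as below. This is a toy under invented step labels, not a real product’s detection logic.]

```python
# An exact-sequence detector fires only on A..G in order; reorder the steps
# and it goes blind. A looser set-based check catches the reordered attack,
# but it is noisier, which is exactly why a human still triages the hits.
BAD_SEQUENCE = ["A", "B", "C", "D", "E", "F", "G"]

def sequence_detector(observed: list[str]) -> bool:
    """Fires only when the steps appear in the exact expected order."""
    return observed == BAD_SEQUENCE

def behavior_detector(observed: list[str], min_overlap: int = 5) -> bool:
    """Fires when enough of the bad steps occur, in any order.
    Harder to evade, but false-positive prone: a human evaluates the results."""
    return len(set(observed) & set(BAD_SEQUENCE)) >= min_overlap

attack = ["C", "D", "A", "B", "G"]
print(sequence_detector(attack))  # False: reordered steps slip past
print(behavior_detector(attack))  # True: same bad parts, flagged for review
```

The trade-off in the two functions mirrors the conversation above: the strict rule is quiet but evadable, the loose rule catches C, D, A, B, G but hands more work to the analyst.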
Kirstin:
So say you’re an organization looking at, I don’t want to say the next generation of security tools, but your security vendors right now are probably coming to you and saying, hey, our next upgrade is going to be this, you gave the SentinelOne example. How does an organization evaluate that? You’ve got your security tool with the AI integration they’re doing. Then you have your people, and this security vendor may be saying, hey, you don’t need your people to do X, Y, Z anymore. As a business consumer, how do you extrapolate what you’re really getting? You don’t want to think, oh, I can redirect three of my five people, and then end up with the gap you’re talking about. How do you see the synergy between what the market’s bringing out and how an organization needs to set themselves up to use it?
Shahin:
It’s a great question. A Forrester analyst recently wrote an article, which our friends over at CRN covered, asking: can AI replace the SOC analyst? Her conclusion was no, it cannot, and she said a lot of what we’ve been saying today. Whenever anybody tells you, our tool is the answer, you don’t need anything else, tell them to hit the road, or at least understand that it’s not true and you’re going to need multiple tools to solve a security problem. Security is not a space where one arrow solves the problem, unfortunately. You have to have multiple layers of defense. And whenever a vendor tells you that with their tool you can cut out the need for a SOC: not possible. Not possible because, as we just described, you cannot have a tool that doesn’t have a human inspecting its outcomes. It just doesn’t exist. It’s not a viable thing.
We have so much AI, augmented intelligence, built into all of the platforms we use. We have about 40 tools in our security stack, and each one has its own implementation of AI. Not one of them can run standalone. I have been dying for the moment somebody creates that tool, so I can scale that much faster and not need as many analysts doing the job. We do a tremendous job because we put a security orchestration platform behind it all to collect data and apply our knowledge, our rule sets, and other things to what those tools find. But ultimately, you still have to have a human inspecting every single one of the alerts. Every single one. Unless it’s a 100% phishing email or a 100% bad URL and you’re going to block it, then you can simply say, if you see this, block it. But that’s just an if-then-else. There’s no real genius to that.
So yes, we use orchestration, and long before AI we used orchestration to do a lot of that automation. I think that’s important. Now, what can AI do? It can accelerate our time to threat hunt. When we have that little piece of dust in cyberspace that tells us something’s wacky here, we can use the generative AI tools to ask: I see this dust, do any other systems have this same pattern? And quickly search the underlying data lake to understand: is this pattern repeating anywhere else in my environment? What else is connected to it? What other things might be tied to this that are going out to a known bad address? If we can accelerate threat hunting from that starting point, I’m not going to take out full analysts, but I will take out percentages of analysts, and I’ll get to threats faster. Today, we’re very proud of taking six months of dwell time down to six minutes. If I can take it down to six seconds, that would be pretty amazing, right? I would love to take factors of time away from the bad actors, because the faster we catch them, the faster we isolate them, the less impact to the customer. That is entirely our mission.
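[Editor’s note: the hunt described above, pivoting from one piece of "dust" to every host showing the same pattern or a known-bad destination, can be sketched as a query over event records. The event fields, process names, and addresses below are invented; 203.0.113.x is a reserved documentation range.]

```python
# Toy data lake: one suspicious process observed on ws-01 becomes the pivot.
events = [
    {"host": "ws-01", "process": "svch0st.exe", "dest": "203.0.113.9"},
    {"host": "ws-02", "process": "svch0st.exe", "dest": "203.0.113.9"},
    {"host": "ws-03", "process": "chrome.exe",  "dest": "198.51.100.7"},
]

def hunt(events, process_name, known_bad):
    """Return every host showing the same suspicious process
    or a connection to a known-bad destination."""
    return sorted({e["host"] for e in events
                   if e["process"] == process_name or e["dest"] in known_bad})

print(hunt(events, "svch0st.exe", {"203.0.113.9"}))  # ['ws-01', 'ws-02']
```

A real implementation would run against a data lake at scale, but the shape of the question is the same: does this pattern repeat anywhere else in my environment?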
Kirstin:
Well, and as a security company, you have the resources and the expertise to focus all of your energy and attention on that, right? Hearing you talk about each security tool adding its own AI element got me thinking. Say I’ve got five security tools and they’ve all added an AI component, and I’m on my own, I don’t have a DataEndure or somebody else, I’m managing those tools myself. Do those AI components work together, talk to each other? How do I get the benefit of those AI models? And then I started thinking, oh my gosh, I’m getting more complexity. What kind of person do you need? What kind of team? Now that AI is being baked into these platforms, which in a sense is fantastic, you’ve got the acceleration benefits and all of that, but if you don’t have a managed service provider helping you, who and what do you need to get the value out of all that?
Shahin:
So, I’ll share one story. We had a client who decided everything had been great, they’d had no attacks, no threats, they could handle security themselves. So they moved away from the service. Six months later they came back and said, “Oh my god, this is a lot of work. Can we come back, please?” That’s in slight jest, but the reality is that it is a tremendous amount of work to get these systems to work together: to interpret the alerts from the different systems, each in its own way, and then figure out how to correlate those alerts with other alerts. If you’re an individual company trying to do that with a stack of, let’s say, five tools, not 40 like us, and you don’t have those tools integrated into any kind of centralized console, then when you see an alert here and an alert there, you have to figure out whether they’re related, jumping back and forth between consoles to do it. Is that the same attack? A different attack? Am I being attacked by two different people? Those are all the things a security operations team is doing behind the scenes.
I used to use the duck-feet analogy. We’ve talked about it here a couple of times. And I heard, I think it was in the same book, no, it was a different book, I read a lot: picture the swan. White, beautiful feathers gliding across a smooth, really still body of water, but the feet are going crazy under the surface. That’s the SOC. It’s a thing of beauty on top. And this customer felt everything was fine: we haven’t had a single problem, we haven’t been attacked, the volume of tickets is low, everything’s good, we don’t need you guys anymore. Life must have gotten simple. In reality, as soon as they put their head under the water and saw what was happening, it was, holy cow, there’s a lot going on down here. There are alligators. There are snakes. We need to be careful.
So the short answer to the question is that there isn’t a system that lets these AIs talk to each other. In my wacky brain, I start thinking these systems are all going to start fighting with each other about who’s right and who’s wrong. But in reality, that’s where the human factor comes in, to say: I’m getting this information from multiple sources, I want to overlay it and find out where the patterns match and where they don’t. Not even a SIEM does that well today. There are a lot of platforms that try to take alerts from all these systems and pull them together, but the reality is that the data models from each system, and the alerts they send, aren’t in a format you can simply overlay. You have to convert their data into a normalized format so you can say: here are the source and destination patterns, here’s the attack type, here’s the MITRE technique. Then you overlay that and say, okay, that looks like a pattern match. That looks like the same attack.
If it’s hitting six different systems, coming in through email, endpoint, and Active Directory, we’ve got a problem. And that’s what we do. That’s literally what we have to do to address this problem across 25 countries and five continents, and we see security threats that no single company is going to see in its lifetime.
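[Editor’s note: the normalize-then-overlay step described above can be sketched as a mapping from each tool’s alert shape onto one schema of source, destination, and MITRE technique. The tool formats, field names, and addresses below are all invented for illustration.]

```python
# Each tool emits alerts in its own shape; before correlation, map them onto
# one normalized schema so that overlaying them is a simple comparison.
def normalize(alert: dict) -> dict:
    if alert["tool"] == "endpoint":
        return {"src": alert["agent_ip"], "dst": alert["remote_ip"],
                "mitre": alert["technique"]}
    if alert["tool"] == "email":
        return {"src": alert["sender_ip"], "dst": alert["victim_ip"],
                "mitre": alert["ttp"]}
    raise ValueError(f"unknown tool: {alert['tool']}")

a = normalize({"tool": "endpoint", "agent_ip": "10.0.0.5",
               "remote_ip": "203.0.113.9", "technique": "T1059"})
b = normalize({"tool": "email", "sender_ip": "203.0.113.9",
               "victim_ip": "10.0.0.5", "ttp": "T1059"})

# Same MITRE technique and overlapping addresses across two different tools:
# likely the same attack, and worth a human analyst's attention.
print(a["mitre"] == b["mitre"])  # True
```

A production pipeline would handle dozens of formats and far messier fields, but the principle is the one stated above: convert first, then overlay and look for the pattern match.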
Kirstin:
So my takeaway here is: there is a place for augmented intelligence, and it does not displace the human factor that is necessary for an effective security posture. That big, glaring neon light, that is the takeaway. For an organization trying to figure out how to get that mix right, what would your recommendation be as we close out?
Shahin:
So, it’s really all about scale. For smaller companies, I don’t mean to be rude, but don’t waste your time. You’re never going to get the economies of scale to build out a security stack that would solve your problem. You must talk to an MSP, and hope they have good, solid security experience to solve those problems for you.
When you start to get to, realistically, 10,000 employees, but let’s say somewhere between 5,000 and 10,000, at that point you’ve probably invested in a lot of tools. You’ve probably invested in a decent security team, a decent CISO or security engineers who know what they’re doing. Now you need to put process, procedure, and security orchestration in place to manage all these different tools.
But the world is getting to a point where just having ten consoles isn’t going to solve the problem. You need to correlate that information. So now you need to build your own data lake, and when you build your own data lake, you need to build your own correlation rules. As for SIEMs, I wrote an article not too long ago saying the SIEM is basically dead. It’s not going to make it into this next generation of security, because with the advent of, I was going to say XDR, and that’s a whole other marketing issue I have with the world today, but with the advent of what XDR is supposed to be and where it’s supposed to go, the concept of a SIEM really goes away. You’re looking at detection across multiple factors, and that’s embedded in the XDR platform, which may or may not include a SIEM. It may just be data correlation.
So as you go forward, you need to start looking at managed solutions. A lot of vendors are putting out managed solutions, but they’re single-tool answers that don’t manage any of your other tools. And you cannot, cannot, cannot do security with one tool. At all. There are five layers of security, and you need to address all five: email, DNS, identity, Active Directory, and network. If you’re not touching all five layers, and you don’t have a security operations team looking at the data from your, on average, 20 tools, you’re not doing security well.
Kirstin:
You’ve got vulnerabilities.
Shahin:
Yeah. So that was the long answer to say, it’s hard to do.
Kirstin:
Yeah.
Shahin:
Call us.
Kirstin:
Get help, get help. So, while we wrap up, Shahin is going to go to a baseball game and have a beer, and the rest of us are going to have angst about what he just laid on us. But what I would say, and we mention this often: we’re here to help. We have several complimentary assessments. One helps you understand your investments, your timeframes, any gaps you have, and what the opportunity might be to move to a more secure platform that fills those gaps economically. Another helps you very quickly understand where your vulnerabilities are right now, so we can look at your ecosystem: what’s running, what’s working, are the controls you have doing what they’re supposed to do? Really, we’re here to help. Our goal is to give the advantage back to the good guys. So take us up on it. Reach out. And with that, we will sign off and we’ll see you in July. Take care.