I think let’s get started as people join in. So welcome to today’s webinar on demystifying the use of agentic AI in security operations. We’re really thrilled to have all of you here. Thank you for taking time out of your busy day to be with us. I am Vijay Viswanathan. I’m with the product marketing team at Ontinue. And I’m joined by Iris Safaka, who is our AI engineering lead. Hello. Hi, everyone. Thanks, Vijay. So let me start with what we hope you get out of today’s session. We have three objectives. First, as the title implies, we would like to give you a clear understanding of agentic AI: what it is, what it isn’t, and, importantly, how it’s different from another area of AI that’s fast moving and very exciting, which is AI assistants. These two areas, agentic AI and AI assistants, often get conflated. Sometimes AI assistants are mislabeled as agentic AI. So we hope to offer you, by the end of today’s session, a clearer picture. Secondly, we’d like to show you the practical application of agentic AI in SecOps. At Ontinue, it’s well over a year that we’ve been applying agentic AI to improve the effectiveness and efficiency of how we deliver managed SecOps; that’s the work that Iris and our team have been doing. And finally, we’d like to touch upon a relatively recent announcement, the new Microsoft Security Store, which was announced a couple of months ago. So with that, let’s get into it. To begin, to understand agentic AI in this first part of today’s session, I’d like to compare and contrast it with other types of AI and look at it through the lens of a security operations scenario. So imagine this: it’s the dead of night, it’s two fourteen AM, and a high severity alert pops up. That’s our starting point, and now let’s walk through three distinct universes. The first universe is deterministic automation.
The second universe, where we start to move into the probabilistic, is GenAI assistants. And the third universe, the focus of today’s presentation, is agentic AI. So if you step into universe one, this is deterministic automation. This is rules-based automation, where the logic needs to be explicitly defined: for a given input, these are the exact steps, and with that you get a certain output with certainty. Think of this as classic automation, if-then rules. It’s beautifully suited to a highly controlled environment, and it’s extremely efficient. As shown on the slide here, think of assembly lines. It also has applications in security operations. So let’s see how deterministic automation plays out in this SecOps scenario. There’s a high severity alert that pops up in the middle of the night. Based on the alert information, there’s initial threat-intelligence-based enrichment, and then a series of predefined rules, for example, checking the reputation of the IP, the device risk score, and so on. One of the rules matches, so automation is able to see it through to completion. Wonderful. But now let’s think of another situation. Suppose there’s an edge case, and there isn’t logic encoded to handle this specific input. In this case, automation cannot take things through to resolution. To summarize this first universe of deterministic automation: it’s very well suited to handling known scenarios. And to really drive home this point of how well suited it is to a narrow set of applications, I’ll make an analogy to a race car. On the left side, you see it’s going to be the fastest on the racetrack, but the moment you take it off the track, it simply can’t operate. It can’t handle all those bumps and unknowns. And that brings us back to SecOps. For known scenarios, think known alert types, known threat patterns.
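To make the if-then idea concrete, here is a minimal sketch of rule-based alert triage like the flow just described. The rule names, alert fields, and thresholds are all illustrative assumptions for this webinar recap, not Ontinue's actual logic:

```python
# Hypothetical sketch of deterministic, if-then triage.
# Fields and thresholds are illustrative, not a real rule set.

def triage(alert: dict) -> str:
    """Return an action for known alert patterns, or escalate on edge cases."""
    # Rule 1: known-bad source IP reputation -> contain immediately
    if alert.get("ip_reputation") == "malicious":
        return "isolate_device"
    # Rule 2: high device risk score combined with an impossible-travel signal
    if alert.get("device_risk", 0) >= 80 and alert.get("impossible_travel"):
        return "force_password_reset"
    # Rule 3: benign, low-risk pattern -> close automatically
    if alert.get("ip_reputation") == "benign" and alert.get("device_risk", 0) < 20:
        return "auto_close"
    # Edge case: no rule matches, so automation cannot finish the job
    return "escalate_to_analyst"
```

The last branch is the key limitation discussed above: any input the rules don't anticipate falls out of the automation and lands with a human.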
Deterministic automation has a very important role to play. It’s going to be the fastest. It’s going to be the most efficient. And that’s the reason it’s a capability that’s widespread in SecOps: it’s available in many XDR platforms, SIEM platforms, and, of course, SOAR tools. But as you all know, and as much as we want it to be, SecOps is not an assembly line. It’s dynamic. We have adversaries who are constantly innovating. They have new tools. They have new attack vectors. So deterministic automation has an important role to play in SecOps, but its limitations become very clear as unknowns are introduced. So with that, let’s teleport into the next universe, the universe of AI assistants. This is something that in the past couple of years we’ve all become very familiar with. We probably use them in our personal lives, in our work lives. They’ve really been transformational. They help bridge that gap between human language and machine logic. With AI assistants, we can often just chat our way through to things: we prompt them, and they respond. So we go back to our SecOps scenario. Again, the alert comes in, but in this case, let’s say it gets escalated to an analyst. The analyst has a process in which they work through an incident, and this is where a GenAI assistant can be helpful. An example of a GenAI assistant that you might have heard of is Microsoft Security Copilot. You can imagine that in a SecOps setting, a GenAI assistant can be used to do things like summarize incidents; it’s extremely good at that. It can be used to write hunting queries. It can be used to explain MITRE techniques. So GenAI assistants, as the name implies, are great at assisting humans. They’re great at doing some of the tedious work in SecOps, but ultimately, they’re reactive. They need to be prompted.
And so with that, just to summarize GenAI assistants in SecOps: they can be used to accelerate triage, and they can improve quality, but on their own, they can’t run an incident to ground the way a human can. And that takes us to our third universe, which is the universe of agentic AI. Here, we can think of AI operating not just as an analyst, not just as a top-tier analyst, but really as a team of analysts. It’s AI that can plan, that can reason, that can reflect, that can refine and prioritize, and continuously improve based on feedback. This is AI that works towards a goal rather than simply executing rules. So, again, let’s take it back to our SecOps scenario. We have that alert that comes in. It gets escalated to an analyst. And here, you can see that agentic AI can operate like a top-tier analyst, like a tier three analyst. It can start with a hypothesis, use a variety of tools to get more context, run queries, and evaluate the findings returned by those tools. So it’s doing the grunt work of a junior analyst, it’s doing the advanced investigation and reasoning of a senior analyst, and it’s doing all of this automatically and autonomously. That’s really where the power of agentic AI comes in. And that’s why we like to think of agentic AI as more than a tool. We like to think of it as a teammate, a teammate that, of course, has the benefits of a machine-based teammate. It can work nonstop. It can scale infinitely. It doesn’t burn out. And for all of us as defenders, that’s an incredibly powerful paradigm to operate in, to counter the increase in attacks and attackers. So as I mentioned at the top of this presentation, we have a demo for all of you. But before that, I’d like to summarize this first part on the three universes that we looked at. Deterministic automation: rules-based, fast, efficient, great for known scenarios. And we have AI assistants.
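The hypothesis-tools-evaluate behavior described here can be sketched as a simple loop. This is a toy illustration of the agentic pattern (plan, gather context with tools, reflect, conclude); the tool interface, the `suspicious` flag, and the stopping heuristic are assumptions made for this example, not the actual incident investigator:

```python
# Hedged sketch of an agentic investigation loop: form a hypothesis,
# call tools for context, evaluate findings, stop when confident.
# Tool shape and the confidence heuristic are illustrative assumptions.

def run_investigation(alert, tools, max_steps=5):
    hypothesis = f"Possible compromise behind alert {alert['id']}"
    evidence = []
    for _ in range(max_steps):
        # Pick the next tool the plan calls for (round-robin for brevity;
        # a real agent would choose based on the hypothesis and prior findings)
        tool = tools[len(evidence) % len(tools)]
        finding = tool(alert)       # gather more context
        evidence.append(finding)    # record every step for auditability
        # Reflect: stop once findings support a confident assessment
        if sum(f["suspicious"] for f in evidence) >= 2:
            return {"hypothesis": hypothesis,
                    "verdict": "true_positive",
                    "evidence": evidence}
    return {"hypothesis": hypothesis,
            "verdict": "needs_human_review",
            "evidence": evidence}
```

Note the contrast with the rules-based sketch earlier: nothing here enumerates specific alert patterns; the loop pursues a goal and decides for itself which tool to run next and when it is done.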
They’re useful for support, but ultimately, they’re reactive. They need to be prompted. And then we have agentic AI, which has a high degree of cognitive autonomy. One way to think about it: if you look at it left to right, at the far left there’s no cognitive autonomy, and as you move to the right, you get to higher levels of cognitive autonomy, all the way to the far right, where it’s really advanced autonomy. So universe three is not something that’s distant. It’s not some mythical reality. It’s actually something that we at Ontinue are living in. We already live in all three of these universes. We use all three of these approaches in our delivery of managed SecOps. And what I’d like to show you in the next part of this presentation is how we apply agentic AI in SecOps. One of the ways we do that is with our incident investigator agent. So I’ll stop sharing the presentation and switch over to screen sharing. Great. So what you see here is where our SecOps work happens: this is the platform that our analysts in our cyber defense center use to investigate and resolve incidents. I’ll show you an example of what the experience is like for them with a specific incident, and how the investigation is powered by agentic AI. If I click through to this incident, you can see that when they land on this specific incident detail, the first thing they see is an incident investigation report. And you can see it’s made to be extremely actionable. What’s presented here, the quick summary and the next steps that they should consider, is basically the work that a top-tier analyst would have done, but done by our incident investigator agent, and done in a matter of minutes. It’s surfaced to our analysts when incidents escalate to them, so that they can see what might have happened and what they should consider next. So it’s extremely actionable in that sense.
You can see the bullet points, with key elements of the findings highlighted in bold, and so on. What I’d like to also highlight is these buttons here. A key element of how we apply agentic AI at Ontinue is that we keep a human in the loop, and that’s evidenced by the fact that the agentic AI does the work, but it surfaces it to a human, who then uses their judgment to see whether they agree with it, or whether they actually have a different view. They can provide immediate feedback on this overall report, or, as I’ll show you in a bit, on individual steps that the incident investigator took. So that’s right here: they can give immediate feedback. But a key part of using AI in a responsible way is decision transparency. Our analysts don’t need to take the incident investigator’s finding at its word. They can go in and look at how the incident investigator came to its conclusions. There’s a range of information presented here. I’ll scroll down to this section. They can see the hypothesis that the agent came up with, its assessment, the key findings, and then the part that I think really underscores the decision transparency is this section on evidence. If they click in here, you can see the exact tool that the incident investigator agent decided to use, what input it provided, and what output it received. They can run these queries themselves. So basically, everything can be replicated, everything is auditable, and they have full transparency into how the agentic AI came to a particular decision. Iris, I don’t know if you maybe want to mention something on the feedback. Am I allowed to give some secret sauce to our audience? I think, yeah, it’s great. It’s very powerful to demo this, because sometimes we remain in the sphere of the abstract, but we see this in practice every day. It’s really, really powerful.
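The evidence section described here boils down to a simple principle: log every tool call with its input and output so an analyst can replay it. A minimal sketch of that record-keeping might look like the following; the class and field names are hypothetical, chosen just to illustrate the auditability idea:

```python
# Illustrative sketch of decision transparency: each tool invocation the
# agent makes is recorded with its input and output, so an analyst can
# audit the reasoning or re-run the same query themselves.
# Field names are assumptions for illustration.

import json

class EvidenceLog:
    def __init__(self):
        self.entries = []

    def record(self, tool_name, tool_input, tool_output):
        """Append one auditable tool call to the log."""
        self.entries.append({
            "tool": tool_name,
            "input": tool_input,    # e.g. the exact query that was run
            "output": tool_output,  # what the tool returned
        })

    def export(self):
        """Serialize the full trail so nothing about the decision is opaque."""
        return json.dumps(self.entries, indent=2)
```

Because the input is stored verbatim, "they can run these queries themselves" falls out for free: the analyst copies the recorded input into their own tooling and checks that the output matches.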
You talked about AI assistants and how we move from reactive systems that need to be prompted to give us an answer to autonomous AI systems. And that’s exactly what we see here. What we have implemented at Ontinue is a fully fledged agentic incident investigator that is built using a lot of data and a lot of internal knowledge of Ontinue, because, as you just mentioned, the agent needs to come up by itself with the investigation plan and the hypothesis, and come up with an assessment at the end. We all know that base models do not possess all the knowledge needed, the specific domain knowledge for security operations, to perform this task at the level that a human would. So you just saw this little glimpse, with the like button here, which is what we ask for proactively from the defenders in the UI. But what we also do is watch how the humans interact with this report, and what else they do to conclude an investigation, apart from the explicit feedback they give to the system. That’s very important. Our defenders are constantly triaging hundreds, thousands of incidents every week. This is knowledge that we capture and give to the investigator agent here in order to plan future investigations. That’s one very important thing. The second, and you touched on it a lot: we see here in the output some tools that have been executed. As every defender would do, in order to conclude an investigation they need some tooling at their disposal. This includes checking IP reputations using some open source tool, but mainly going into the customer environment and executing sophisticated KQL tools, in our case, to get insights like sign-in logs, device timeline, risk reputation for users, etcetera.
Now, just as the defenders need this tooling, the AI also needs it. And that’s another secret here: your agent is only as powerful as the tools it has at its disposal to gather information and, eventually, to take actions as needed. At Ontinue, we’re very lucky because we have a very deep integration into our customers’ environments through our platform, to conduct investigations on the spot. We don’t necessarily need to ingest customer data. These tools will execute the query on the spot, call an API, or use a tool internally or an open source tool as needed to gather the information and conduct an end-to-end investigation autonomously. So, yeah, those are just two small secrets of our success. Of course, we are in this beneficial position of having very good analysts and professionals constantly giving feedback and augmenting the system with their intellect and their actions every day. Excellent. And, actually, I’d like to touch upon that third point. At Ontinue, we’re privileged to be able to serve hundreds of organizations. And you lead a team of AI engineers, and you don’t work in a silo. You work very closely alongside our security analysts, our cyber defense center. So we’re uniquely positioned to be able to develop these powerful agents. And the analysts are very, very happy to use those tools, because we know how defender teams get burned out with repetitive work, for alerts that they should never have had to read. But for one reason or another, maybe some automation is not in place yet, or it’s not tuned correctly, or there are new detections. You mentioned it in the beginning, the edge case. Actually, there are many edge cases. As we speak, we follow new types of threats, even AI-generated threats; we get tons of new types of detections every week, to the point that automation might lag behind. This is a known problem.
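The point that "your agent is only as powerful as its tools" is often implemented by registering each capability (IP reputation, sign-in logs, device timeline, and so on) behind a common interface the agent can dispatch to. Here is a hedged sketch of that pattern; the tool names and return values are hypothetical placeholders, not Ontinue's actual integrations:

```python
# Illustrative tool registry: each capability the agent may need is
# registered under a name behind one common calling convention, so the
# planner can pick tools dynamically. Names and outputs are hypothetical.

TOOLS = {}

def register(name):
    """Decorator that adds a function to the agent's tool catalog."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register("ip_reputation")
def ip_reputation(target):
    # In practice: query an open source reputation service
    return {"target": target, "verdict": "clean"}

@register("signin_logs")
def signin_logs(target):
    # In practice: run a KQL query in the customer environment, on the spot,
    # rather than ingesting the data ahead of time
    return {"target": target, "failed_logins": 0}

def call_tool(name, target):
    """Uniform entry point the agent uses for every capability."""
    return TOOLS[name](target)
```

The design choice mirrored here is the "on the spot" one from the discussion: each tool reaches into the environment when invoked, so adding a capability means registering one more function rather than building a new ingestion pipeline.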
As you said, automation runs in perfect conditions when it knows what it has to do, when we know what we need to encode. But when we hit that edge case, human defenders need support, and they will get it through AI assistants, proactively or reactively. Better yet, they can already work hand in hand with an agentic AI system, an AI agent, in our case an incident investigator agent. But not only that: there are threat intelligence agents, there are many agents that can help, as you said, like a small team, with the investigation, triage, and eventually response to a security incident. Excellent. So I hope that you all have a clearer picture of those three universes: deterministic automation, which I think is well understood, and hopefully that helped clarify the differences between AI assistants and agentic AI. And secondly, how in practice we apply agentic AI at Ontinue. Now the third and final topic I’d like to touch upon is the Microsoft Security Store. The Microsoft Security Store is a new security storefront. Microsoft announced this, I believe it was September thirtieth, and it offers a whole range of solutions. I’ll share my screen and show it to you live. But I think what’s especially relevant to today’s discussion is that it has a section dedicated to AI agents. In that section, you’ll find a wide range of Security Copilot agents: agents developed by Microsoft and agents developed by partners, including Ontinue. So let me quickly give us a live view of that. If you go to security store dot Microsoft dot com, this is the storefront you’ll see. If I search for Ontinue, you can see a range of things that pop up. I’m going to filter for agents, and this is one of two agents that we offer in the Microsoft Security Store. This is our Posture Advisor agent. Let’s look into some of its details.
But before we go into the details of this, Iris, I think it would be interesting for the audience if you shared the journey we’ve been on with Microsoft in developing Security Copilot agents. Yeah, sure. As you said, the Security Store by Microsoft is a new piece that recently became available to everyone, in general availability, but the conception and implementation of it started earlier this year. And at Ontinue, we were very lucky to be part of a small group of partners, a small cohort that worked very closely with product managers of Security Copilot and the Security Store from Microsoft to help them shape this new offering. This is an important piece from Microsoft that follows on the promise of helping customers continuously strengthen their posture and capitalize on their investments. And it’s a great opportunity for customers to look for, purchase, deploy, and manage security solutions and agents independently, developed by Microsoft and by partners, as you said. Maybe you can filter, as we speak, in the publisher drop-down; if you remove Ontinue, then we just have Microsoft. I don’t know how many of our audience have Security Copilot enabled, but this is a great opportunity to independently deploy these agents and see their outcomes. There are different ones. We see the phishing triage agent, which was one of the first that came out, the threat intelligence agent, data loss prevention for Purview, conditional access optimization agents, etcetera. Now one can just click, deploy, and observe the outcome independently in their environment. And that’s also what we wanted to do with this collaboration with Microsoft: understand how we can complement what Microsoft is offering, how we can create agents specific to Ontinue customers and their needs, and help our cyber advisory consultants help customers advance their security posture.
Hence this Posture Advisor agent. This is a free offering; we don’t offer it only to Ontinue customers, since, as we speak, it’s currently available in the Security Store. And maybe you can show, Vijay, how one could get it. Sure. Yeah. And this is where the Security Store is incredibly powerful. As a security admin or an IT admin, you can find agents that are relevant to you. And if you scroll down, you can deploy it right from within here. If I hit “Get it now” and switch to this tab, you get to a screen like this, where you can choose a relevant billing subscription, resource group, and so on. And the power is that this agent is deployed directly into your Microsoft Security Copilot, with all the governance and controls that come with that. It’s worth mentioning also that the store is built on top of the traditional Azure Marketplace, for those who know it, and all the benefits that come with that, like unified billing and the extra benefits that Microsoft customers get through the traditional marketplace, are also applicable to the Security Store. Yeah. And if you’re an E5 customer, there was recently an announcement about Security Copilot units being included with E5 licenses. So, again, this is a place where you can go to get agents that are relevant to your needs, and you can deploy them directly from here. Yeah, that’s an important topic. You mentioned the security compute units, and the people who started with Security Copilot back in the day know about security compute units. Microsoft made an exciting announcement at Ignite some weeks ago that all E5-licensed Microsoft customers are eligible for a specific amount of security compute units, SCUs. These are the units that are required for running Security Copilot and the agents on top of it. Excellent. So you can find all this at security store dot Microsoft dot com.
With that, I’ll stop sharing and see if there are any questions. Yes, Chris Taylor from Ontinue just shared our recent white paper on agentic AI. Please have a look at that. It talks through some of the points that we discussed today, specifically the difference between deterministic automation, GenAI assistants, and agentic AI. As we give our audience time to type in their questions, or just unmute, since we’re not that many, people can simply unmute themselves and ask. Yep. Yep, you can unmute yourself if you’d like. Right. So as we wait for people to come up with questions, an important topic to mention here is that with the Security Copilot agents, we now have the opportunity to run agentic AI at the edge, in our customers’ own environments. That’s an important point. You showed very nicely how we have been implementing agentic AI, developing agents internally as part of our ION platform. And we touched on the importance of having deep integration into our customers’ environments through tooling, to perform autonomous investigations and other advanced actions. But no matter how deeply we are integrated, and how much access we have to the customer’s environment, there are things that, for good reason, we don’t have access to as an MXDR provider. These are things that need elevated permissions, like accessing conditional access policies, for example, licensing information, Defender settings, and so on. We certainly cannot act on those automatically from our side, from our solution. That said, with this Security Copilot agent development platform that we now have, and the distribution platform through the Security Store, we are able to automate more and more insights and recommendations for our customers, directly at their edge. This is an incredibly powerful capability that we were able to work with, helping our customers more effectively, mostly on the prevention side. Yeah.
Yeah, that’s an excellent point. I mean, on our side, we’ve been using it, abstracted away, to just deliver better outcomes for our customers with the MXDR service. But now, with Security Copilot agents running directly in your environment, they have a deeper level of access, as Iris mentioned. So there’s a role for an agent to run in both these distinct areas.