
The AI in ION IQ: Balancing Potential with Responsibility

In our previous series of blog posts, “The State of AI in Cyber Security,” we discussed the enormous potential of AI in cybersecurity. In particular, advances in generative AI, such as Large Language Models (LLMs) like GPT, enable new applications for defenders. These models can act as a force multiplier, making defenders more productive: they can generate training and attack-simulation data, analyze data, and support communication between defenders, to name just a few use cases.

However, with great potential comes great responsibility.

It’s no secret that generative AI models can err: they may hallucinate responses, reflect biases from their training data, or produce overly generic, non-specific answers, and they can pose risks around data privacy and copyright. That is, if they’re not used with proper care and know-how.

As an AI-powered MXDR provider, Ontinue makes it a top priority to harness AI responsibly, using it to provide the best possible service while protecting our clients and their data.

Human-Centric AI: Decision Support, Not Decision Making

Our philosophy revolves around a fundamental principle: AI is a tool to augment people, not to replace them as decision-makers. Recognizing the critical importance of every security decision, we do not use AI to make autonomous judgments. Instead, we strongly believe in the ‘human-in-the-loop’ approach: with AI’s support, our experts, as well as our customers’ experts, make better-informed decisions, faster.

This combination of human and AI strengths results in better protection and prevention, and it ensures correct responses to security incidents. Speed is of the essence in our domain: by leveraging AI for rapid data analysis and threat identification, we reduce response time and thereby limit an attacker’s window of opportunity.

Let’s look at the example of our Incident Conviction model, which is part of ION IQ. This model scores security incidents according to their likelihood of being a True Positive or a Benign Positive. We don’t use the model to automatically close incidents that are likely Benign Positives. Instead, we send the human analyst a “likelihood score,” along with an explanation of the underlying factors that contributed to it.
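To make the pattern concrete, here is a minimal sketch in Python. It is not Ontinue’s actual model: the feature names, weights, and logistic scoring are hypothetical. It only illustrates how a likelihood score and its contributing factors can be surfaced to an analyst rather than used to trigger an automatic action.

```python
import math
from dataclasses import dataclass

# Hypothetical feature weights for illustration only; a production model would
# be trained on historical incident outcomes, not hand-written like this.
WEIGHTS = {
    "known_admin_activity": -1.8,    # routine admin work pushes towards Benign Positive
    "signin_from_new_country": 1.2,  # unusual sign-in location pushes towards True Positive
    "malware_signature_match": 2.5,
    "asset_criticality": 0.9,
}
BIAS = -0.5


@dataclass
class Conviction:
    likelihood_true_positive: float  # between 0 and 1
    contributing_factors: list       # (feature, contribution), most influential first


def score_incident(features: dict) -> Conviction:
    """Return a likelihood score plus the factors behind it, for an analyst to review."""
    contributions = {name: WEIGHTS.get(name, 0.0) * value for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    likelihood = 1.0 / (1.0 + math.exp(-logit))
    ranked = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    return Conviction(likelihood, ranked)


if __name__ == "__main__":
    incident = {"signin_from_new_country": 1.0, "malware_signature_match": 1.0}
    result = score_incident(incident)
    # The analyst sees the score and its explanation; nothing is closed automatically.
    print(f"True Positive likelihood: {result.likelihood_true_positive:.0%}")
    for factor, contribution in result.contributing_factors:
        print(f"  {factor}: {contribution:+.2f}")
```

The key design choice is that the output is decision support: a score and its explanation for a human to act on, never an automated verdict.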

Addressing Limitations in Generative AI

Applying the principles outlined above ensures that we use AI to enable our experts to provide a better service to our customers. Additionally, we weigh the risks and limitations of generative AI in every design choice we make when implementing new skills for ION IQ.

In another series of blog posts, we will dive deeper into how we build AI features into ION IQ, and discuss how we address known limitations of LLMs:

  • How LLMs work, and how we ensure they produce specific answers relevant to our customers rather than generic ones
  • How we use prompt engineering to ground answers in facts rather than in biases from the training data (a small sketch follows this list)
  • How we use automated testing and measure truthfulness to mitigate hallucinations
  • How we improve performance to limit response time, even when using complex sequences of LLM calls
  • How we deploy GPT in Azure Cognitive Services and mitigate privacy risks by ensuring that no customer data flows into the model or is used to improve GPT
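As a preview of the prompt engineering point above, here is a small, hypothetical sketch. The incident facts and wording are invented for illustration and are not Ontinue’s production prompts, but the pattern of instructing the model to answer only from supplied facts, and to say so when the facts don’t cover the question, is what keeps answers grounded rather than biased by training data.

```python
# Hypothetical sketch of a grounded prompt; facts and wording are illustrative only.
INCIDENT_FACTS = [
    "Alert source: endpoint protection on host WS-1042",
    "Detection: credential dumping tool blocked at 14:32 UTC",
    "User signed in from an unrecognized location shortly before the detection",
]


def build_grounded_prompt(question: str, facts: list) -> str:
    """Embed verified incident facts in the prompt and restrict the model to them."""
    facts_block = "\n".join(f"- {fact}" for fact in facts)
    return (
        "You are assisting a security analyst.\n"
        "Answer strictly based on the facts below. "
        "If the facts do not contain the answer, say that you cannot tell.\n\n"
        f"Facts:\n{facts_block}\n\n"
        f"Question: {question}\n"
    )


if __name__ == "__main__":
    print(build_grounded_prompt("What was detected on WS-1042?", INCIDENT_FACTS))
```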

Watch for future technical deep dives into these topics.

Article By

Theus Hossmann
Chief Technology Officer

Theus Hossmann is Chief Technology Officer for Ontinue. He is responsible for everything around data, data science and AI, and leads Ontinue’s team of expert data scientists and data engineers. Theus has published dozens of papers on applied AI and machine learning in top-tier ACM and IEEE conferences and journals. He earned his PhD in Applied Machine Learning from ETH Zürich, Switzerland.