Ontinue’s Practical SOC Metrics Library: Measuring What Actually Matters

In security operations, you can’t improve what you don’t measure. Yet many security teams struggle with metrics that are either too abstract for decision-making or too tactical to demonstrate business value.

Ontinue has developed a comprehensive metrics library that bridges this gap. It provides security leaders with a practical framework to measure performance across the entire detection and response lifecycle—and, crucially, to quantify the impact of AI on SOC efficacy.

This is not about AI theater. These metrics aren’t designed to create impressive dashboards or justify technology purchases with vanity numbers. They’re built to measure real operational outcomes: faster detection, higher-quality investigations, reduced customer burden, and demonstrable risk reduction. Every metric ties back to one of four fundamentals of security operations:

  • Speed
  • Quality
  • Governance
  • Business impact

A Measurement Framework, Not Just Definitions

What makes Ontinue’s library uniquely practical is that it doesn’t just define what to measure—it explains how to measure it.

Each metric includes:

  • Specific calculation methods
  • The exact measurement points needed from existing systems

This transforms abstract concepts like Mean Time to Investigate into concrete data collection requirements (for example: alert_created_time, validated_time, and the timestamps in between).
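Given those measurement points, the metric reduces to timestamp arithmetic. A minimal sketch, assuming alerts arrive as dicts with ISO-8601 strings (only the `alert_created_time` and `validated_time` field names come from the library; the function itself is illustrative):

```python
from datetime import datetime

def mean_time_to_investigate(alerts):
    """Mean Time to Investigate, in minutes.

    Each alert is a dict carrying ISO-8601 'alert_created_time' and
    'validated_time' strings. Returns 0.0 for an empty input.
    Illustrative sketch, not a fixed schema.
    """
    durations = []
    for alert in alerts:
        created = datetime.fromisoformat(alert["alert_created_time"])
        validated = datetime.fromisoformat(alert["validated_time"])
        durations.append((validated - created).total_seconds() / 60)
    return sum(durations) / len(durations) if durations else 0.0
```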

The library distinguishes between:

  • Baseline measurement points – what traditional SOCs can track today
  • AI-instrumented measurement points – additional telemetry needed to understand AI’s contribution

This dual-track approach lets organizations measure AI impact through direct comparison, rather than assumption.
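The direct comparison that the dual-track approach enables can be sketched as a simple cohort split. The `ai_assisted` flag and `resolution_minutes` field are assumed names for illustration, not part of the library:

```python
def compare_cohorts(cases):
    """Compare mean resolution time (minutes) between AI-assisted
    and baseline cases. Each case is a dict with an 'ai_assisted'
    boolean and a 'resolution_minutes' number (illustrative names).
    """
    ai = [c["resolution_minutes"] for c in cases if c["ai_assisted"]]
    base = [c["resolution_minutes"] for c in cases if not c["ai_assisted"]]

    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    return {"ai_mean": mean(ai), "baseline_mean": mean(base)}
```

The point of the split is that both cohorts are measured with the same baseline instrumentation, so any gap is observed rather than assumed.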

Built for Every Stakeholder

The Ontinue metrics library recognizes a fundamental truth: different audiences need different views of security performance.

  • Board members care about strategic outcomes and risk reduction.
  • CISOs need operational insights into service quality and continuous improvement.
  • Security managers require tactical metrics to optimize workflows and resource allocation.

The library addresses this by mapping 50+ metrics across three “altitude levels” and clearly identifying which metrics matter to which audience:

  1. Strategic
  2. Operational
  3. Tactical

This ensures boards aren’t drowning in tactical details while frontline managers get the operational visibility they need.

Making AI Transparent and Trustworthy

Ontinue’s approach makes AI measurable, not magical. Every traditional metric has AI‑instrumented measurement points that track:

  • When AI contributed
  • How long it took
  • What confidence level it assigned
  • Whether human approval was required
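One way to capture those four questions is a per-step record attached to each case. Every field name below is an illustrative assumption, not Ontinue's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AIContribution:
    """One AI-instrumented measurement point: what the AI did,
    how long it took, how confident it was, and whether a human
    had to sign off. Field names are hypothetical.
    """
    case_id: str
    step: str                  # e.g. "enrichment" or "disposition"
    started_at: datetime       # when AI contributed
    duration_seconds: float    # how long it took
    confidence: float          # model-assigned confidence, 0..1
    human_approval_required: bool
    human_approved: Optional[bool] = None  # None if no approval was needed
```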

This transparency builds trust.

  • A CISO can show the board that AI‑assisted cases resolve 40% faster while maintaining equivalent Investigation Quality Scores (measured via qc_sample_id and qc_checklist_scores with an AI‑assisted attribute).
  • A security manager can demonstrate that AI suggestions are accepted 85% of the time for enrichment but only 60% for disposition recommendations—actionable data for tuning confidence thresholds and approval workflows.
  • When quality control reveals AI Error Escape Rate trending up as confidence thresholds increase, teams gain the feedback loop needed to balance speed and safety.
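A per-type acceptance breakdown like the enrichment-vs-disposition example above can be derived from a log of AI suggestions. A sketch with assumed record fields:

```python
from collections import defaultdict

def acceptance_rate_by_type(suggestions):
    """Acceptance rate of AI suggestions, grouped by suggestion type.

    Each record is a dict like {'type': 'enrichment', 'accepted': True}
    (illustrative names). Returns a dict mapping type -> rate in 0..1.
    """
    totals = defaultdict(int)
    accepted = defaultdict(int)
    for s in suggestions:
        totals[s["type"]] += 1
        if s["accepted"]:
            accepted[s["type"]] += 1
    return {t: accepted[t] / totals[t] for t in totals}
```

Tracked over time and alongside quality-control scores, a breakdown like this is what turns confidence-threshold tuning into a data-driven decision rather than guesswork.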

A Framework for Operational Excellence

Ontinue’s metrics library represents a maturation of security operations measurement. It doesn’t just tell you to measure “AI impact”; it specifies:

  • Exactly which timestamps to capture
  • Which flags to track
  • How to calculate meaningful comparisons

The framework explicitly rejects vanity metrics. It doesn’t measure “number of AI enrichments performed” or “percentage of alerts that touched AI,” because those numbers can look impressive while delivering zero operational value.

Instead, every metric ties to:

  • Speed
  • Quality
  • Governance
  • Business impact

For security leaders evaluating AI investments, this library provides the measurement blueprint to demand operational proof from vendors. For teams already using AI, it offers the instrumentation needed to identify:

  • What’s working
  • What needs tuning
  • Where human expertise remains essential

In an era where security teams face mounting pressure to do more with less, having the right metrics isn’t just helpful—it’s essential. And having honest, measurable metrics that reveal real value rather than activity is what separates genuine operational excellence from expensive theater.
