By Rich Perkins, Principal Sales Engineer, Prophet Security

Your security spend has roughly doubled in six years. Your time-to-investigate and respond hasn't moved. Your CFO is asking why the security headcount keeps growing while the metrics that matter to the business don't.

The architecture under your SOC is the reason. Not your team. Not your tooling investment. Not your hiring funnel. The operating model your program inherited assumed human-driven alert triage at the volume the business was producing five years ago, and the business stopped producing alerts at that volume a long time ago.

This is a piece about why hiring more analysts won't close the gap, what changes when you fix the model instead, and the specific limitations and questions that should shape any AI SOC evaluation. It includes a four-question diagnostic you can run on your own program in the time it takes to finish a coffee.

The math the industry doesn't want to admit

Google Mandiant's recent M-Trends reporting puts global median dwell time at 14 days. The same report found that in 2025 the "hand-off" window between initial access and transfer to a secondary threat group collapsed to just 22 seconds, down from roughly 8 hours in 2022. CrowdStrike's 2026 Global Threat Report documented similar compression, with the average breakout time from initial access to exfiltration falling to 29 minutes.

IBM's most recent Cost of a Data Breach research puts the average time to identify and contain a breach in 2025 at 241 days, with an average cost of $4.88 million. That's a drop of roughly 14% from 2020, when the time to identify and contain a breach stood at 281 days. Those numbers have not improved at the pace security spending would suggest, despite that spending having roughly doubled over the same six-year window, nor have they kept up with the shrinking "breakout" and "hand-off" windows.

This isn't framed to scare defenders into chasing the hype. It's the operating reality. Money in, complexity in, but the curve from detection to investigation and containment barely moves.

SOC teams have already done the obvious efficiency moves. They tier severity. They auto-close known-benign alert classes. They suppress noisy detection rules. They tune. They route. That's not the problem.

The problem is that even after all of that work, the volume that lands on humans for actual investigation still exceeds what humans can investigate at the depth required. We’ve written an entire ebook on how the SOC queue is the breach, which you can download here.

In the deployments I've worked across, the post-tiering volume that hits human triage typically lands in the 120 to 150 alerts per day range. At 20 minutes per investigation including documentation, that's 40 to 50 analyst-hours daily. A SOC team of 5 to 10 analysts can cover that load during business hours only at the top of that staffing range, leaving the rest of the queue for the next shift, the next day, or never.
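The arithmetic above is easy to sanity-check for your own program. The sketch below is a back-of-the-envelope capacity model, not a benchmark: the alert volume, minutes per investigation, and productive hours per shift are illustrative assumptions you should replace with your own numbers.

```python
# Back-of-the-envelope SOC triage capacity model.
# All default values are illustrative assumptions, not benchmarks.

def daily_investigation_hours(alerts_per_day, minutes_per_alert=20):
    """Analyst-hours required to investigate every alert at depth."""
    return alerts_per_day * minutes_per_alert / 60

def coverage_gap(alerts_per_day, analysts, productive_hours=6.5,
                 minutes_per_alert=20):
    """Alerts left uninvestigated per day, assuming each analyst
    spends `productive_hours` of an 8-hour shift on triage."""
    capacity_minutes = analysts * productive_hours * 60
    demand_minutes = alerts_per_day * minutes_per_alert
    shortfall = max(0, demand_minutes - capacity_minutes)
    return int(shortfall // minutes_per_alert)

print(daily_investigation_hours(120))   # 40.0 analyst-hours
print(daily_investigation_hours(150))   # 50.0 analyst-hours
print(coverage_gap(150, analysts=5))    # alerts dropped daily: 52
```

Run it with your own post-tiering volume and team size; if `coverage_gap` returns anything above zero, that number is the queue segment your program is silently not investigating.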

That's the gap that doesn't close with more headcount. You can't hire enough analysts to investigate 100% of post-tiering volume at the depth the work requires. You can hire your way to better coverage at the margins. You cannot hire your way to the model change.

A diagnostic you can run on your own SOC

Before going further, run these four questions on your program. Honestly. The answers map your SOC capacity blind spots more reliably than any vendor pitch will.

1. What percentage of alerts above your defined investigation threshold did your team actually investigate last quarter? If less than 90%, you have a coverage gap that's hiding real risk. The gap exists because of how the work flows, not because anyone is dropping the ball. More headcount won't close it.

2. How many detection rules has your team suppressed in the last 12 months without an engineering ticket to replace the coverage? Suppressing noisy rules is healthy tuning. Suppressing them without follow-up engineering to replace what they were watching is debt. Each undocumented suppression is an attack surface you've stopped watching, and the threats those rules were designed to catch don't go away because you disabled them.

3. What was your senior analyst turnover last year, and how long did each replacement take to reach productive contribution? If turnover exceeds 15% or ramp exceeds 6 months, your bench is fragile. You're one resignation away from operational impact. Tribal knowledge walking out the door is a single point of failure most programs don't have a remediation plan for.

4. If alert volume doubled tomorrow, what's the first thing your team would stop doing? The honest answer is the part of your program that's already underwater. Whatever you'd cut first is what's currently holding on by a thread. That's where to focus the operating model conversation.

If three or more of these answers concern you, the productive conversation moves past hiring and into a different question: whether the architecture under your team can carry the program you actually want to run.

What changes when you fix the model

The teams making real progress aren't the ones hiring more analysts. They're the ones changing what work humans are required to do at all.

JB Poindexter & Co, an 8,500-employee diversified manufacturer, deployed Prophet AI in 2025. In the first 60 days, they ran 4,407 investigations through the platform with a mean time to investigate under 4 minutes.

That's 73 investigations per day at depth, against a Mandiant industry median dwell time measured in days. The deployment returned roughly 1,469 hours of analyst time to their team, equivalent to 6.3 analyst-years of investigation capacity at full annualization.

Their CISO, John Barrow, framed the outcome as "faster, more focused, and able to scale without adding immediate headcount."

The operating model shift in that sentence is what matters. Not "we hired more people." Not "we worked our existing people harder." The work no longer required the same number of people.

Cabinetworks ran 3,200 alerts through Prophet AI in 33 days. Six escalated to a human. The unexpected outcome was a 90% reduction in SIEM costs, primarily from no longer needing to ingest and store raw EDR and identity telemetry that had been pulled into the SIEM purely for analyst pivot queries.

When the AI handles those pivots directly against source systems, that ingest tier becomes optional. The line item that gets cut isn't the obvious one, and most teams don't model that secondary saving when they evaluate AI SOC tools. They should. For programs running enterprise SIEM contracts in the seven-figure range, the secondary savings often exceed the cost of the AI platform itself.

A second outcome worth noting: when the queue clears, teams stop having to ignore low and medium severity alerts. Most SOCs quietly stop investigating those classes under capacity pressure, even when their security leadership knows the coverage gap matters. A medium-severity alert isn't risky because it's medium.

It's risky because that's where real attackers hide while your team is buried in critical-severity noise. Bringing the medium and low tiers back into investigation scope is the coverage shift most teams want and very few can resource.

Every deployment requires two to four weeks of focused tuning before reaching steady state.

How CISOs are funding this

The piece a CISO is mentally writing while reading vendor content is the budget request. Where does this money come from?

Three patterns I've seen work, in order of CISO political difficulty.

Path one: Unfilled headcount budget. The cleanest funding path. The team has approved or pending headcount the program hasn't filled, and the AI platform replaces the need to hire that role. Fully loaded cost for a Tier 2 analyst typically runs $180K to $300K depending on market and seniority, which sets the floor for what the AI platform needs to displace to make the math work.

The JB Poindexter pattern fits here. The "scaling without adding immediate headcount"