Apple just patched a security vulnerability that has existed in iOS since version 1.0. For context, iOS 1.0 shipped with the original iPhone in 2007. This bug has been present in every iPhone ever made for nearly two decades—and someone found it before Apple did.
That someone wasn't a security researcher doing the right thing. It was commercial spyware companies selling exploit chains to government clients.
If you're building software, this is a masterclass in security debt and what happens when you don't pay it down.
What Got Patched
CVE-2026-20700 affects dyld, Apple's dynamic linker—the component that loads and links shared libraries when applications launch. In security researcher Brian Milbier's analogy, dyld is "the doorman for your phone." Every app must pass through it before running.
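The same dynamic-loading idea dyld implements can be illustrated outside Apple's toolchain: ask the OS loader to map a shared library into the running process and resolve a symbol at runtime. A minimal Python sketch, assuming a Linux system where the C math library is available as `libm` (the library name and fallback path are assumptions, not part of the iOS story):

```python
import ctypes
import ctypes.util

# Ask the platform's dynamic loader for the C math library.
# find_library resolves the name; CDLL maps the library into this
# process and makes its symbols callable -- roughly the job dyld
# performs for every library an app links against.
libm_path = ctypes.util.find_library("m")  # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(libm_path or "libm.so.6")

# Resolve the `cos` symbol and declare its C signature.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Because this resolution happens before any application code runs, a bug at this layer sits underneath every app on the device, which is what makes dyld such a valuable target.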
The vulnerability allows an attacker with memory write capability to execute arbitrary code. Combined with WebKit flaws also patched in iOS 26.3, attackers created what Milbier calls "a 'zero-click' or 'one-click' path to total control."
Apple's advisory is notably restrained: the flaw was exploited in "an extremely sophisticated attack against specific targeted individuals." Translation: commercial spyware companies have been using this for who knows how long.
The Commercial Spyware Ecosystem
This vulnerability didn't emerge from some basement hacker. Google's Threat Analysis Group discovered it while tracking commercial surveillance vendors—companies like NSO Group (Pegasus) and Intellexa (Predator) that develop and sell exploit chains to government clients.
These aren't theoretical threats. Pegasus has been used to target journalists, activists, and dissidents worldwide. Predator was deployed against Greek politicians. The commercial spyware market is estimated to be worth billions, with dozens of vendors competing to find and weaponize vulnerabilities exactly like CVE-2026-20700.
For nineteen years, these companies had a vulnerability to exploit in every iPhone ever manufactured. Some of them almost certainly found it before Apple did.
What This Teaches Founders About Security Debt
Ancient code is still running. Your startup's codebase includes libraries, frameworks, and dependencies that were written years ago. Some of that code contains vulnerabilities nobody has found yet. The longer code sits unaudited, the more likely sophisticated attackers have already discovered what you haven't.
Core components are the highest-value targets. Dyld isn't a flashy feature—it's plumbing. But because every application depends on it, a vulnerability there affects everything. Your authentication system, your payment processing, your data storage—whatever is foundational to your product is also your highest-risk attack surface.
The people who find bugs aren't always friendly. Apple has a world-class security team. They run bug bounty programs. They perform internal audits. And still, a nineteen-year-old vulnerability was discovered and weaponized by commercial spyware vendors before Apple found it. If Apple can miss something this fundamental for this long, so can you.
The Sophisticated Attacker Problem
Most security advice focuses on protecting against commodity threats—credential stuffing, phishing, script kiddies running automated scans. That's necessary but insufficient.
The commercial spyware industry employs some of the best security researchers in the world. They're well-funded, patient, and motivated by profits measured in the tens of millions per exploit chain. They're not looking for easy vulnerabilities—they're hunting for exactly the kind of deep, subtle bugs that internal security teams miss.
If your startup handles sensitive data, high-net-worth individuals, journalists, activists, or anyone a government might want to surveil, you're potentially in their threat model. Not because you're important, but because your users might be.
Practical Implications
Audit your ancient code. That utility function written in 2019? The authentication logic from your MVP? The payment integration you haven't touched in years? Schedule time to review it with fresh eyes, ideally someone who didn't write it originally.
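One low-effort way to build that review queue is to surface the files nobody has touched in years. A minimal sketch, using filesystem modification times as a rough stand-in (per-file `git log` dates are the more accurate source if the tree is a repository):

```python
import time
from pathlib import Path

def stale_files(root: str, max_age_years: float = 3.0):
    """Return files under `root` whose modification time is older
    than `max_age_years`. mtime is a rough proxy for "last reviewed";
    git history is more accurate when available."""
    cutoff = time.time() - max_age_years * 365 * 24 * 3600
    return sorted(
        str(p) for p in Path(root).rglob("*")
        if p.is_file() and p.stat().st_mtime < cutoff
    )
```

Run it against your source tree and put the oldest files at the top of the audit schedule.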
Assume your dependencies have vulnerabilities. The iOS bug was in system-level code that application developers couldn't even access. Your dependencies—the frameworks, libraries, and tools you build on—contain bugs you can't find because you can't see the code. Keep them updated. Monitor security advisories. Have a plan for when critical patches drop.
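"Monitor security advisories" can be partly automated: keep a floor version per dependency (the first release that fixed a published CVE) and flag anything installed below it. A sketch using only the standard library; the package names and floor versions here are placeholder examples, and the version parsing is deliberately naive (real code should use `packaging.version` or a tool like `pip-audit`):

```python
from importlib.metadata import version, PackageNotFoundError

# Hypothetical floor versions, e.g. the first release that fixed a
# published advisory. Maintain this list from your advisory feed.
MINIMUM_SAFE = {
    "pip": "21.1",  # example entry; substitute your own dependencies
}

def parse(v: str):
    """Naive parse: handles plain dotted numeric versions only
    (no pre-release tags). Use packaging.version in real code."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def audit():
    """Yield (name, installed, floor) for packages below the floor."""
    for name, floor in MINIMUM_SAFE.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            continue  # not installed, nothing to flag
        if parse(installed) < parse(floor):
            yield (name, installed, floor)

for name, installed, floor in audit():
    print(f"{name}: {installed} < {floor} (update needed)")
```

The point is less the mechanism than the habit: the check runs in CI, so a critical patch shows up as a failing build rather than a newsletter you skimmed.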
Design for breach. If sophisticated attackers can sit on a vulnerability for years before anyone notices, assume they're already in your systems. Implement defense in depth. Encrypt data at rest. Limit blast radius through segmentation. Monitor for anomalous behavior even in production systems you consider "secure."
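"Monitor for anomalous behavior" can start very simply. A sketch of one common baseline: flag a count that sits more than k standard deviations above its historical mean. The metric (per-account hourly request counts) and the threshold are illustrative assumptions, not a production detector:

```python
import statistics

def anomalous(history, current, k=3.0):
    """Flag `current` if it is more than `k` standard deviations
    above the mean of `history`, a list of past per-interval counts
    (e.g. API calls per account per hour)."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current > mean  # flat baseline: any increase is unusual
    return (current - mean) / stdev > k

# Typical hourly request counts for one account, then a spike.
baseline = [102, 98, 110, 95, 105, 99, 101, 97]
print(anomalous(baseline, 104))   # False: within normal variation
print(anomalous(baseline, 400))   # True: far above baseline
```

A detector this crude will not catch a patient nation-state operator, but it costs almost nothing and turns "we had no idea" into "we saw something odd on day one."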
Consider your users' threat models. A bug in your B2B SaaS that serves small businesses is different from a bug in your app that serves journalists or activists. Know who your users are and what threats they face. Some users attract nation-state attention; design accordingly.
The Timeline That Should Scare You
Here's the timeline that matters:
2007: iOS 1.0 ships with dyld vulnerability
2007-2026: Vulnerability exists in every iPhone, iPad, and iPod Touch manufactured
Unknown date: Commercial spyware vendors discover and weaponize the vulnerability
Unknown duration: Targeted individuals are compromised using the exploit
2026: Google's TAG discovers the exploit in the wild
February 2026: Apple patches the vulnerability
The gap between "vulnerability exists" and "vulnerability discovered by defenders" was nineteen years. The gap between "sophisticated attackers find it" and "patch available" could have been years as well.
That's the security debt timeline. Not the theoretical "someone might find this eventually" timeline—the actual "someone did find this and weaponized it against real people" timeline.
The Bottom Line
Apple is one of the most security-conscious companies in tech. They have virtually unlimited resources for security investment. They still shipped a bug in 2007 that wasn't caught until 2026, after commercial spyware companies had already exploited it against targeted individuals.
Your startup has fewer resources, less expertise, and probably more technical debt per line of code. The vulnerabilities in your codebase aren't just theoretical—they're economic opportunities for attackers with more time and motivation than your security budget can match.
iOS 26.3 closes a door that was unlocked for nearly two decades. What doors are still open in your codebase?