A developer discovered you can use Claude Opus 4.5 in GitHub Copilot for essentially free. Microsoft's response? They marked the bug report "not planned" and closed it.
This isn't just a funny glitch. It's a window into how AI companies are struggling with a problem that will define the next decade of software: usage-based pricing is really, really hard.
The Exploit
Here's how it works. GitHub Copilot charges based on which model starts a chat session. But when you use subagents—AI calling AI—those nested calls don't get billed separately. A developer going by "Angry-Orangutan" (apt name) discovered you could start a session with free GPT-5 Mini, then configure a subagent to use premium Claude Opus 4.5. Every subsequent call runs through the expensive model. Cost: zero.
There's a second method: start with a premium model but force it into continuous loops via tool calls. In testing, a single query ran for three hours, invoked hundreds of Opus 4.5 calls, and cost 3 credits instead of hundreds.
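The mechanics are easier to see in code. Here's a toy sketch of the first method, with invented names and prices (Copilot's actual billing internals aren't public): if the meter prices an entire session by whichever model opened it, nested subagent calls to a premium model never register.

```python
# Toy model of session-start billing (hypothetical; not Copilot's real code).
# Billing keys off the model that STARTED the session, so nested subagent
# calls routed to a premium model are invisible to the meter.

PRICE = {"gpt-5-mini": 0, "claude-opus-4.5": 10}  # credits per call, illustrative

class Session:
    def __init__(self, starting_model: str):
        self.starting_model = starting_model
        self.calls: list[str] = []

    def invoke(self, model: str) -> None:
        self.calls.append(model)  # the compute actually runs on `model`

def bill(session: Session) -> int:
    # Bug: every call is priced as if it used the session's starting model.
    return len(session.calls) * PRICE[session.starting_model]

s = Session("gpt-5-mini")      # open the chat on the free model
s.invoke("gpt-5-mini")         # one cheap planning call
for _ in range(500):           # subagent fans out to the premium model
    s.invoke("claude-opus-4.5")

print(bill(s))  # 0 credits billed for 500 premium calls
```

The bug isn't in any single line; it's in the design decision that `starting_model` is the only billing-relevant fact about a session.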
The developer reported this to the Microsoft Security Response Center, which said billing bypass isn't its responsibility. A public bug report was closed as "not planned."
Why This Matters Beyond the Meme
On one level, this is hilarious. Microsoft, the company that bet billions on AI, can't secure their billing system. Classic enterprise comedy.
But there's a deeper lesson. AI pricing is genuinely hard because the usage patterns are fundamentally different from traditional software.
In SaaS, you charge for seats or features. Usage is bounded by human activity—there are only so many hours in a day. But AI systems can invoke other AI systems in loops, generating unbounded computational costs from a single user action.
Microsoft's billing system was apparently designed for the old world. It assumed a request is a request. It didn't account for recursive AI calls, agentic loops, or models invoking other models as tools.
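The alternative assumption looks like this: cost attaches to each invocation, not to the session. A minimal sketch (all names hypothetical, not Microsoft's design) meters every model call at the point it's served and charges it to the user who triggered the root request, so recursion can't escape billing.

```python
# Sketch of per-invocation metering (a hypothetical design, for illustration).
# Every model call, however deeply nested, is recorded against the user who
# triggered the root request, so agentic loops and subagents stay billable.

from dataclasses import dataclass, field

PRICE = {"gpt-5-mini": 0, "claude-opus-4.5": 10}  # credits per call, illustrative

@dataclass
class Meter:
    charges: dict = field(default_factory=dict)

    def record(self, user: str, model: str) -> None:
        # Charge at the call site, using the model that actually served it.
        self.charges[user] = self.charges.get(user, 0) + PRICE[model]

def run_agent(meter: Meter, user: str, model: str, depth: int = 0) -> None:
    meter.record(user, model)
    if depth < 3:  # the agent recursively delegates to a premium subagent
        run_agent(meter, user, "claude-opus-4.5", depth + 1)

meter = Meter()
run_agent(meter, "alice", "gpt-5-mini")  # free entry point, premium recursion
print(meter.charges["alice"])  # 30: three nested Opus calls, all billed
```

The hard part in a real system isn't this loop; it's propagating the billing identity through every service boundary the call tree crosses.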
The Pricing Problem Every AI Company Faces
If you're building AI products, you face the same challenge: how do you charge for something when a single user action can trigger wildly variable backend costs?
Consider the options:
Flat subscription: Simple for users, nightmare for you. One power user running agentic loops can destroy your unit economics. You end up rate-limiting, which kills the experience.
Per-query pricing: Fair but creates friction. Users hesitate before every invocation, reducing engagement and learning. Also, "per query" is ill-defined when queries can spawn subqueries.
Credit systems: The hybrid approach—give users a budget and let them allocate. GitHub Copilot does this. Problem: as this bug shows, credits can be gamed if your metering has holes.
Outcome-based pricing: Charge for value delivered, not computation used. Conceptually elegant but nearly impossible to implement. How do you measure the value of an AI-written function?
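To make the credit option concrete, here's a hedged sketch of a ledger (invented names, illustrative prices). The point is that credits only protect you if every call path reaches the ledger; any invocation that skips it is free compute, which is exactly the hole in the Copilot case.

```python
# Sketch of a credit ledger (hypothetical). A credit system is only as strong
# as its metering: every billable call must pass through `spend`.

COST = {"gpt-5-mini": 0, "claude-opus-4.5": 10}  # credits per call, illustrative

class CreditLedger:
    def __init__(self, budget: int):
        self.remaining = budget

    def spend(self, model: str) -> bool:
        cost = COST[model]
        if cost > self.remaining:
            return False          # out of budget: refuse the call
        self.remaining -= cost
        return True

ledger = CreditLedger(budget=25)
print(ledger.spend("claude-opus-4.5"))  # True: 15 credits left
print(ledger.spend("claude-opus-4.5"))  # True: 5 credits left
print(ledger.spend("claude-opus-4.5"))  # False: budget exhausted
print(ledger.spend("gpt-5-mini"))       # True: the free model still works
```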
Microsoft's Real Problem
The "not planned" response is revealing. Microsoft presumably knows about this bug now. They're choosing not to fix it. Why?
Likely because the fix is genuinely hard. Properly metering nested AI calls requires rearchitecting how Copilot tracks sessions. That's not a patch; it's a project. And the exploit probably isn't bleeding enough money (yet) to justify the engineering investment.
This is the unsexy reality of AI billing: the systems that seem simple to users are architecturally complex to implement correctly. Rate limiting is easy. Fair, ungameable metering is hard.
What Founders Should Learn
Design billing into your architecture from day one. Microsoft didn't, and now they're stuck with a system that leaks money. If your AI product has agents, subagents, tool calls, or any form of recursive invocation, your billing needs to account for it from the start.
Assume users will optimize around your pricing. Not maliciously—just rationally. If there's a way to get more value for less money, someone will find it. The Angry-Orangutan exploit is exactly what you'd expect from users given the system's design.
Plan for the long tail. Most users won't exploit your billing. But the ones who do will be disproportionately expensive. You need either technical controls or pricing structures that make exploitation uneconomical.
Watch how the big players solve this. Microsoft will eventually fix this. When they do, study the solution. They're facing the same problem you will at scale, with more resources to throw at it.
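One of the simplest technical controls mentioned above is a hard per-request budget, which would have cut off the three-hour runaway loop. A sketch (hypothetical names and numbers):

```python
# Sketch of a hard per-request budget (hypothetical): abort runaway agent
# loops once a request has spent its cap, instead of letting them run for hours.

COST = {"claude-opus-4.5": 10}  # credits per call, illustrative

class BudgetExceeded(Exception):
    pass

class RequestBudget:
    def __init__(self, cap: int):
        self.cap = cap
        self.spent = 0

    def charge(self, model: str) -> None:
        self.spent += COST[model]
        if self.spent > self.cap:
            raise BudgetExceeded(f"request exceeded {self.cap}-credit cap")

budget = RequestBudget(cap=50)
calls = 0
try:
    while True:  # a tool-call loop that would otherwise run indefinitely
        budget.charge("claude-opus-4.5")
        calls += 1
except BudgetExceeded:
    pass
print(calls)  # 5: the loop stops at the cap, not after three hours
```

A cap like this doesn't fix metering, but it bounds the damage while you build metering that's actually correct.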
The Broader Point
We're in the messy middle of AI productization. The technology works, but the business models haven't caught up. Usage-based pricing is theoretically correct—you should pay for what you use—but practically difficult to implement.
The companies that figure out fair, ungameable AI pricing will have a genuine competitive advantage. Not because pricing is glamorous, but because it's the difference between a sustainable business and one that bleeds money to power users.
Microsoft's "not planned" is an admission: they don't have this solved. Neither does anyone else. If you're building in AI, this is your problem too.