Last week, Anthropic pushed an update to Claude Code that collapsed file operation outputs. Instead of showing which files the AI was reading, editing, or writing, users saw a cryptic summary: "Read 3 files (ctrl+o to expand)."

Within hours, the developer community erupted. GitHub issues piled up. Hacker News threads caught fire. And within days, Anthropic's head of Claude Code was personally responding to complaints, promising changes.

The real lesson isn't that one UI decision went wrong. It's about a fundamental tension that every founder building AI tools will face: the tradeoff between simplicity and transparency.

What Actually Happened

Claude Code version 2.1.20 changed how the tool displayed its actions. Previously, developers could see exactly which files Claude was accessing and how many lines it was reading or modifying. The new version collapsed all of this into a generic summary.

Boris Cherny, the creator and head of Claude Code at Anthropic, defended the change on GitHub. "This isn't a vibe coding feature," he wrote. "It's a way to simplify the UI so you can focus on what matters, diffs and bash/mcp outputs."

The response was brutal. Developers pointed out that seeing file names wasn't noise—it was the information they needed to catch mistakes before they spiraled. One user wrote: "It's not a nice simplification, it's an idiotic removal of valuable information."

Why Developers Were So Angry

The frustration ran deeper than a UI preference. Developers listed concrete reasons why file visibility mattered. Security was the obvious one—knowing what files an AI accesses is basic operational hygiene. But the more common complaint was economic.

When Claude Code starts reading the wrong files or pulling context from irrelevant parts of a codebase, it burns tokens. Lots of them. If developers can see that happening in real-time, they can interrupt and redirect. Hidden, those mistakes compound.

"I can't tell you how many times I benefited from seeing the files Claude was reading, to understand how I could interrupt and give it a little more context… saving thousands of tokens," one developer explained on Hacker News.

Another put it more bluntly: "Right now Claude cannot be trusted to get things right without constant oversight and frequent correction, often for just a single step. For people like me, this is make or break. If I cannot follow the reasoning, read the intent, or catch logic disconnects early, the session just burns through my token quota."

The Fix That Wasn't

Anthropic's initial response was to point developers to "verbose mode." The problem? Verbose mode dumps everything—full thinking processes, hook outputs, subagent outputs. It's not a middle ground; it's a firehose.

The company eventually repurposed the verbose setting to show file paths without the full diagnostic dump. It was a compromise, but not a reversal. The default behavior still hides file operations. Developers who want visibility have to configure their way out of the simplified experience.

Cherny's explanation on Hacker News revealed the reasoning: "Claude has gotten more intelligent, it runs for longer periods of time, and it is able to more agentically use more tools… The amount of output this generates can quickly become overwhelming in a terminal."

The Founder Lesson Here

This episode illustrates a classic product mistake: optimizing for the demo instead of the workflow.

Collapsed outputs look cleaner in screenshots and recordings. They make AI tools feel more magical—you give it a task, it handles everything behind the scenes, and results appear. That experience sells.

But professional users don't want magic. They want control. They want to understand what's happening so they can course-correct when things go wrong. And with AI tools, things go wrong constantly.

The irony is that Anthropic—a company built on AI safety and transparency as core values—shipped a feature that made their AI less transparent. The developers who noticed immediately were exactly the users Anthropic should care most about: the ones paying attention, using the tool seriously, and expecting to maintain oversight.

What This Means For Your AI Product

If you're building AI tools, this controversy should inform your design decisions. The urge to simplify is real and often correct. Users are overwhelmed by information, and hiding complexity can make products more approachable.

But there's a category of AI interaction where that logic breaks down. When AI takes actions with real consequences—modifying code, accessing data, making decisions—users need visibility. Not because they distrust AI, but because they know AI makes mistakes. And the cost of those mistakes grows when you can't catch them early.

The successful AI tools of the next few years will be the ones that figure out progressive disclosure: showing enough information by default that users maintain situational awareness, while keeping the interface from becoming overwhelming. That's harder than picking a side. But it's the right problem to solve.
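In a terminal tool, progressive disclosure can be as simple as a tiered verbosity setting: a one-line summary by default, file paths and line counts one level up, and the full diagnostic firehose only on request. The sketch below is purely illustrative — the `Verbosity` levels, `FileOp` shape, and `render` function are hypothetical names of my own, not Claude Code's actual internals:

```python
from dataclasses import dataclass
from enum import IntEnum

class Verbosity(IntEnum):
    SUMMARY = 0  # collapsed: "N file operation(s)" — the contested default
    PATHS = 1    # file names and line counts, no diagnostics
    FULL = 2     # everything, including hook/subagent diagnostics

@dataclass
class FileOp:
    action: str          # "read" | "edit" | "write"
    path: str
    lines: int
    diagnostics: str = ""

def render(ops: list[FileOp], level: Verbosity) -> str:
    """Render a batch of file operations at the chosen disclosure level."""
    if level == Verbosity.SUMMARY:
        return f"{len(ops)} file operation(s) (expand for detail)"
    out = []
    for op in ops:
        out.append(f"{op.action} {op.path} ({op.lines} lines)")
        if level == Verbosity.FULL and op.diagnostics:
            out.append(f"  {op.diagnostics}")
    return "\n".join(out)
```

The point of the middle tier is exactly the compromise developers asked for: enough signal to catch Claude reading the wrong files, without the full dump of thinking and hook output.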

Anthropic, to its credit, responded quickly and adjusted. But the fact that this design shipped at all suggests even the most safety-conscious AI companies can lose sight of user needs when chasing polish. It's a reminder that the people building AI tools should be using them seriously—and that feedback channels need to exist before updates ship, not just after.

The developers who complained weren't edge cases. They were the core audience. And they noticed immediately.