The Price of Feeling Capable
If there’s one thing AI products do really well, it’s making you feel like you can do more.
The current discourse on AI usually centers on productivity, automation, displacement, reasoning, safety, and the cost of producing work. But one of the most important things this technology changes is the emotional threshold required to start something.
A person who does not know how to code can now make something that resembles software. It’s a step up from the WYSIWYG interface: before, if you didn’t know how to code, you could drag and drop blocks around a screen and rely on product infrastructure to turn what you saw into something real. Now, the hottest new programming language is good ol’ English. A founder can sit alone with a laptop and feel, for a while, like a small team. My mom can type “Make me a new website for my consulting business” and the result will actually look pretty good. This is not inherently bad. A lot of people were kept from building not because they lacked ideas (“I can handle the business side”), curiosity, or taste, but because the distance between imagination and execution was too large. AI collapses that distance. It gives people permission to try and the resources to succeed at things they might not have attempted before.
But confidence is becoming a commodity. And commodities can be priced out of reach.
The cost of building software feels like it is going to zero. I think that framing gets the next phase of AI wrong. The current moment feels cheap because we are still living inside a subsidy regime. Frontier models are expensive to train, expensive to serve, and expensive to improve, but the user-facing price of their intelligence is being held far below what the capability is worth. Millions of people are accessing systems that compress enormous amounts of technical and operational knowledge, at a fraction of what that capability costs to provide.
The instability of that price became visible a few weeks ago. On April 21, Anthropic quietly updated its pricing page to remove Claude Code (its agentic coding tool) from the $20 Pro plan, restricting it to Max tiers starting at $100. There was no announcement or changelog entry. Developers caught it by diffing archived versions of the documentation. The company reversed the change within a day after public backlash, but its head of growth clarified that a 2% experimental cohort of new signups was still affected, and that any future pricing changes for existing subscribers would come with advance email notice. The implication was that more changes were coming. Current plans, he said, “weren’t built for this.”
Even at $100, I still think this is an extraordinary deal for what a frontier coding agent can do. What surprised me was the speed and silence with which a 5x floor could appear, and the way the public reacted. Simon Willison, who has spent over a hundred posts teaching people how to use Claude Code, immediately worried about the journalists and students he had been recommending it to. The tool had become part of his pedagogy, and when the price increases by 5x, a non-zero cohort of people is suddenly priced out. This looks to me like a preview of how the “confidence subsidy” ends. People get used to a level of help that makes them feel newly capable, and then the best version of that help starts getting rationed, metered, or moved into higher tiers.
Access to frontier intelligence has changed what people believe they are capable of. Because with it at our fingertips, we really can do more. But over time, we begin to incorporate rented intelligence into our sense of self. At first, paying for access feels optional, almost like a boost. Then it becomes part of how the work gets done. Eventually, losing access to frontier models, or resources like Codex/Claude Cowork/ChatGPT, would mean shrinking back to a version of yourself that now feels less capable.
The obvious objection here is that all human capability is mediated by tools. Nobody looks at a skyscraper and says the builders were frauds because they used cranes, CAD software, or prefab materials. But AI is strange in a way cranes are not. It can extend execution while also simulating the judgment that would normally tell you whether the execution is any good. A crane does not persuade you that you understand load-bearing structures. A frontier model can produce an architecture, a contract, a security policy, or a migration plan in a way that feels like understanding has “entered the room”, even when the person accepting the output cannot evaluate the assumptions underneath it.
Over time, vigilance starts to feel like friction. AI does not usually ask you to stop and understand something. It simply makes understanding feel optional. The artifact is there, the tests pass, the pages load. At that point, continuing to investigate feels like slowing yourself down. This is easy to justify once, and easy to repeat. The model does not have to make you ignorant. It only has to make ignorance productive. It is easy to say that people should stay in the loop, review everything, audit every output. But the whole point of these systems is that they reduce the cost of producing work. The more carefully you supervise them, the more of that advantage you give back. The temptation is built into the product experience.
Let’s revisit the idea that the current pricing of intelligence is unlikely to last.
The familiar analogy here is the ZIRP era in tech. ZIRP, or “zero interest rate policy,” refers to the long period when money was unusually cheap to borrow and investors were pushed toward riskier, higher-growth assets. In tech, that meant companies could raise money, hire aggressively, and defer hard questions about profitability for much longer than they otherwise could. It became difficult to distinguish durable businesses from businesses that were simply downstream of cheap capital.
What ZIRP and the token-subsidized AI era share is that cheap inputs change behavior before anyone admits they are cheap inputs. During ZIRP, cheap capital made certain company-building habits feel natural: hiring ahead of revenue, growing before proving unit economics, treating burn as momentum. In AI, cheap or bundled frontier access may be doing the same thing to work itself. It encourages people to build habits around a level of cognitive leverage that may not remain equally available.
But what can you do? The healthiest relationship to AI, in my mind, is not to refuse the subsidy. That would be unrealistic, probably wasteful, and arguably ungrateful for an unusual moment. The better response is to treat subsidized intelligence as an opportunity to convert access into understanding while the conversion is still cheap.
When the model picks an architecture, the cost of asking why is small, and “industry standard” is not a real answer. When it writes code you would not have written, the cost of running it once with the explanation visible is small. When it makes a decision you would not have known how to make, the useful question is not whether the decision was right (honestly, the model is usually right) but whether you would have caught it if it were wrong.
This probably feels obvious, but prompting to learn and prompting to bypass feel different. The first leaves you with a question you did not have before. The second leaves you with an artifact and a small flicker of relief that the work is done. Over weeks, the ratio between those two modes is roughly the ratio between capability you are internalizing and capability you are renting.
None of this is about working slower for its own sake. Everyone should feel entitled to build a workflow that’s representative of their own craft. It’s important, now more than ever, to be honest with yourself about which parts of your judgment are getting stronger and which parts are simply getting bypassed or starting to atrophy. When confidence becomes a commodity, the question is not how many people can afford it during the subsidy.
The question is what remains when the price goes up.
As part of this post, I wanted to include a prompt you can add to your local agent system prompts to encourage learning as you use these tools. Copy it into CLAUDE.md, ChatGPT’s custom instructions, and other surfaces where you leverage AI in your daily workflow. It’s not perfect, but it’s been helpful for me!
## Make me smarter as we work
Your job is not only to complete the task, but to help me understand the work well enough that I am more capable afterward.
Use the spirit of Feynman's line, "What I cannot create, I do not understand." Do not just hand me finished answers. Help me understand the pieces well enough that I could recreate, adapt, or challenge them myself.
When you make a meaningful decision, briefly explain the reason for it. Do not narrate every obvious step, but do call out the choices that affect quality, accuracy, strategy, clarity, risk, or long-term usefulness.
Prefer explanations that build my mental model:
- Explain why this approach fits the situation.
- Name the tradeoff you are making.
- Point out the assumption you are relying on.
- Tell me what would make you choose differently.
- If you use a framework, method, source, or specialized concept, explain the relevant idea in one or two sentences.
- If you correct a mistake, explain the actual cause, not just the fix.
- If you are unsure, say what you are unsure about and how you are reducing that uncertainty.
Do not turn every response into a tutorial. Keep momentum. But when the work involves judgment, take me along for the ride.
After substantial work, include a short "What changed / Why it works / What to watch" summary. The goal is that I can review, maintain, adapt, and extend the work without treating your output as magic.
If I ask you to move fast, you can be concise, but do not silently skip important reasoning. If I ask you to teach, slow down and make the reasoning explicit.
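If you use Claude Code specifically, a quick way to wire this in is to append the block to a CLAUDE.md file. Claude Code reads a global ~/.claude/CLAUDE.md as well as a CLAUDE.md at your project root; the script below is just a convenience sketch under that assumption, with the prompt body abbreviated, so paste in the full block above.

```python
# Convenience sketch: append the "Make me smarter as we work" block to a
# global CLAUDE.md. The ~/.claude/CLAUDE.md and ./CLAUDE.md locations are
# Claude Code conventions; adjust the path for other tools.
from pathlib import Path

PROMPT = """
## Make me smarter as we work
Your job is not only to complete the task, but to help me understand the work
well enough that I am more capable afterward.
(paste the rest of the block above here)
"""

target = Path.home() / ".claude" / "CLAUDE.md"
target.parent.mkdir(parents=True, exist_ok=True)  # create ~/.claude if missing
with target.open("a", encoding="utf-8") as f:     # append, don't overwrite
    f.write(PROMPT)
print(f"Appended learning prompt to {target}")
```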