
The Printing Press Moment: Coding Is Solved, Now What?
Boris Cherny built Claude Code. He hasn't written a line of code by hand since November 2025. He ships 10-30 PRs per day. If coding is solved, what's left? Expertise, speed, distribution, taste.
Boris Cherny built Claude Code. He leads the team at Anthropic that created the tool responsible for 4% of all GitHub commits (and growing to an estimated 20% by year-end).
He hasn't written a single line of code by hand since November 2025.
He ships 10-30 pull requests per day.
"At this point it's safe to say coding is largely solved."
That's not a hot take from a LinkedIn influencer. That's the person who built the tool, using the tool, at the company that made the tool. And the data backs him up: Anthropic's engineering productivity increased 200% since Claude Code's introduction.
So if coding is solved, what's left?
The Printing Press Analogy
Boris draws a parallel that's hard to ignore.
Before the printing press, less than 1% of the population could read. Scribes spent years learning the craft of copying text by hand. Literacy was a rare, valuable, gate-kept skill.
Gutenberg's press dropped the cost of producing text by 100x over 50 years. Within one generation, more material was printed than in the previous thousand years combined.
A scribe's firsthand reaction to the printing press survives from the 1400s: excitement. "Now I can focus on the art, not the copying."
He thought he'd be freed from tedious transcription to do the creative work. Instead, within a generation, the concept of "professional scribe" ceased to exist. Literacy spread from 1% to 70% over two centuries.
What's Actually Happening to Engineering
Boris ships 10-30 PRs per day. Not by typing faster. By delegating the entire act of coding to Claude Code.
His workflow:
- 100% of code written by Claude Code
- Always runs 5+ agents in parallel
- Starts ~80% of tasks in plan mode
- Auto-accepts edits after plan approval
- Uses Opus with maximum effort always enabled
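Some of these defaults can be pinned in a project's Claude Code settings file instead of being re-specified every session. A minimal sketch, assuming a `.claude/settings.json` at the project root; key names and supported values may vary by version, so treat this as illustrative rather than authoritative:

```json
{
  "model": "opus",
  "permissions": {
    "defaultMode": "plan"
  }
}
```

With `defaultMode` set to `plan`, sessions start in plan mode; switching the session to auto-accept after approving a plan mirrors the "auto-accepts edits after plan approval" step above.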
This isn't "AI-assisted coding." This is coding delegated to AI with human review.
At Anthropic, everyone codes now. The PM codes. The engineering manager codes. The designer codes. The finance person codes. The data scientist codes. Not because they learned programming. Because the tool handles the programming.
The title "software engineer" is dissolving into "builder": someone who knows what to build and can direct AI to build it.
The Five Principles
Boris distills what he's learned into five principles for building in this era:
1. Latent Demand
Build where people already are. Watch how they "abuse" your product for unintended uses. 40% of Facebook group posts were buy/sell before Facebook Marketplace existed. The demand was latent, not absent.
The modern version: watch what the MODEL is trying to do and make that easier.
2. The Bitter Lesson
Rich Sutton's principle: general methods that leverage computation ultimately beat approaches built on hand-engineered specifics. Don't box the model in with strict workflows, fine-tuning, or orchestration scaffolding. Give it tools and a goal. Let it figure out the approach.
Scaffolding may improve results by 10-20%. But the next model wipes out that advantage. Build for the general case, not the specific hack.
3. Build for the Model Six Months Out
When Claude Code launched, it wrote ~20% of Boris's code. He bet it would reach 100%. It did.
Product-market fit will be uncomfortable for the first six months. The model isn't good enough yet. But when it catches up — and it always catches up faster than you expect — you hit the ground running while everyone else is still prototyping.
4. Underfund Things a Little
Put one engineer on a project. Not three. Not five. One person with intrinsic motivation plus AI ships faster than a coordinated team.
Some Anthropic engineers spend hundreds of thousands of dollars per month in API tokens. That's fine. Optimize cost after the idea is validated, not before.
5. Speed Over Everything
"If you can do something today, do it today."
Early Claude Code's only competitive advantage was speed. Boris would ship fixes for user-reported issues within minutes of receiving the feedback. Fast feedback cycles encourage more feedback. Momentum compounds.
The 8-to-80 Framework
Marc Andreessen on the same podcast:
"The really great people are becoming spectacularly great. They're not twice as good — they're 10 times as good."
| Skill Level | With AI | Output |
|---|---|---|
| Zero | Goes from 0 to 1 | A basic app anyone could build |
| Average | Goes from 3 to 30 | Competent work |
| Good | Goes from 5 to 50 | Strong work |
| Great | Goes from 8 to 80 | Spectacularly great work |
The non-coder goes from 0 to 1 with Claude. Impressive? Yes. Valuable? Sometimes. Differentiated? No, because everyone can do it now.
The expert goes from 8 to 80. Same tool. Completely different ceiling. The tool multiplies what you already have. Zero times anything is still zero.
What's Left When Coding Is Solved
If coding is handled, what does the "builder" of 2026-2030 actually do?
**Knows what to build.** The hardest part was never writing the code. It was deciding what to write. Understanding the customer problem. Prioritizing which feature matters. Saying no to 90% of ideas.
**Judges quality.** AI generates code. Someone needs to decide if it's good code. Not syntactically correct — any linter handles that. Good in the architectural sense. Will this scale? Is this secure? Does this handle the edge case that will hit in production?
**Understands the domain.** The AI can build an IVF clinic management system. It cannot decide what features an IVF clinic actually needs. It cannot tell you that the nurse's workflow is different from the doctor's. It cannot know that trigger shot timing is measured in minutes, not hours.
**Designs the experience.** AI can generate a UI. It cannot decide whether the UI is right for the user. It cannot feel the frustration of a patient navigating a confusing booking flow. It cannot judge whether the notification at 3 AM is helpful or annoying.
**Makes judgment calls under uncertainty.** Should we build for 100 users or 10,000? Should we use this API or that one? Should we charge per seat or per outcome? These are decisions that require context, experience, and tolerance for ambiguity. AI can analyze options. It can't own the decision.
The Uncomfortable Implication
If coding is solved and everyone can build, then building is no longer differentiating.
What differentiates:
- Knowing the right thing to build (domain expertise)
- Building it before anyone else (speed)
- Building it for the right people (distribution)
- Judging whether it's actually good (taste)
Four words: expertise, speed, distribution, taste.
The printing press didn't make everyone a great writer. It made great writers wildly more impactful. The same writers who would have reached hundreds of readers now reached millions.
AI won't make everyone a great builder. It will make great builders wildly more impactful. The same builder who would have shipped one product now ships ten.
The question is not "will AI take my job?" The question is "am I the 8 that AI makes 80, or the 0 that AI makes 1?"
Built from Boris Cherny's interview on Lenny's Podcast (Feb 2026), Marc Andreessen's commentary, and Anthropic's internal productivity data.