The Art of Destroying Software 🥋
The code you wrote today is probably wrong.
There’s a screaming truth that the software industry has been resisting with admirable consistency for decades: the code you wrote today is probably wrong.
Not because you’re a bad programmer. But because when you started, you couldn’t predict what you were going to discover along the way. The problem with software isn’t writing it — it’s that you generally have to write it before you fully understand the problem you’re solving. And by the time you finally do understand it, you’ve already got several layers of abstraction and interdependencies on top of it that make changing anything painful.
The industry’s usual solution to this is more abstraction. More layers. More design patterns with Latin names. An architecture so intricate and complex that debugging it feels like archaeology — digging up bones and trying to figure out what the original programmer had in mind.
But there’s a much simpler solution: what if we just accept that the code we write today is going to be wrong, and design our software to be easy to destroy and rewrite as many times as necessary?
The Talk Nobody Wants to Give
Greg Young has a talk called “The Art of Destroying Software.” It’s one of the most honest and courageous talks I’ve seen. In this article I’ll summarize the most important points based on my own experience and interpretation.
Young opens the talk by asking how many people have attended a conference talk about deleting code. The silence in the room says it all.
There are thousands of talks about how to write it. How to refactor it. How to test it. About architectures guaranteed to outlive you. But about deleting it? Nothing. As if code were a sacred investment to be protected at all costs — not a hypothesis about a problem we don’t fully understand yet.
Young brings up the “Big Ball of Mud” paper and says something uncomfortable: that big ball of mud isn’t a design failure. It’s the natural state of software under real economic pressure. If your system isn’t a big ball of mud, maybe you just don’t have enough users for it to matter.
And the solution the paper proposes isn’t what you’d expect either — not “improve your architecture.” It’s “create small balls of mud inside the big one.” Manageable islands. Zones where you can walk in, burn everything down, and walk out in a reasonable amount of time.
That’s the principle: optimize for deletion.
The Metric Nobody Measures
Young proposes a rule that sounds simple but that almost nobody actually applies: no part of your system should take more than one week to rewrite from scratch.
Not as a sprint goal — as a design criterion.
If a component takes more than a week to rewrite, that component is a risk zone. Not because the code is bad, but because you no longer control it — it controls you. And when a new feature arrives that doesn’t fit the current model, you’ll spend two weeks fighting the model instead of two days writing a new one.
Young tells the story of working on Event Store DB, realizing his model didn’t support what he needed to add, and rewriting the entire backend in two days. Two days. His estimate: if he’d tried to adapt the existing code, it would have been two weeks of suffering to reach the same place.
The right question isn’t “how much did this cost to write?” It’s “how much would it cost to delete it and start over tomorrow?” If that answer scares you, you’ve already lost control of your system.
Micro Is Not a Size, It’s a Guarantee
When the industry adopted microservices, it took the word “micro” and, with its characteristic creativity, interpreted it as a team responsibility constraint. Each team has its service, each service has its repo, each repo has its pipeline. The carving criterion wasn’t the domain but the org chart: if there’s a “payments” team, there’s a payments service. If there’s a “users” team, there’s a users service. Conway’s Law dressed up as architecture. They debated at conferences, wrote books, drew diagrams with arrows connecting boxes named after teams. Mission accomplished.
That’s missing the point entirely.
“Micro” doesn’t describe size or complexity. It describes the guarantee: this service has one single reason to exist (single responsibility), and therefore one single reason to change. If the business changes tomorrow, only one specific part of the system needs to move. The rest stays intact. And if the part that moves turns out to be unrecognizable under the new requirements, you delete it and rewrite it. In a week.
A service with two responsibilities can’t be half-deleted. When you try to rewrite “the authentication part” you discover it’s tangled with “the user profile part” which shares data with “the notifications part.” What looked like minor surgery ends up being an organ transplant with the patient wide awake.
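In code, the tangling looks something like this. A hypothetical sketch (the class names are mine, not Young's): one service owning both authentication and profiles, shared state and all, against the same two responsibilities split so that either one can be deleted on its own.

```python
# Hypothetical sketch: a service with two responsibilities sharing one
# data structure. Deleting "the authentication part" means untangling
# it from "the profile part" first.
class UserService:
    def __init__(self):
        self.users = {}  # both responsibilities live in this dict

    def register(self, email, password):
        # hash() stands in for real password hashing, for illustration only
        self.users[email] = {"pw": hash(password), "display_name": email}

    def authenticate(self, email, password):
        return email in self.users and self.users[email]["pw"] == hash(password)

    def set_display_name(self, email, name):
        self.users[email]["display_name"] = name


# Split by responsibility: each piece owns its own data, and either one
# can be deleted and rewritten without touching the other.
class AuthService:
    def __init__(self):
        self.credentials = {}

    def register(self, email, password):
        self.credentials[email] = hash(password)

    def authenticate(self, email, password):
        return self.credentials.get(email) == hash(password)


class ProfileService:
    def __init__(self):
        self.profiles = {}

    def set_display_name(self, email, name):
        self.profiles[email] = name
```

The split version is more files, not less code. What it buys you is the surgery: `AuthService` can be deleted whole.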
Single responsibility isn’t an aesthetic principle of clean code. It’s the necessary condition for deletion to be possible.
The Refucktor
One of the best parts of the talk is when Young dismantles refactoring as practiced by 90% of the industry.
His definition: a refactor is when you change either your code or your tests. One of the two. The one you don’t touch is your measurement — the evidence that you didn’t break anything.
What most people do: change code and tests at the same time. Young has a name for that: refucktor. It’s not a refactor — it’s a rewrite with the fear of admitting you’re rewriting.
And the best part: modern refactoring tools actively push you toward the refucktor. They offer automatic changes that touch code and tests in the same move. Convenient. And completely useless as a validation mechanism.
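The distinction is easy to show in miniature. A hypothetical sketch (not from the talk): the test is written once and never touched, the implementation changes underneath it, and the unchanged test is what proves nothing broke.

```python
# The measurement: written once, NOT touched during the refactor.
def test_total():
    assert order_total([("apple", 2, 1.50), ("bread", 1, 3.00)]) == 6.00

# First implementation.
def order_total(items):
    total = 0.0
    for name, qty, price in items:
        total += qty * price
    return total

test_total()  # passes

# Refactor: only the code changes. Because test_total is untouched,
# its passing again is evidence that behavior was preserved.
def order_total(items):
    return sum(qty * price for _, qty, price in items)

test_total()  # still passes against the unchanged test
```

Rewriting the assertion in the same move would be the refucktor: both sides changed, so nothing is left to measure against.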
If you can delete the component and rewrite it in a week, the three-month “refactor” you were planning becomes a different problem entirely. You don’t need to refactor anymore. You rewrite — with the knowledge you didn’t have when you wrote it the first time.
Technical Debt and the Collective Hypocrisy
Young also has an unpopular stance on technical debt: it’s not inherently bad.
Say you have a mortgage. Is that bad debt? It let you buy a house you couldn’t pay for upfront. Technical debt works the same way: it gives you time-to-market in exchange for future work. The problem isn’t having it. The problem is not knowing how much you have, and not being able to pay it down because everything is so tangled that nobody wants to touch anything.
If a component can be rewritten in a week, the technical debt inside it stops being terrifying. Did you code fast and leave it messy? Fine. When it accumulates enough grime, delete it and start over with everything you’ve learned. No “technical debt reduction” sprint. No meeting with the CTO to explain why six months without new features is somehow a good idea.
You just delete. And rewrite better.
Knowledge Arrives at the End
There’s a question Young asks that should be tattooed on the forehead of everyone who has ever sat through a three-day architecture planning meeting at the start of a project:
When do you know the most about your system — at the beginning, when you’re planning it, or at the end, after you’ve built it?
At the end. Always at the end. After you’ve faced the real problems, after you’ve seen how the system behaves under load, after you’ve seen which parts are truly critical and which ones aren’t, after you’ve absorbed those dozens of requirement changes you couldn’t have predicted. That’s when you actually know what you’re doing.
You design the architecture when you know the least. You make the most important decisions when you have the least information. And then you spend years paying the cost of those decisions — not because they were stupid, but because they were inevitably premature.
The only reasonable response is to design knowing you’ll be wrong. Not to avoid the mistake — but to make the mistake cheap. So that when the moment comes that you truly understand the problem, you can throw away what you built before and do it right, without that being a six-month project.
Events Force You to Think
Event-driven architecture has a reputation as the solution to coupling between services. That’s true, but it’s the secondary benefit. The first one is less obvious and more important: events force you to have clear responsibilities.
When you design a direct call between services, you can be vague. “The orders service tells the inventory service to update the stock.” Nobody asks questions. The coupling hides inside the convenience.
When you design an event, you can’t be vague. You have to decide what happened from the perspective of the domain emitting it. And that decision forces you to understand exactly what that domain is responsible for, and where its responsibility ends.
order.confirmed doesn’t tell anyone what to do. It describes something that happened. The inventory service decides whether it cares. The notifications service decides whether it cares. The billing service decides whether it cares. The orders domain doesn’t know they exist, and it doesn’t need to.
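A minimal in-memory sketch of that shape (the bus and handler names are illustrative, not from the talk): the publisher states a fact, and each consumer decides on its own whether it cares.

```python
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_name, handler):
    subscribers[event_name].append(handler)

def publish(event_name, payload):
    # The publisher states a fact; it has no idea who is listening.
    for handler in subscribers[event_name]:
        handler(payload)

# Each consumer decides for itself whether it cares:
reserved, notified = [], []
subscribe("order.confirmed", lambda e: reserved.append(e["order_id"]))
subscribe("order.confirmed", lambda e: notified.append(e["order_id"]))

# The orders domain emits the fact and nothing else:
publish("order.confirmed", {"order_id": 42})
# reserved == [42] and notified == [42], without orders knowing either exists
```

Delete the notifications handler and the orders domain never notices, because it never knew it was there.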
The practical result: each service has exactly one reason to change. If inventory logic changes, only inventory changes. If notification logic changes, only notifications change. The orders domain never knows about any of that because it never knew those things existed.
That’s single responsibility for real. Not as a principle in a book, but as an inevitable consequence of the design. And a service with one reason to change is a service you can rewrite in a week without fear of what breaks on the other side — because on the other side there are only consumers of a fact that happened, not dependents of how you implemented it.
The Problem Isn’t Clean Architecture. It’s You. Well, No — It Actually Is Clean Architecture.
Let’s talk about the promise I’ve seen broken in production more times than any other.
Clean Architecture tells you: separate your business logic from infrastructure details. Put interfaces in between. That way, when you need to change the database someday, you only touch one layer. The rest stays intact. It sounds perfect. It sounds exactly like optimizing for change.
What actually happens: you implement a microservice with use cases, repositories, providers, datasources — each one hidden behind its corresponding interface. Four layers of abstraction to read a single row from a database. The day you decide to migrate from DynamoDB to MongoDB — for cost reasons, for features, for whatever — the promise was clear: touch only the datasource, implement the new interface, done.
And then you don’t. You end up touching everything. The models didn’t map the same way. The queries assumed DynamoDB semantics. The use cases had infrastructure details leaking upward without anyone noticing. The abstraction that was supposed to isolate changes had distributed them invisibly across the entire codebase.
The easy conclusion is “they implemented it wrong.” Probably true. But that’s exactly the trap: an architecture so complex that implementing it correctly requires a level of discipline and understanding that most teams don’t have isn’t a good architecture. It’s an architecture that always charges you the downsides and only delivers the benefits under ideal conditions that don’t exist.
An interface is a promise that more than one real implementation will exist. If today you only have one implementation, you don’t have an interface — you have bureaucracy with syntax. And that bureaucracy is what makes changing something simple require touching ten files across four different layers.
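Here is what that bureaucracy looks like in miniature. A hypothetical sketch with invented class names: an interface with exactly one implementation, layered for a flexibility that doesn't exist yet, next to the direct version that says the same thing.

```python
from abc import ABC, abstractmethod

class UserDataSource(ABC):
    @abstractmethod
    def find(self, user_id): ...

class DynamoUserDataSource(UserDataSource):  # the only implementation that exists
    def find(self, user_id):
        return {"id": user_id}  # stands in for the real DynamoDB read

class UserRepository:            # forwards to the datasource
    def __init__(self, ds: UserDataSource):
        self.ds = ds
    def find(self, user_id):
        return self.ds.find(user_id)

class GetUserUseCase:            # forwards to the repository
    def __init__(self, repo: UserRepository):
        self.repo = repo
    def execute(self, user_id):
        return self.repo.find(user_id)

# Three indirections to read one row. Until a second real
# implementation of UserDataSource exists, this does the same thing:
def get_user(user_id):
    return {"id": user_id}
```

The layered version isn't wrong, it's premature: it pays the indirection cost today for an implementation swap that may never come, and that, as above, often fails anyway when it does.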
The question to ask before introducing any abstraction is brutal in its simplicity: do I already have two real implementations of this, or am I just anticipating a flexibility I might never need? If the answer is no, don’t abstract anything — when the time comes to change, delete and rewrite. That’s the whole point of optimizing for deletion.
Simple code you can rewrite in three days beats elegant code nobody fully understands and takes three weeks to modify. Every time.
Unix Knew. We Forgot.
Young closes with something I think is the most important point in the entire talk: this isn’t new.
Microservices, SOA, actors, objects — all the same concept recycled with different names every ten years. The Unix philosophy of the 70s already said it: small programs, doing one thing, composed together. Are you afraid to delete and rewrite grep from scratch? No. Because grep is small, does one thing, and you can understand it in an afternoon.
The problem isn’t that the industry doesn’t know this. The problem is that it knows, adopts the form — microservices, SOA, actors — and completely loses the point. The goal was never to have small services. The goal was to be able to delete them without fear. So that if a service can’t absorb a new requirement, you don’t spend weeks fighting what you have, but can rebuild it in a few days with a design that actually supports everything you now understand.
Microservices without deletability isn't an architecture. It's a monolith with more servers.
Fear as a Metric
Young throws out a question almost in passing, without drama, that’s probably the most important one in the entire talk:
“Can you imagine working as a developer and not being afraid?”
Fear of your own code is so common it stopped looking like a problem. It’s just part of the job. “Be careful with that part.” “Don’t touch that, it’ll break everything.” “Better not change anything near there.” Phrases so normalized they’ve become invisible.
But they’re the most honest symptom of a system you can no longer destroy. Not the legacy code, not the technical debt, not the tests that fail in CI but “work fine in prod.” The fear. That’s the real indicator.
A system optimized for deletion doesn’t give you technical superpowers. It doesn’t make you smarter or more productive in any abstract sense. It simply takes away something you shouldn’t have had in the first place: the anxiety of touching your own work.
When you can delete any part and rewrite it in a week, the fear disappears. Not because the code is perfect, but because its imperfection stopped being permanent.
Optimize for deletion. Not because you’re going to delete everything tomorrow. But because the system you can destroy without fear is the only system you can change without fear. And a system you can change without fear is the only one that ages well.
Everything else is archaeology with Kubernetes on top.