Conference talks might not be the root of all evils in our industry, but they're certainly the root of many. Excited engineering leaders go to a conference, see a compelling talk, and return to the workplace ready to implement. The trouble: they're working from deck highlights, not from the long, often circuitous journey that produced those highlights.
The best example of this is microservices. Starting 15 years ago, microservices — on a rising tide of conference talks and best practices divorced from company contexts — came to be seen as universally good, and monoliths universally bad. To do anything but microservices became a worst practice.
The deeper problem was never microservices or monoliths. It was falling for hype.
With AI, we're in danger of repeating the same pattern. And the stakes are higher: going forward, the greatest value organizations will offer will come from their systems, so decisions that obscure or bottleneck value are harder to unwind than ever.
When microservices emerged, they felt like a release from everything that held us back in monolithic architectures. Was it surprising that so many companies jumped on the trend?
Fast forward 15 years, and the conference circuit has a different tenor. The talks hyping microservices have disappeared, replaced by those explaining the complexities, costs, and tradeoffs — and a few rediscovering the benefits of monoliths that many forgot.
Monoliths presented real problems — microservices wouldn't have become so popular otherwise. But this pendulum dynamic means we're often willing to overlook one set of tradeoffs in favor of another. Monoliths make certain things simpler and clearer. Various industries, use cases, and company maturity levels are genuinely better suited to them.
As Chris Richardson, author of Microservices Patterns, puts it: monoliths and microservices are both simply patterns.
Once you can zoom out and see both as patterns, some uncomfortable commonalities emerge. When monoliths dominated, everything was tightly coupled, incremental delivery was hard, and you had to carry enormous context to make decisions. Many organizations dealing with microservices are experiencing the same symptoms today: tight coupling, painful incremental delivery, and decisions that demand enormous context.
Rationalization — not adoption velocity — is the critical bottleneck.
When leaders complain about microservices, they tend to focus on two things: the sheer number of them and how difficult they are to coordinate.
The root problem is never quantity. It's the thinking behind the microservices: What's the use case? Are they appropriate for this context? How were they designed and deployed? How is the team structured?
This thinking is rarely done in advance, because the pendulum is swinging and the pain of monoliths is compelling. Teams implement microservices without rationalizing their architectures, accumulating a debt they'll pay back later.
Conway's Law compounds this debt. Systems mirror their communication structures. Team boundaries are often arbitrary, which means the services those teams own are similarly incoherent. Teams break services apart without considering value streams, creating new bottlenecks. Services multiply without any clear picture of which are necessary. Eventually, value streams are so fragmented that incremental work requires waiting on other teams, and coordination costs balloon.
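As a toy illustration of how this coordination debt can be made visible, one could count how many service-to-service calls cross team boundaries. Everything below (service names, team ownership, call edges) is invented for the sketch; in practice the call edges would come from tracing data and the ownership map from a service catalog:

```python
# Hypothetical sketch: estimate coordination cost as the share of service
# calls that cross team boundaries. All names here are invented.

# Which team owns each service (assumed to come from a service catalog).
owners = {
    "checkout": "payments",
    "invoicing": "payments",
    "catalog": "storefront",
    "search": "storefront",
    "user-profile": "identity",
}

# Directed call edges (caller, callee), assumed to come from tracing data.
calls = [
    ("checkout", "invoicing"),
    ("checkout", "user-profile"),
    ("search", "catalog"),
    ("catalog", "user-profile"),
    ("invoicing", "user-profile"),
]

# A call that crosses a team boundary implies cross-team coordination.
cross_team = [(a, b) for a, b in calls if owners[a] != owners[b]]
ratio = len(cross_team) / len(calls)
print(f"{len(cross_team)}/{len(calls)} calls cross team boundaries ({ratio:.0%})")
```

When a ratio like this climbs over time, incremental work increasingly means waiting on other teams, which is exactly the symptom described above.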
This slope is hard to reverse — in part because organizations rarely measure their microservices investments. Can you demonstrate success from your first few microservices to justify more? Can you demonstrate failure across too many and scale back? Most teams aren't prepared to do the unglamorous work of rationalization and consolidation.
Trace back far enough and you might find yourself implementing microservices on top of an architecture inherited from an old monolith. No wonder it disappoints. And no wonder it limits further transformation now that the trends are changing again.
At AWS re:Invent 2025, AWS introduced Kiro, an AI IDE using spec-driven development to handle complex project requirements. In the same month it premiered, AWS suffered a 13-hour outage reportedly caused by Kiro autonomously deleting and recreating an environment — this at a company that had mandated Kiro adoption with an 80% weekly-use goal by year's end.
New technologies obscure old patterns, and the pressure to adopt AI only raises the stakes when these implementations go off the rails.
The pattern is always tradeoffs all the way down. Microservices and monoliths each have advantages and disadvantages. The better you understand your tradeoffs, and how they fit into longer historical trends, the better you can manage them.
Microservices illustrate the pattern many organizations are already repeating with AI.
Excitement about new technologies is healthy for technologists. The trouble, especially when there's board pressure to get ahead of the next big thing, is confusing experimentation with improvisation.
For work to actually be experimental, you need to know three things: Do you have a hypothesis about what will improve? Are you measuring the outcome? Are you iterating on what you learn?
True experimentation requires hypotheses, measurement, and iteration — specifically in the context of your industry, company, and problems. When I consult with companies struggling with AI deployments, the answers to these questions are frequently no, no, and no.
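A minimal sketch of what hypotheses, measurement, and iteration can mean in practice is an explicit experiment record that forces each question to have an answer. The record and all its field values below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: the minimum record a team might keep to make an AI
# rollout an experiment rather than an improvisation. Names are invented.

@dataclass
class Experiment:
    hypothesis: str                 # what you expect to change, and why
    metric: str                     # how you'll measure it
    baseline: float                 # value before the change
    result: Optional[float] = None  # value after; None until measured

    def verdict(self) -> str:
        """Compare the measured result against the baseline."""
        if self.result is None:
            return "not yet measured"
        return "supported" if self.result > self.baseline else "not supported"

exp = Experiment(
    hypothesis="An AI code-review agent increases PR throughput",
    metric="PRs merged per engineer per week",
    baseline=4.2,
)
exp.result = 5.1  # measured after the rollout
print(exp.verdict())  # supported
```

The point is not the code itself but the discipline: if any field cannot be filled in, the work is improvisation, not experimentation.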
The incentives point the wrong way. AI is exciting to deploy, and easy to get everyone behind. Rationalization is high-leverage, but harder to sell to a board. Politics inevitably enters, and engineering leaders tend to want to focus on the tech side rather than the people side. There's also an emotionally vulnerable dimension: leaders would need to acknowledge they have to unwind big prior investments. The instinct is to look forward, where things seem cleaner, rather than backward, where things are messy.
But the best outcomes only come from the journey. Christopher Sommer, former US national team gymnastics coach, put it this way in an email shared by Tim Ferriss:
"If the commitment is to a long-term goal and not to a series of smaller intermediate goals, then only one decision needs to be made and adhered to. Clear, simple, straightforward. Much easier to maintain than having to make small decisions after small decisions to stay the course when dealing with each step along the way. This provides far too many opportunities to inadvertently drift from your chosen goal. The single decision is one of the most powerful tools in the toolbox."
The companies presenting their successes at conferences — whether in microservices or AI — didn't get there by watching talks. The talks are the output, not the input. The input is commitment to a long-term goal, pursued through smaller intermediate goals over time.
None of this argues for treating AI like an unannounced guest and deep-cleaning the house before welcoming it inside. AI can accelerate organizational change and architectural rationalization, both of which contribute to ambitious goals.
Take building an agentic SDLC as an example. One clear goal behind that effort is increased productivity. You'd know something went wrong if productivity fell or rose only incrementally after building it. The temptation is to blame the adoption itself, but the issues almost always lie upstream.
If your teams and services can't operate autonomously now, agents won't either.
Without rationalized microservices, you likely have too many duplicative, similar-but-not-standard services, and just as many service owners unclear about their own service boundaries.
Introducing agents this way is wasteful and risky. Lack of service standardization multiplies risk as different skills and agent variants proliferate across teams. It looks like a "move fast" moment in the early stages. What it's actually doing is multiplying complexity — and eventually that multiplication turns exponential.
AI can help here, though. Before AI, rationalization required intensive effort from architectural teams and lengthy interviews to map microservices to organizational structures. Now, we can point AI at codebases for faster, richer analyses — identifying duplicative services and rationalization opportunities at a pace that wasn't previously possible. AI removes the excuse many organizations used for avoiding architectural work.
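As a rough sketch of the kind of first pass such an analysis might automate, consider flagging services whose descriptions look near-duplicative. The inventory and the similarity threshold below are invented, and a real AI-assisted pass would work from code and API surfaces rather than one-line descriptions:

```python
import difflib

# Hypothetical service inventory; names and descriptions are invented.
services = {
    "notify-email": "Sends transactional email notifications to customers",
    "email-sender": "Sends transactional emails to customers",
    "search-api": "Full-text search over the product catalog",
    "billing": "Generates and issues customer invoices",
}

THRESHOLD = 0.8  # similarity ratio above which a pair looks duplicative

# Compare every pair of descriptions and collect likely duplicates.
duplicates = []
names = list(services)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        ratio = difflib.SequenceMatcher(None, services[a], services[b]).ratio()
        if ratio >= THRESHOLD:
            duplicates.append((a, b, round(ratio, 2)))

for a, b, ratio in duplicates:
    print(f"possible duplicate: {a} / {b} (similarity {ratio})")
```

Here the two email services would be flagged for a human to review, which is the shape of the workflow: AI surfaces the candidates, people make the consolidation calls.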
Rationalization still requires human decision-making. But AI makes the initial, often most difficult parts significantly less painful.
The hard part comes after. Engineering leaders will need to acknowledge past overinvestment in microservices and past underinvestment in governance. They'll need to negotiate de-duplication and make some large architectural and organizational bets to build an agentic SDLC.
AI won't help with that. But AI at its fullest potential is on the other side.