
What the Microservices Era Should Have Taught Us About AI

How to avoid the key pitfalls of microservices in your AI adoption — and the lessons we can learn on the way to maximizing gains from emerging tech.


Conference talks might not be the root of every evil in our industry, but they're certainly the root of many. Excited engineering leaders go to a conference, see a compelling talk, and return to the workplace ready to implement. The trouble: they're working from deck highlights, not from the long, often circuitous journey that produced those highlights.

The best example of this is microservices. Starting 15 years ago, microservices — on a rising tide of conference talks and best practices divorced from company contexts — came to be seen as universally good, and monoliths universally bad. To do anything but microservices became a worst practice.

The deeper problem was never microservices or monoliths. It was falling for hype.

With AI, we're in danger of repeating the same pattern. And the stakes are higher: going forward, the greatest value organizations will offer will come from their systems, so decisions that obscure or bottleneck value are harder to unwind than ever.

The pendulum is swinging back (again)

When microservices emerged, they felt like a release from everything that held us back in monolithic architectures. Was it surprising that so many companies jumped on the trend?

Fast forward 15 years, and the conference circuit has a different tenor. The talks hyping microservices have disappeared, replaced by those explaining the complexities, costs, and tradeoffs — and a few rediscovering the benefits of monoliths that many forgot.

This is the tech trend pendulum in action: encounter issues with one approach, find a new one, treat it as a solution rather than as an approach with its own tradeoffs. Blinded by optimism, we swing from trend to trend looking for a capital-S Solution.

Monoliths presented real problems — microservices wouldn't have become so popular otherwise. But this pendulum dynamic means we're often willing to overlook one set of tradeoffs in favor of another. Monoliths make certain things simpler and clearer. Various industries, use cases, and company maturity levels are genuinely better suited to them.

As Chris Richardson, author of Microservices Patterns, puts it: monoliths and microservices are both simply patterns.


Once you can zoom out and see both as patterns, some uncomfortable commonalities emerge. When monoliths dominated, everything was tightly coupled, incremental delivery was hard, and you had to carry enormous context to make decisions. Many organizations dealing with microservices are experiencing the same symptoms:

  • Teams shipping work that isn't independently valuable
  • Coordination costs rising as systems grow tightly coupled again
  • Value obscured to the point of limiting what AI can actually accomplish

Rationalization — not adoption velocity — is the critical bottleneck.

Irrational architectures and the revenge of Conway’s Law

When leaders complain about microservices, they tend to focus on two things: the sheer number of them and how difficult they are to coordinate.

The root problem is never quantity. It's the thinking behind the microservices: What's the use case? Are they appropriate for this context? How were they designed and deployed? How is the team structured?

This thinking is rarely done in advance, because the pendulum is swinging and the pain of monoliths is compelling. Teams implement microservices without rationalizing their architectures, accumulating a debt they'll pay back later.

Conway's Law compounds this debt. Systems mirror their communication structures. Team boundaries are often arbitrary, which means the services those teams own are similarly incoherent. Teams break services apart without considering value streams, creating new bottlenecks. Services multiply without any clear picture of which are necessary. Eventually, value streams are so fragmented that incremental work requires waiting on other teams, and coordination costs balloon.


This slope is hard to reverse — in part because organizations rarely measure their microservices investments. Can you demonstrate success from your first few microservices to justify more? Can you demonstrate failure across too many and scale back? Most teams aren't prepared to do the unglamorous work of rationalization and consolidation.

Trace back far enough and you might find yourself implementing microservices on top of an architecture inherited from an old monolith. No wonder it disappoints. And no wonder it limits further transformation now that the trends are changing again.

Remember your history, or you'll repeat it

At re:Invent 2025, AWS introduced Kiro, an AI IDE that uses spec-driven development to handle complex project requirements. In the same month it premiered, AWS suffered a 13-hour outage reportedly caused by Kiro autonomously deleting and recreating an environment — this at a company that had mandated Kiro adoption with an 80% weekly-use goal by year's end.

New technologies obscure old patterns, and the pressure to adopt AI raises the stakes when these implementations go off the rails.

The pattern is always trade-offs all the way down. Microservices and monoliths each have advantages and disadvantages. The better you understand your tradeoffs — and how they fit into longer historical trends — the better you can manage them.

The journey is the point, and it's the only thing AI can't accelerate

Microservices illustrate the pattern many organizations are already repeating with AI.

As technologists, excitement about new technologies is healthy. The trouble, especially when there's board pressure to get ahead of the next big thing, is confusing experimentation with improvisation.

For work to actually be experimental, you need to know:

  • What you're testing for
  • What success looks like
  • How to measure success and failure
  • How to build a feedback loop for continuous measurement

True experimentation requires hypotheses, measurement, and iteration — specifically in the context of your industry, company, and problems. When I consult with companies struggling with AI deployments, the answer to each of these questions is frequently no.
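Those four questions can be made concrete by forcing every AI pilot to write them down before it starts. Below is a minimal sketch; the `Experiment` record, the metric, and the threshold values are all hypothetical placeholders, not a prescribed tool:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """Forces the four questions to be answered before the pilot begins."""
    hypothesis: str           # what you're testing for
    success_criterion: float  # what success looks like (target metric value)
    baseline: float           # pre-adoption measurement, for detecting failure
    observations: list = field(default_factory=list)  # the feedback loop

    def record(self, value: float) -> None:
        """Each measurement cycle appends a fresh reading of the metric."""
        self.observations.append(value)

    def verdict(self) -> str:
        if not self.observations:
            return "inconclusive: no measurements yet"
        mean = sum(self.observations) / len(self.observations)
        if mean >= self.success_criterion:
            return "success"
        if mean <= self.baseline:
            return "failure: no improvement over baseline"
        return "inconclusive: improving, keep measuring"

# Hypothetical pilot: does AI code review lift the suggestion acceptance rate?
exp = Experiment(
    hypothesis="AI code review raises suggestion acceptance rate",
    success_criterion=0.5,
    baseline=0.2,
)
exp.record(0.30)
exp.record(0.35)
print(exp.verdict())  # → inconclusive: improving, keep measuring
```

The point is not the code but the contract: an experiment without a recorded hypothesis, target, and baseline is improvisation by another name.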

The incentives point the wrong way. AI is exciting to deploy, and easy to get everyone behind. Rationalization is high-leverage, but harder to sell to a board. Politics inevitably enters, and engineering leaders tend to want to focus on the tech side rather than the people side. There's also an emotionally vulnerable dimension: leaders would need to acknowledge they have to unwind big prior investments. The instinct is to look forward, where things seem cleaner, rather than backward, where things are messy.

But the best outcomes only come from the journey. Christopher Sommer, former coach of the U.S. Junior National Gymnastics Team, put it this way in an email shared by Tim Ferriss:

"If the commitment is to a long-term goal and not to a series of smaller intermediate goals, then only one decision needs to be made and adhered to. Clear, simple, straightforward. Much easier to maintain than having to make small decisions after small decisions to stay the course when dealing with each step along the way. This provides far too many opportunities to inadvertently drift from your chosen goal. The single decision is one of the most powerful tools in the toolbox."

The companies presenting their successes at conferences — whether in microservices or AI — didn't get there by watching talks. The talks are the output, not the input. The input is commitment to a long-term goal, pursued through smaller intermediate goals over time.

But the long-term goal can’t be AI, just as it couldn’t be microservices. Organizations need to decide to be learning organizations, to become bodies that evolve, introspect, adapt, experiment, and grow. It’s the one decision that can give you clarity in uncertain times. Build an identity instead of chasing down a destination, and you’ll always be on steady ground.

AI supports rationalization, and rationalization supports AI

None of this argues for treating AI like a guest you must deep-clean the house for before welcoming it inside. AI can accelerate organizational change and architectural rationalization, both of which contribute to ambitious goals.

Take building an agentic SDLC as an example. One clear goal behind that effort is increased productivity. You'd know something went wrong if productivity fell or rose only incrementally after building it. The temptation is to blame the adoption itself, but the issues almost always lie upstream.

If your teams and services can't operate autonomously now, agents won't either.

Without rationalized microservices, you likely have too many duplicative, similar-but-not-standard services, and just as many service owners unclear about their own service boundaries.

Task any one of these service owners with writing an agent for their service, and they'll likely struggle. The agents that emerge from this fragmentation won't be able to share skills, and they may not work across service boundaries. The complexity baked into the architecture resurfaces as ineffective agents.

Introducing agents this way is wasteful and risky. Lack of service standardization multiplies risk as different skills and agent variants proliferate across teams. It looks like a "move fast" moment in the early stages. What it's actually doing is multiplying complexity — and eventually that multiplication turns exponential.

AI can help here, though. Before AI, rationalization required intensive effort from architectural teams and lengthy interviews to map microservices to organizational structures. Now, we can point AI at codebases for faster, richer analyses — identifying duplicative services and rationalization opportunities at a pace that wasn't previously possible. AI removes the excuse many organizations used for avoiding architectural work.
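As a toy illustration of what that analysis looks like, the sketch below flags near-duplicate services by comparing their stated responsibilities with Python's standard-library `difflib`. A real pass would point an LLM at codebases and ownership data; the service names and descriptions here are invented:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical service inventory: name -> short description of responsibility.
services = {
    "billing-api": "creates invoices and charges customer payment methods",
    "invoice-service": "creates invoices and charges customers",
    "user-profile": "stores and serves user profile data",
    "notifications": "sends email and push notifications to users",
}

def near_duplicates(inventory, threshold=0.7):
    """Flag service pairs whose responsibility descriptions largely overlap."""
    pairs = []
    for (a, desc_a), (b, desc_b) in combinations(inventory.items(), 2):
        score = SequenceMatcher(None, desc_a, desc_b).ratio()
        if score >= threshold:
            pairs.append((a, b, round(score, 2)))
    return pairs

for a, b, score in near_duplicates(services):
    # Flags billing-api <-> invoice-service as a consolidation candidate.
    print(f"candidate for consolidation: {a} <-> {b} (similarity {score})")
```

Even this crude pass surfaces the kind of consolidation candidate (two invoicing services) that used to require weeks of interviews to find.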

Rationalization still requires human decision-making. But AI makes the initial, often most difficult parts significantly less painful.

The hard part comes after. Engineering leaders will need to acknowledge past overinvestment in microservices and past underinvestment in governance. They'll need to negotiate de-duplication and make some large architectural and organizational bets to build an agentic SDLC.

AI won't help with that. But AI at its fullest potential is on the other side.


    Amy Carrillo Cotten is Director of Client Transformation at Uplevel. With 12+ years of technology industry experience as a change consultant and program manager, she works directly with engineering leaders and their teams to increase growth, reduce risk, and maximize innovation.
