Commits doubled. PR count is way up. Your developers are clearly using their AI licenses. So why are features still taking the same amount of time to ship?
This is the question we're hearing from engineering leaders who rolled out Copilot, Cursor, or other AI coding assistants 6-12 months ago. Initial gains were real — developers wrote code faster, felt more productive, started tackling tickets they'd been putting off. But somewhere around month 3 or 4, things started to stall.
The problem reveals itself in the data. AI amplifies whatever system you already have. For most organizations, that system wasn't built to handle the volume and pace AI tools enable.
“You go to SeaWorld, you throw a bunch of MacBooks into the dolphin exhibit and you're like, why aren't you doing more? That's really not far from what's happening, where you just throw tools at somebody and pray. Prayer is not a strategy.”
David de Regt, Technical Staff, OpenAI; Former Engineering VP, Udemy
Individual Productivity Gains Don't Automatically Translate to Business Outcomes
Uplevel's data across customer organizations shows a consistent pattern. Initial velocity metrics look great. High AI users show significant increases in merge request velocity and issue velocity. This is the data that gets shown to executives as proof of ROI.
But when you look at what actually matters — deployment frequency and value delivered to customers — the correlation disappears.

The gap between developer activity and business outcomes reveals systemic constraints. Code is getting written faster, but it's reaching production at the same pace as before.
The cause becomes clear when you examine what happens to that code after it's written. PR complexity rockets up — in some cases doubling for high AI users. Review times increase proportionally. What developers gained in coding speed, they're losing in review cycles and integration complexity.

As one senior engineer told us: "Leadership expects more velocity. We have increased productivity, but not by 50–60%. It isn't possible because we have other delays in the release process."
The bottleneck moved from writing code to everything else in your delivery system.
CI/CD Tooling != Continuous Delivery
Executives often respond to this data with confusion: "We already invested in CI/CD 5 years ago. We have Jenkins/CircleCI/GitHub Actions. Why do we need to invest again?"
This reaction reveals a fundamental misunderstanding of what continuous delivery actually requires. Over the last decade, CI/CD got reduced, in most teams' thinking, to pipelines alone. But continuous delivery is a discipline that encompasses how teams work, not just what tools they use.
According to Martin Fowler’s definition, true continuous delivery means:
- Software deployable to production at any time (it can be deployed at any moment, not that it necessarily is)
- Teams empowered to stop feature work to maintain deployability
- Fast automated feedback available to anyone at any time
- Push-button deployment
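To make the definition less abstract, here's a toy sketch of what "deployable at any time" looks like when it's an automated answer rather than a feeling. The check names and statuses below are assumptions; in a real setup they would come from your CI system.

```python
# A minimal, illustrative "is trunk deployable right now?" check.
# The check names and statuses are assumptions; in practice they would be
# pulled from your CI system rather than hard-coded here.
trunk_checks = {
    "unit_tests": "passed",
    "integration_tests": "passed",
    "security_scan": "passed",
    "deploy_smoke_test": "failed",
}

blocking = [name for name, status in trunk_checks.items() if status != "passed"]

if blocking:
    # In Fowler's terms, this is the signal to stop feature work and restore deployability.
    print("Trunk is NOT deployable. Blocked by:", ", ".join(blocking))
else:
    print("Trunk is deployable. A push-button release is available.")
```

The value isn't the script; it's that deployability becomes a yes-or-no fact anyone can see at any time, which is what the definition actually demands.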
“It's not just using different tools or committing code more frequently. It's an entire mindset change. I've gone through it myself. I've seen other people's eyeballs finally light up when they got the new mindset. You're changing their entire workflow, how they do their work from beginning to end.”
Bryan Finster, Co-Author, minimumcd.org
When developers can generate more code faster, the integration bottleneck becomes critical. Large batched PRs, reviews that take days, test suites that take hours, deployments risky enough to be batched weekly — AI makes every one of these constraints more painful, not less.
De Regt frames the core issue: "Nothing's going to make you faster until you remove those systemic roadblocks. Some things will make it more obvious where the roadblocks are, which is sometimes helpful, but you have to tackle the roadblocks."
Organizations thought they had continuous delivery because they had the tooling. AI exposes that the practices, empowerment, and workflow changes never happened.
Uplevel’s customer data shows this gap. According to Director of Client Transformation Amy Carrillo Cotten, "If you have a really strong CI/CD foundation, you are actually well positioned to have AI success and reap exponential gains. However, when organizations have incomplete or superficial or fragile CI/CD practices and haven't invested deeply in the capability beyond the technology platforms, those gains just aren’t there."

Rebuilding CI/CD for AI-Accelerated Development
The required changes are systematic, not superficial.
Use data to find and remove your actual roadblocks
Most organizations guess at what's slowing them down. AI tools make those guesses expensive.
Finster emphasizes the importance of evidence: "Use tools with data to tell you whether they're doing continuous delivery or not. Use real information. You don't just say, oh, we're doing CD — because I guarantee you I can spend 30 seconds with your team and find out if they are or not."
The data shows you where work actually gets stuck. Review bottlenecks? Deployment pipeline failures? Integration conflicts? You need to know which constraint to remove first.
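As a concrete (and deliberately simplified) example, here is how a team might measure where pull requests actually wait, using exported PR and deploy timestamps. The field names and sample records are assumptions; substitute whatever your Git host and deployment logs provide.

```python
from datetime import datetime
from statistics import median

# Illustrative PR records. The field names and timestamps are assumptions;
# export the real events from your Git host and deploy logs.
prs = [
    {"opened": "2025-05-01T09:00", "first_review": "2025-05-02T15:00",
     "approved": "2025-05-05T11:00", "merged": "2025-05-05T12:00",
     "deployed": "2025-05-09T10:00"},
    {"opened": "2025-05-02T10:00", "first_review": "2025-05-02T13:00",
     "approved": "2025-05-03T09:00", "merged": "2025-05-03T09:30",
     "deployed": "2025-05-09T10:00"},
]

FMT = "%Y-%m-%dT%H:%M"

def hours_between(start: str, end: str) -> float:
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600

# Each stage is a candidate constraint: waiting for review, waiting for
# approval, waiting to merge, waiting to reach production.
stages = [("opened", "first_review"), ("first_review", "approved"),
          ("approved", "merged"), ("merged", "deployed")]

for start, end in stages:
    waits = [hours_between(pr[start], pr[end]) for pr in prs]
    print(f"{start} -> {end}: median wait {median(waits):.1f}h")
```

If most of the wait sits between opened and first review, the constraint is review capacity, not coding speed; if it sits between merge and deploy, the constraint is the pipeline itself.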
Make doing the right thing easier than doing the wrong thing
Platform engineering reduces friction for the practices that enable continuous delivery.
Finster describes the goal: "Leverage platforms to help make it easy for developers to do the right thing, hard to let developers get the wrong thing done, and enable continuous delivery as the default."
When AI helps developers write code faster, the platforms need to make small PRs, frequent commits, and rapid integration the path of least resistance. If your tooling makes it easier to create massive feature branches than small changes, developers will create massive feature branches — even with AI helping them write code faster.
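One way platform teams make small changes the path of least resistance is a PR-size guard that runs in CI. The sketch below is a hypothetical example, not a prescription: the 400-line limit and the origin/main base branch are assumptions you would tune per repository.

```python
import subprocess
import sys

# Hypothetical PR-size guard a platform team might wire into CI.
# The 400-line limit and the "origin/main" base branch are assumptions to tune.
MAX_CHANGED_LINES = 400

numstat = subprocess.run(
    ["git", "diff", "--numstat", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

changed = 0
for line in numstat.splitlines():
    added, deleted, _path = line.split("\t", 2)
    if added != "-":  # binary files report "-" for line counts
        changed += int(added) + int(deleted)

if changed > MAX_CHANGED_LINES:
    print(f"This PR touches {changed} lines (limit {MAX_CHANGED_LINES}). "
          "Consider splitting it into smaller, independently releasable changes.")
    sys.exit(1)

print(f"PR size OK: {changed} changed lines.")
```

The specific threshold matters less than the nudge: the platform pushes back on large batches automatically, instead of relying on reviewers to object after the work is already done.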
Adjust incentives, not just processes
Documentation and training don't change behavior if the incentive structure rewards something else.
Finster explains the mechanism: "If you know how to use the data and use it to adjust the incentives so that people no longer feel successful by not doing CD, you will get CD."
Teams measured on feature velocity but not on deployability will optimize for features. Teams rewarded for individual productivity metrics rather than system throughput will create work that looks productive but doesn't ship.
AI amplifies this problem. Developers can generate impressive-looking activity metrics while creating integration nightmares.
Bring in expertise from people who've actually done this
You can't learn continuous delivery from documentation while simultaneously trying to capture AI productivity gains.
"You're not going to hand people tools and just have them magically learn it," de Regt says. "You have to see some examples of what good looks like to really understand how to build out these systems, and that requires somebody who's been there and can go help you through that path."
This requires leadership investment — not just budget, but genuine support for the transformation. As de Regt explains: "The underlying question to AI is, leadership is spending a bunch of money on something. How do they get the ROI on it? So much of that comes down to how do you build up the systems so that you can actually support these faster paced tools."
(We've actually done this.)
Uplevel gives enterprise engineering leaders and their organizations the visibility and capability to build flexible, high-functioning teams so you can maximize your investment in AI — and whatever comes next.
You Can't Fix What You Can't See
Before you can strengthen CI/CD practices, you need to understand where the actual constraints are in your system.
Key questions to answer:
- Where are PRs getting stuck? (Complexity? Review time? Test failures?)
- What's the actual deployment frequency by team?
- How long does it take from PR open to production for different types of changes?
- Are teams actually integrating continuously, or batching work despite having the tools?
The data shows you whether the constraint is technical (test suite speed, deployment pipeline reliability) or organizational (batch sizes, review processes, lack of empowerment).
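Even a rough script over your deploy events can start answering the deployment-frequency question. The event fields and the four-week window below are assumptions; pull the real records from wherever your pipeline logs deployments.

```python
from collections import defaultdict

# Illustrative deploy events. The "team" and timestamp fields are assumptions;
# pull the real records from your pipeline, deploy webhook, or release log.
deploys = [
    {"team": "payments", "at": "2025-05-01T10:00"},
    {"team": "payments", "at": "2025-05-08T16:30"},
    {"team": "search",   "at": "2025-05-02T09:15"},
    {"team": "search",   "at": "2025-05-02T14:40"},
]

WINDOW_WEEKS = 4  # look-back window covered by the events above (an assumption)

per_team = defaultdict(int)
for event in deploys:
    per_team[event["team"]] += 1

for team, count in sorted(per_team.items()):
    print(f"{team}: {count / WINDOW_WEEKS:.1f} deploys per week")
```

Numbers like these won't tell you why a team ships twice a month, but they do tell you which conversation to have first.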
Most organizations discover:
- Feature branches that last days or weeks, despite a professed commitment to trunk-based development
- Test suites that were "fast enough" before AI but now add hours of feedback delay
- Review processes designed for careful scrutiny of hand-crafted code that can't keep up with the volume AI enables
- Cultural norms that prioritize new features over system health
The Path Forward
AI coding tools are here to stay. They genuinely increase individual developer productivity. But individual productivity gains only translate to business outcomes when your delivery system can handle the increased throughput.
The organizations that will win with AI aren't the ones that rushed to give everyone licenses. They're the ones investing in the unglamorous work of strengthening their engineering foundations — continuous integration, automated testing, trunk-based development, small batch sizes, rapid feedback loops.
“You're not bringing in AI like it's Excel and we just need to teach people macros. If they're not already fundamentally doing continuous delivery as a workflow, you're changing their entire workflow.”
Bryan Finster, Co-Author, minimumcd.org
This requires investment — in tooling, practices, enablement, and organizational change. Unlike throwing more AI licenses at the problem, it actually removes the systemic constraints preventing you from capturing value.
Dive deeper
Watch the entire conversation with Bryan Finster, David de Regt, and Amy Carrillo Cotten.
