
Strategic AI Adoption Starts with Engineering Fundamentals


    For the past two decades, the tech industry has been building a future that has since started to show its cracks: The cloud was for everyone, and every company should migrate and modernize or be left behind. Or so we thought.

    In hindsight, we can see an arc rising and falling – from disruption to consensus to skepticism and even regret. Now, cloud repatriation is a hot topic, and if we look at the shape of this arc, we can see a likely future for generative AI.

    AWS — and cloud computing as we know it — started in 2006. By 2024, total cloud infrastructure service revenues across Google, AWS, Microsoft, and thousands of smaller companies reached $330 billion.

    But the future wasn’t for everyone:

    According to one study, “One-third of companies that have adopted the cloud say the shift brought about little to no improvement in their organizational effectiveness.” 


    In the cloud repatriation story, we see a foreshadowing of what might happen to generative AI. 

    The cloud movement proved that the true marker of lasting success is adopting new technologies well, not jumping on the bandwagon as fast as possible. With that lesson in mind, engineering leaders today can resist the urge to leap into AI without looking and instead create a thoughtful adoption strategy that begins with building foundational engineering skills while gradually integrating AI tools.

    AI is an accelerant, not a pilot

    Think of AI, particularly AI coding assistants, as an accelerant. It can help you and your developers build faster, but it doesn’t help you figure out what to build, who to build it for, or how to align your team’s work with the company’s goals and your users’ results. You can use AI to move faster, in other words, but AI can’t guarantee you’ll be moving in the right direction.

    For example:

    • Getting to a severe incident faster because you produced more code than could be tested or reviewed is not winning.

    • Lowering your customer satisfaction score because you produced bad code at a fast pace is not winning.

    • Increasing the amount of code produced but increasing the rate of bugs produced at the same pace is not winning.

    Adopting AI tools can be like dropping an F1 engine into a Toyota Camry. You might go faster, but you don’t suddenly have a sports car, and you aren’t going to start winning races. If anything, you can just end up losing faster. 


    On a small scale, this acceleration can let developers speed through projects that are misaligned with organizational goals, or push ahead while architecture dependencies, collaboration challenges, and information silos remain unresolved. The sheer progress can feel impressive, but plowing ahead can result in net inefficiency if developers have to rework or restart.

    On a large scale, this acceleration can allow apparent productivity to hide fundamental flaws in your engineering practices. Before AI, if you had a talented team that wasn’t delivering, you could safely assume that the challenges were systemic and start evaluating your practices, workflows, and systems. With the pace of development that AI enables, you might not be cued to see those issues as quickly (or at all). 

    For example, junior developers frequently interrupting senior developers to get context on their work creates friction, but it also cues your team to build better ways to supply the whole team with context. If those junior developers are using AI, they might need to ask fewer questions, which results in a faster pace but hides deeper issues in understanding the product and its users.

    Ultimately, if the development team lacks direction or isn’t aligned with product or company goals, AI will simply accelerate it in the wrong direction or let it drift off course entirely.

    Output isn’t enough to ensure outcome

    Organizations need to draw a careful distinction between output and outcome. 

    When your developers close their IDEs at the end of the day, the lines of code produced, the commits, the PRs – that’s all output. When your leaders look at business value – customer satisfaction, revenue, user growth – that’s outcome. 

    It’s the job of engineering leaders not only to increase output but to ensure output leads to outcome as directly, efficiently, and predictably as possible. 

    There’s a lot of research demonstrating the potential of increasing sheer output with AI. So far, however, there’s little research to show how AI helps with the quality of engineering and business outcomes. Despite that lack of research, organizations need to position AI adoption to support both output and outcome, and ultimately, output needs to be subordinate to outcome. 

    AI increases output

    Generally speaking, current research shows that AI coding assistants cause an increase in productivity.

    One study, for example, found that developers using GitHub Copilot were 55.8% faster than developers without access to Copilot. The task in question required implementing an HTTP server in JavaScript as quickly as possible.

    GitHub research, which focused on surveying developers, found that “Copilot helped them stay in the flow (73%) and preserve mental effort during repetitive tasks (87%).” As we’ve covered before, qualitative data has many limitations, but it’s still generally indicative of productivity improvements. 

    More studies are showing the same basic directionality every day, but there’s a big caveat: Developer productivity is a thorny, complex issue, and there are many ways to measure it. Similarly, actual productivity and sheer speed are not the same, and the research so far (with the exception of the GitHub study above) primarily demonstrates the ability to increase speed.

    Output is not the same thing as outcome, and so far, the research only shows that AI improves output.

    As of now, we don’t know and can’t predict how AI will affect outcome. Ultimately, if your problem is anything other than speed, AI will likely not help you.

    Fundamentals improve outcomes

    The fundamentals often have the greatest impact on engineering outcomes, as demonstrated by years of research and the qualitative evolution of the industry itself. 

    Here, we’re talking about the big stuff: Engineering practices, workflows, systems, and philosophies that drive business value and holistic measures of developer productivity and value streams that actually capture the meaning behind the work. 

    In Accelerate: The Science of Lean Software and DevOps, Nicole Forsgren, Jez Humble, and Gene Kim found that the difference between high-performing organizations adhering to DevOps practices and low-performing organizations was stark.

    High performers, in contrast to low performers, have:

    • 46x more frequent code deployments

    • 440x faster lead time from commit to deploy

    • 170x faster mean time to recover from downtime

    • 5x lower change failure rate

    Writing “unplanned work and rework are useful proxies for quality because they represent a failure to build quality into our products,” the authors also found that:

    • High performers spent 49% of their time on new work and 21% on unplanned work or rework

    • Low performers spent 38% on new work and 27% on unplanned work or rework

    The fundamentals are often easy to underrate – until circumstances change. Take an engineer out of an organization with a good grip on the fundamentals, drop them into one that hasn’t modernized, and watch them get a new appreciation for what they might have taken for granted. Similarly, take an engineer out of a low-performing organization, drop them into a better one, and watch them do the best work of their career. 

    The point: All our best research shows that DevOps increases productivity and quality outcomes. However, the DevOps implementations that yielded the results in Accelerate worked because the high-performing organizations took DevOps as a culture, process, and systemic change from the ground up and iterated from a holistic perspective.

    The companies that realized the value of their DevOps transformation were the ones who didn’t just automate pipelines. If AI is a short-term purchase with a quick payoff (at best), effective DevOps is a long-term investment with deep, lasting returns.

    Adopting AI is high-potential and high-risk

    None of this is to say organizations shouldn’t adopt AI. Organizations just need to know the risks alongside the potential of AI adoption. 

    Take DevOps, for example. Adding AI to your organization could be great, but AI can't create or fix a poor DevOps approach. 

    GitHub research shows that 27% of developers spend most of their day waiting for builds and tests from other teams outside of CI, and 26% spend most of their day deploying code to production—indicating that many organizations still struggle to maintain an effective DevOps approach. Along similar lines, ask yourself these questions:

    • Are your developers in meetings all the time?

    • Does your team not yet have the skill set to do DevOps?

    • Is the technical infrastructure to support DevOps not ready yet?

    • Is your engineering org unable to help people get work done?

    Many teams would answer yes to some or all of the above questions, and many would struggle with waiting for builds and tests. If you have these problems, AI can actually make them all worse.

    This all comes down to a simple formulation: AI is better for output, and the fundamentals are better for outcome. 

    AI adoption outcome matrix

    In an ideal world, you have the fundamentals down, and you can layer AI on top to increase an output that probably leads to great outcomes. In a still-good world, you have the fundamentals down but haven’t yet managed to integrate AI in a way that works for you and your goals. Despite the hype, that’s okay – AI products will keep maturing, and you can either work toward adopting them or wait for the right product or approach. 

    Either way, this result is better than the situation many organizations are jumping into: having only a loose grasp of the fundamentals but using AI to speed up the pace with which they build shoddy, misaligned work. 

    Build the muscle memory for change

    Think of change not as a discrete moment or event, but as a muscle that organizations have to train. 

    The stronger the muscle, the more resilient the organization will be when new technologies emerge. The more change becomes a way of doing business and a way of working, the better the muscle memory, which makes each subsequent change easier.

    To fully take advantage of AI today, ongoing incremental changes in AI tomorrow, and future revolutions beyond AI, organizations must learn to change their technologies and organizations at scale.

    In the end, AI is a paradigm change, but that doesn’t mean it will sweep away everything we know. Uplevel helps organizations do the things that never go out of style.

    We believe the foundational elements of your organization’s engineering practices are the foundation for AI adoption, too. The WAVE Framework for Change focuses on four elements: Ways of Working, Alignment, Velocity, and Environment Efficiency.

    With a holistic approach, you can ensure you’re ready to adopt AI and make the fullest use of it. The Uplevel Method, using the Uplevel insights platform, can help you drive the engineering transformation necessary to adopt AI and use it well. 

    As with the cloud, the true test is your AI adoption strategy, not how fast you adopt AI. You don’t want to be the ones looking back in a decade and planning repatriation from AI.