Despite implementing various measurement frameworks, many engineering leaders fall into the trap of measuring what's easy rather than what's meaningful.
Most measurement approaches treat engineering as a purely technical practice that can be optimized through technical metrics alone. But engineering organizations are sociotechnical systems, where human collaboration, communication patterns, and environmental factors are just as important as code deployment statistics.
Engineering organizations already have plenty of data — but they lack a cohesive framework to interpret that data and drive meaningful change.
Uplevel's WAVE Framework can transform your engineering metrics from mere measurements into actionable insights that drive real improvement.
What’s the problem with traditional engineering KPIs?
Many organizations collect metrics without a clear understanding of what they're trying to achieve. Traditional KPIs often create an illusion of control, failing engineering leaders in several critical ways:
- Too much focus on individual output: Engineering leaders frequently track metrics like PR counts or story points completed. But these metrics are poor proxies for productivity and can lead to detrimental behaviors like artificially inflating PR sizes or submitting unnecessary code changes.
- Overreliance on lagging metrics: Frameworks like DORA give you valuable insights, but these are backward-looking measurements. For engineering leaders under pressure to improve future performance, understanding that deployment frequency was low last quarter offers limited actionable guidance on what to change now.
- Overlooking social dynamics: Research has demonstrated that team collaboration patterns are often stronger predictors of success than individual technical skills. Yet most organizations focus exclusively on technical metrics while neglecting team dynamics.
- Little correlation between metrics and business value: Many organizations measure what's easy to track rather than what drives tangible business outcomes. As a result, they optimize for metrics that don't have a meaningful impact on the organization's success.
- Limited ability to act on the data: Research by Dr. Nicole Forsgren (co-author of Accelerate) highlights that without contextual information about organizational structure, team interactions, and environmental factors, technical metrics alone are insufficient for diagnosing performance variations across teams.
These limitations leave engineering leaders with plenty of data but insufficient guidance on what to change.
This is where the WAVE Framework comes in. Unlike frameworks that focus narrowly on deployment statistics (DORA) or that provide theoretical models without clear measurement approaches (SPACE), WAVE addresses the full spectrum of factors that influence engineering effectiveness. Most importantly, it recognizes that these factors are interconnected: improvements in one area cascade through the entire system.
WAVE is based on our data science findings and deep experience partnering with engineering leaders. Each category below offers a small group of dimensions and metrics that provide opportunities for actionable intervention. WAVE provides manageable clarity while still addressing the complexity of a sociotechnical system.
The WAVE Framework consists of four interconnected components (see the sketch in code after this list):
- Ways of Working (W): Measures how effectively teams collaborate, their overall health, and their ability to focus on deep work
- Alignment (A): Captures how well engineering efforts align with business objectives through planning, resource allocation, and user feedback
- Velocity (V): Tracks the flow of work through the system, including cycle times, throughput, and lead times
- Environment Efficiency (E): Evaluates the quality, deployment process, and overall friction in the engineering environment
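To make the taxonomy concrete, here is a minimal, hypothetical sketch of the four dimensions as a data structure. The dimension names come from the list above, while the metric keys and the structure itself are our illustration, not Uplevel's API:

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    """One WAVE dimension and the example metrics it groups."""
    name: str
    metrics: list[str]

# Illustrative only: metric names are drawn from the sections below.
WAVE = {
    "W": Dimension("Ways of Working", ["team_health", "deep_work", "collaboration"]),
    "A": Dimension("Alignment", ["resource_allocation", "planning_effectiveness", "user_feedback_cycle"]),
    "V": Dimension("Velocity", ["pr_cycle_time", "throughput", "wip", "epic_lead_time"]),
    "E": Dimension("Environment Efficiency", ["quality", "dora", "friction", "flow_efficiency"]),
}

for key, dim in WAVE.items():
    print(f"{key} ({dim.name}): {', '.join(dim.metrics)}")
```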
Instead of treating metrics in isolation, WAVE recognizes the interconnections between different aspects of engineering work. Engineering is not just coding — it's all your team's interactions with the product, users, and cross-functional teams.
The WAVE Framework creates a diagnostic map that helps engineering leaders understand the relationship between different dimensions of performance, enabling targeted improvements rather than isolated optimizations.
Let’s look at the specific engineering KPIs that WAVE measures.
1. Team health
Team health metrics provide a consolidated view of engineering teams' psychological safety, collaboration effectiveness, and overall engagement. This approach is grounded in Google's Project Aristotle research, which identified psychological safety as the most important factor in team effectiveness.
In software engineering specifically, a 2024 study in Empirical Software Engineering found that teams with established psychological safety were more invested in software quality, demonstrating "collective problem-solving, pooling their collective intellectual efforts and experience to tackle quality-related challenges."
When you track team health over time, you can identify early warning signs of burnout, disengagement, or collaboration challenges before they impact delivery.
2. Deep work
Deep work metrics track the average number of daily uninterrupted hours developers can dedicate to focused coding time. Cal Newport's Deep Work demonstrates the critical importance of uninterrupted focus for complex cognitive tasks like software development.
This concept is further supported by research from the University of California, Irvine, which found that after an interruption, it takes knowledge workers an average of 23 minutes to return to their original task. For software engineers, context switching is particularly costly — frequent interruptions lead to increased defect rates and longer completion times for complex programming tasks.
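To illustrate how a deep work metric could be derived in practice, the sketch below computes uninterrupted focus blocks from a day's meeting schedule. The two-hour threshold and the calendar-based input are our assumptions, not a prescribed definition:

```python
from datetime import datetime, timedelta

def deep_work_hours(workday_start, workday_end, meetings, min_block=timedelta(hours=2)):
    """Sum the gaps between meetings that are long enough to count as deep work."""
    total = timedelta()
    cursor = workday_start
    for start, end in sorted(meetings):
        gap = start - cursor
        if gap >= min_block:
            total += gap
        cursor = max(cursor, end)
    if workday_end - cursor >= min_block:  # focus time after the last meeting
        total += workday_end - cursor
    return total.total_seconds() / 3600

# Hypothetical workday with two meetings.
day_start = datetime(2024, 5, 6, 9)
day_end = datetime(2024, 5, 6, 17)
meetings = [
    (datetime(2024, 5, 6, 10), datetime(2024, 5, 6, 10, 30)),  # standup
    (datetime(2024, 5, 6, 14), datetime(2024, 5, 6, 15)),      # design review
]
print(f"Deep work hours: {deep_work_hours(day_start, day_end, meetings):.1f}")  # 5.5
```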
3. Collaboration
Collaboration metrics assess how effectively teams share knowledge, provide feedback, and support each other's work. How teams collaborate can have an outsized impact on delivery: McKinsey describes a company that switched to cross-functional teams halfway through a project. Enabling “more rapid exchange of information, faster requirements clarifications, and speedier problem solving,” this change in ways of working resulted in a 45% decrease in code defects, less rework, and a 20% quicker time to market.
Microsoft's research on remote work during the pandemic also highlighted the critical importance of deliberate collaboration practices for maintaining engineering productivity.
4. Resource allocation
Resource allocation metrics track the actual distribution of engineering effort across new value creation, technical debt, and maintenance work. Unlike self-reported time allocations, data-driven measurements provide an objective view of where engineering time is actually spent.
In most organizations, developers believe they spend more time on new features than they actually do when their work is objectively analyzed. Our own research puts the average time spent on new value creation at just under 20% — one day out of five.
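A minimal sketch of what a data-driven allocation view might look like, assuming each completed work item already carries a category label (the labels and input shape are hypothetical):

```python
from collections import Counter

# Hypothetical category labels on completed work items, e.g. pulled from an issue tracker.
completed_items = [
    "new_value", "maintenance", "tech_debt", "maintenance", "new_value",
    "maintenance", "tech_debt", "maintenance", "maintenance", "tech_debt",
]

counts = Counter(completed_items)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n / total:.0%}")
```

With the sample data above, new value creation lands at 20%, in line with the one-day-out-of-five figure mentioned earlier.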
5. Planning effectiveness
Planning effectiveness is a key enabler of value delivery in software engineering and product organizations because it reflects how well teams understand their work, capacity, and alignment with evolving priorities.
Metrics such as sprint completion rates (often referred to as the "say-do ratio") and requirements stability serve as proxies for this understanding. When teams consistently do what they say they’ll do, it suggests a healthy balance between ambition and realism. Likewise, stable requirements indicate clarity in what needs to be built — minimizing churn and rework that delay value delivery.
As always, however, context matters. These metrics should not be treated as success criteria on their own. A high sprint completion rate, for instance, could mask underlying issues if teams are playing it safe by undercommitting, or if they are delivering work that is no longer relevant due to shifting priorities. Instead, planning effectiveness is a signal to detect misalignments in team capacity, requirement clarity, or cross-functional communication. When planning metrics fluctuate significantly, it may indicate that teams lack the information or autonomy needed to make reliable commitments, which can delay or derail the delivery of customer value.
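As a concrete illustration, the say-do ratio reduces to completed work over committed work per sprint. A minimal sketch, with hypothetical story-point figures:

```python
def say_do_ratio(committed_points: float, completed_points: float) -> float:
    """Fraction of committed work actually delivered in a sprint."""
    if committed_points == 0:
        return 0.0
    # Cap at the commitment so mid-sprint additions don't inflate the ratio past 1.0.
    return min(completed_points, committed_points) / committed_points

# Hypothetical sprint history: (committed, completed) story points.
sprints = [(30, 27), (28, 28), (32, 20), (30, 29)]
for i, (committed, completed) in enumerate(sprints, 1):
    print(f"Sprint {i}: say-do ratio {say_do_ratio(committed, completed):.0%}")
```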
6. User feedback cycle
How quickly do teams receive and incorporate user feedback after releasing features? Short user feedback cycles are a leading indicator of software engineering alignment to value because they create a continuous loop of validation between what is being built and what users actually need.
When feedback is rapid and frequent, teams can quickly confirm whether their work delivers meaningful outcomes, enabling faster course corrections and reducing the risk of building features that customers don’t use. This responsiveness ensures that engineering efforts remain tightly coupled with business priorities, ultimately leading to higher-impact deliverables, better resource utilization, and increased customer satisfaction.
We find that it’s one of the most underrated metrics: if your team doesn't get feedback, or gets it too late, information is probably getting stuck between departments.
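One simple way to quantify the loop, sketched below, is the elapsed time from a feature's release to the first piece of user feedback on it; the record format is hypothetical:

```python
from datetime import date
from statistics import median

# Hypothetical records: (feature, release date, first user feedback date).
features = [
    ("export-v2", date(2024, 3, 1), date(2024, 3, 4)),
    ("sso-login", date(2024, 3, 10), date(2024, 4, 2)),
    ("dark-mode", date(2024, 3, 15), date(2024, 3, 18)),
]

cycle_days = [(feedback - release).days for _, release, feedback in features]
print(f"Median user feedback cycle: {median(cycle_days)} days")
```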
7. Cycle time and throughput
PR cycle time measures how long it takes for pull requests to move from creation to deployment, while PR throughput tracks the volume of completed work.
When evaluating metrics like cycle time and throughput, it's tempting to compare teams against each other. However, this approach often leads to misleading conclusions because teams operate under different contexts — varying codebases, workflows, review cultures, and priorities. These factors make true apples-to-apples comparisons across teams nearly impossible.
Instead, comparing a team’s current cycle time against its own historical performance offers a far better perspective. This approach allows leaders to:
- Control for context: Each team’s structure, domain complexity, and work patterns remain relatively consistent over time, making internal trends more meaningful.
- Identify real improvement: By comparing against its own baseline, a team can detect genuine progress or regression and understand the impact of process changes or tooling.
- Encourage healthy behaviors: Cross-team comparisons can create unnecessary pressure to “compete” on metrics, potentially leading to gaming or unhealthy shortcuts. Longitudinal tracking fosters continuous improvement within the reality of each team’s workflow.
- Make metrics actionable: Teams are more likely to trust and act on data that reflects their own experience rather than abstract benchmarks from other teams.
While some teams look at metrics like PR cycle time at the individual level, the most valuable insights come when these metrics are aggregated at the team level. This shifts the focus away from individual performance and toward systemic improvements that benefit the entire team’s velocity and collaboration.
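A minimal sketch of that longitudinal approach: compare the current period's team-level median cycle time against the team's own trailing baseline instead of against other teams (the weekly input and baseline window are our assumptions):

```python
from statistics import median

# Hypothetical weekly team-level median PR cycle times, in hours.
weekly_cycle_time = [40, 38, 44, 41, 36, 30]

baseline = median(weekly_cycle_time[:-1])  # trailing weeks form the team's own baseline
current = weekly_cycle_time[-1]
change = (current - baseline) / baseline
print(f"Current: {current}h vs own baseline {baseline}h ({change:+.0%})")
```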
8. Work in progress (WIP)
Work-in-progress metrics track how many concurrent items a team works on at any time.
Organizations often hesitate to track WIP because more work in flight feels like more value being created, but teams can sense when they're getting swamped with concurrent demands. When leaders fail at capacity planning, it's often because they don't account for the constraints WIP imposes. High WIP levels mean more context switching, which in turn leads to decreased quality and productivity.
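A minimal sketch of counting concurrent WIP from item start and finish dates (the input shape is hypothetical):

```python
from datetime import date, timedelta

# Hypothetical in-progress intervals: (started, finished) per work item.
items = [
    (date(2024, 5, 1), date(2024, 5, 7)),
    (date(2024, 5, 2), date(2024, 5, 4)),
    (date(2024, 5, 3), date(2024, 5, 10)),
    (date(2024, 5, 6), date(2024, 5, 9)),
]

day = date(2024, 5, 1)
while day <= date(2024, 5, 10):
    # An item counts as WIP on days it has started but not yet finished.
    wip = sum(1 for start, end in items if start <= day < end)
    print(f"{day}: WIP = {wip}")
    day += timedelta(days=1)
```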
9. Lead time
Epic lead time measures how long it takes to deliver meaningful business value from initial concept to production. Lead time is distinct from cycle time in that it captures the entire journey from concept to customer value, not just the development portion.
This metric helps engineering leaders identify systemic bottlenecks in the end-to-end value stream — which is important, as optimizing individual steps without addressing the entire flow often leads to local efficiencies but global ineffectiveness. Some hidden work categories that other metrics might miss include requirements clarification, cross-team dependencies, and approval workflows.
Engineering leaders should track epic lead time variability alongside mean performance to identify process instability and planning risks. High variability signals unpredictable delivery capability, making sprint commitments unreliable and resource allocation inefficient. This data enables targeted process improvements. It also helps engineering leaders remove organizational impediments that developers can't address on their own.
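A minimal sketch of tracking spread alongside the mean; the p90 and coefficient-of-variation choices here are ours, not a prescribed definition:

```python
from statistics import mean, stdev, quantiles

# Hypothetical epic lead times in days, concept to production.
lead_times = [21, 35, 28, 90, 25, 31, 60, 27]

avg = mean(lead_times)
cv = stdev(lead_times) / avg           # relative variability (coefficient of variation)
p90 = quantiles(lead_times, n=10)[-1]  # 90th percentile
print(f"Mean: {avg:.0f}d  p90: {p90:.0f}d  CV: {cv:.2f}")
```

A high coefficient of variation or a p90 far above the mean is the "unpredictable delivery" signal described above, even when the average looks healthy.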
10. Quality metrics
Quality metrics include defects by type (bugs vs. escaped defects) and defects found by stage in the development process. These metrics focus on outcomes rather than granular processes or raw output. For example, if your team's code has too many bugs or isn't working properly, you need to understand why that's happening in the first place. Is it because of high WIP, limited time, or too much context switching?
Detecting defects earlier in the development process reduces the cost of remediation by orders of magnitude — defects found in production can cost 100x more to fix than those found during code review.
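A minimal sketch of slicing defects by the stage where they were found, with illustrative relative-cost weights (the 1/10/100 weighting echoes the order-of-magnitude claim above but is not a measured constant):

```python
from collections import Counter

# Hypothetical defect records: the stage where each defect was found.
defects = ["code_review", "qa", "code_review", "production", "qa", "qa", "production"]

# Illustrative relative remediation costs, with code review as the baseline.
relative_cost = {"code_review": 1, "qa": 10, "production": 100}

by_stage = Counter(defects)
for stage, n in by_stage.items():
    print(f"{stage}: {n} defects, weighted cost {n * relative_cost[stage]}")
```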
11. DORA metrics within a broader context
DORA metrics (deployment frequency, lead time for changes, change failure rate, and time to restore service) provide valuable deployment pipeline insights, but they're most valuable when contextualized within the broader WAVE framework.
The DORA research team's own annual State of DevOps reports consistently show that technical practices alone are insufficient for high performance. Their research identifies organizational factors, leadership, and culture as critical elements that enable technical excellence. The 2023 report specifically highlighted that elite performers excel not just in deployment metrics but in how they build healthy engineering cultures.
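For reference, a minimal sketch of deriving two of the four DORA metrics from a deployment log (the log format is hypothetical):

```python
from datetime import date

# Hypothetical deployment log: (deploy date, whether the change caused a failure).
deploys = [
    (date(2024, 5, 1), False),
    (date(2024, 5, 2), False),
    (date(2024, 5, 3), True),
    (date(2024, 5, 6), False),
    (date(2024, 5, 8), False),
]

days = (deploys[-1][0] - deploys[0][0]).days + 1
deploy_frequency = len(deploys) / days
change_failure_rate = sum(failed for _, failed in deploys) / len(deploys)
print(f"Deploy frequency: {deploy_frequency:.2f}/day")
print(f"Change failure rate: {change_failure_rate:.0%}")
```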
12. Friction and flow
Friction scores measure developer-reported friction in the development process, while flow efficiency calculates the ratio of waiting time to total cycle time. These metrics help identify organizational and process bottlenecks that slow delivery.
Friction is usually measured qualitatively through microsurveys, while flow efficiency is measured by tracking the percentage of the cycle spent waiting. In knowledge work, including software development, items typically spend 70-85% of their time waiting rather than being actively worked on, a flow efficiency of just 15-30%. Organizations applying Lean flow principles to software development have demonstrated significant improvements in both delivery speed and quality.
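Flow efficiency itself is a simple ratio of active time to total cycle time; a minimal sketch:

```python
def flow_efficiency(active_hours: float, waiting_hours: float) -> float:
    """Share of total cycle time spent actively working rather than waiting."""
    total = active_hours + waiting_hours
    return active_hours / total if total else 0.0

# Hypothetical item: 12 hours of active work inside an 80-hour cycle.
active, waiting = 12, 68
print(f"Flow efficiency: {flow_efficiency(active, waiting):.0%}")  # 15%
```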
Implementing the WAVE framework doesn’t stop at collecting better metrics. The real change lies in how engineering organizations understand and improve. Sustainable transformation requires both measurement systems and enabling mechanisms for improvement.
“Having data helps the conversations I have with teams. 'You didn't work on these goals this quarter. Why was that? What can we do to increase the time you're delivering value?' Then we can take action. So that's a lot of the work we're doing with Uplevel.”
Francisco Trindade, VP Engineering at Braze
As you consider your own organization's effectiveness, ask yourself: Do you have visibility into all four WAVE dimensions? Can you identify which dimensions currently limit your performance? And most importantly, do you have a methodology to turn those insights into sustainable improvement?
As engineering systems grow more complex, the organizations that succeed recognize effectiveness as an ongoing practice — one that requires attention to the technical, social, and environmental realities of how teams work. When engineering leaders shift from isolated metrics to the integrated WAVE approach, they transform measurement from a reporting exercise into a powerful catalyst for sustainable improvement.
Ready to assess your team's engineering effectiveness?
Schedule a demo today and find out how leaders use Uplevel to build top engineering organizations.
