
Developer productivity metrics

Kenny DuMez
Graphite software engineer

Developer productivity metrics can be useful for identifying bottlenecks and inefficiencies in engineering processes and practices. Rather than measuring an individual developer's productivity, these metrics offer a more holistic view of the development process, from coding and collaboration to review and deployment. This guide explains the more traditional developer productivity frameworks, DORA and SPACE, as well as several pull-request-level metrics that can help in managing and improving software development workflows.

DORA (DevOps Research and Assessment) metrics are a set of four key performance indicators that help organizations understand the effectiveness of their DevOps practices. These metrics are:

  1. Deployment frequency: Measures how often an organization successfully releases to production. Higher frequencies typically indicate more mature DevOps practices.
  2. Lead time for changes: The amount of time it takes for a commit to get into production. Shorter lead times can indicate streamlined, efficient development and deployment processes.
  3. Change failure rate: The percentage of deployments causing a failure in production. Lower change failure rates suggest more reliable delivery processes.
  4. Time to restore service: The time it takes to recover from a failure in production. Faster recovery times are indicative of robust incident response and better overall resilience.

Organizations use DORA metrics to benchmark and improve their software delivery performance. These metrics encourage teams to deploy frequently, ship quality code, and recover quickly from production failures.
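
As a rough illustration, the sketch below computes the four DORA metrics from a list of deployment records in Python. The record fields (deployed_at, commit_created_at, failed, restored_at) are assumptions made for the example; in practice they would map onto whatever your deployment pipeline and incident tooling actually record.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import List, Optional


@dataclass
class Deployment:
    """One production deployment (hypothetical record shape)."""
    deployed_at: datetime                   # when the release reached production
    commit_created_at: datetime             # when the earliest included commit was authored
    failed: bool                            # whether this deployment caused a production failure
    restored_at: Optional[datetime] = None  # when service was restored, if it failed


def dora_metrics(deployments: List[Deployment], window_days: int = 30) -> dict:
    """Compute the four DORA metrics over a window of deployment records."""
    if not deployments:
        return {}

    # 1. Deployment frequency: deployments per day over the window.
    deployment_frequency = len(deployments) / window_days

    # 2. Lead time for changes: median time from commit to production.
    lead_time = median(d.deployed_at - d.commit_created_at for d in deployments)

    # 3. Change failure rate: share of deployments that caused a failure.
    failures = [d for d in deployments if d.failed]
    change_failure_rate = len(failures) / len(deployments)

    # 4. Time to restore service: median recovery time for failed deployments.
    restore_times = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    time_to_restore = median(restore_times) if restore_times else None

    return {
        "deployment_frequency_per_day": deployment_frequency,
        "lead_time_for_changes": lead_time,
        "change_failure_rate": change_failure_rate,
        "time_to_restore_service": time_to_restore,
    }
```

Medians are used here rather than averages because a handful of outliers can otherwise dominate lead time and recovery time figures.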

SPACE is a framework that encompasses a broader range of dimensions to assess developer productivity, beyond just the delivery process. SPACE is an acronym for:

  • Satisfaction and well-being: How happy and healthy are the developers in their work environment?
  • Performance: Traditional output indicators, such as code commits and pull requests merged.
  • Activity: Day-to-day development activity, such as code reviews and coding time.
  • Communication and collaboration: Measures the effectiveness of how developers communicate and collaborate within their teams and with other stakeholders.
  • Efficiency and flow: Looks at how efficiently teams are able to work without disruptions, focusing on flow states and the reduction of bottlenecks.

SPACE metrics provide a more holistic view of developer productivity, emphasizing not just output but also team dynamics, individual satisfaction, and operational efficiency. They help organizations identify areas where interventions can improve both the happiness of developers and the effectiveness of their work processes. These metrics support a more sustainable and humane approach to managing software development by emphasizing well-being and team collaboration as key drivers of productivity.
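
Because SPACE deliberately mixes qualitative signals (surveys) with quantitative ones (activity and collaboration data), teams often collect it as a periodic per-team snapshot rather than a single number. The sketch below is one illustrative way to structure such a snapshot in Python; the field names and thresholds are invented for the example and are not part of the SPACE framework itself.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SpaceSnapshot:
    """One team's SPACE reading for a reporting period (illustrative fields only)."""
    satisfaction_score: float      # satisfaction and well-being: average of a 1-5 developer survey
    prs_merged: int                # performance: merged pull requests in the period
    reviews_given: int             # activity: code reviews completed in the period
    avg_reviewers_per_pr: float    # communication and collaboration
    focus_hours_per_week: float    # efficiency and flow: uninterrupted coding time


def flag_risks(snapshot: SpaceSnapshot) -> List[str]:
    """Surface dimensions that may need attention; thresholds are arbitrary examples."""
    risks = []
    if snapshot.satisfaction_score < 3.0:
        risks.append("satisfaction: survey scores trending low")
    if snapshot.avg_reviewers_per_pr < 1.0:
        risks.append("collaboration: PRs merging without review")
    if snapshot.focus_hours_per_week < 10:
        risks.append("efficiency and flow: little uninterrupted focus time")
    return risks
```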

Beyond DORA and SPACE, several pull-request-level metrics offer a more granular view of day-to-day engineering work. (A rough computation sketch follows this list.)

Total PRs merged

  • Measure of output: Total PRs merged is a straightforward metric that indicates the volume of work being completed and helps in understanding the overall activity level of a team.
  • Project progress: High numbers of merged PRs often reflect periods of intense productivity and frequently align with significant project milestones.

PRs merged per engineer

  • Individual contribution: Tracks the number of PRs each engineer merges, offering insight into personal productivity and workload management.
  • Balanced work distribution: Helps identify disparities in workload among team members, which is crucial for ensuring a balanced distribution of tasks.

Code review participation

  • Collaborative engagement: Measures how engaged team members are in the peer review process, which is important for maintaining code quality and team cohesion.
  • Mentorship and knowledge sharing: High engagement in code reviews often correlates with better mentorship and knowledge dissemination within the team.

Review response time

  • Efficiency in collaboration: Tracks how long PRs wait for initial feedback, which is crucial for maintaining momentum in the development cycle.
  • Indicator of blockages: Long response times point to bottlenecks in the review process and may highlight communication or resource allocation issues.

Wait time to first review

  • Project flow efficiency: Measures the time from when a PR is opened until it receives its first review. Short wait times typically indicate an efficient review process.
  • Team responsiveness: Reflects the team's ability to quickly engage with new work, which can significantly affect project timelines.

PR size

  • Scope of changes: Provides insight into the complexity and size of changes. Smaller, more frequent PRs are generally easier to manage and review than larger ones.
  • Risk assessment: Large changes carry a higher risk of bugs and integration issues and so require more thorough review and testing.

Cycle time

  • Speed of delivery: Measures the total time for changes to go from code complete to merged into the main branch, a direct indicator of the speed of the development lifecycle.
  • Process efficiency: Helps identify delays in testing, review, or deployment that could be streamlined.

Review cycles

  • Quality control: Counts the number of iterations a PR goes through before merging. Frequent cycles can suggest thorough quality checks or issues with initial submissions.
  • Feedback efficiency: Multiple cycles may also reflect the effectiveness of feedback and how readily developers integrate requested changes.
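
As a rough sketch, assuming PR records exported from a code host with fields like opened_at, first_review_at, merged_at, lines_changed, and review_rounds (all hypothetical names), several of the metrics above can be summarized as follows. Cycle time is approximated here as open-to-merge, since an explicit "code complete" timestamp is rarely available.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Dict, List, Optional


@dataclass
class PullRequest:
    """Hypothetical shape of PR data exported from a code host."""
    author: str
    opened_at: datetime
    first_review_at: Optional[datetime]   # None if the PR has not been reviewed yet
    merged_at: Optional[datetime]         # None if the PR is still open
    lines_changed: int
    review_rounds: int                    # review/revision cycles before merge


def pr_metrics(prs: List[PullRequest]) -> dict:
    """Summarize throughput and latency metrics from a list of PRs."""
    merged = [p for p in prs if p.merged_at]

    # Total PRs merged and the per-engineer breakdown.
    per_engineer: Dict[str, int] = {}
    for p in merged:
        per_engineer[p.author] = per_engineer.get(p.author, 0) + 1

    # Wait time to first review: PR opened -> first review received.
    waits = [p.first_review_at - p.opened_at for p in prs if p.first_review_at]

    # Cycle time, approximated as opened -> merged since a "code complete"
    # timestamp is usually not present in exported data.
    cycle_times = [p.merged_at - p.opened_at for p in merged]

    return {
        "total_prs_merged": len(merged),
        "prs_merged_per_engineer": per_engineer,
        "median_wait_to_first_review": median(waits) if waits else None,
        "median_pr_size_lines": median(p.lines_changed for p in merged) if merged else None,
        "median_cycle_time": median(cycle_times) if cycle_times else None,
        "median_review_rounds": median(p.review_rounds for p in merged) if merged else None,
    }
```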

Understanding and monitoring these metrics can provide valuable insights into the health and efficiency of a software development team. They highlight not only the productivity of individual developers but also the collaborative dynamics and operational bottlenecks that can affect overall project timelines and outcomes. By analyzing these metrics, teams can make informed decisions about process improvements, training needs, and resource allocation to enhance both productivity and the quality of the final software products.

For more information on tracking and measuring these metrics, see the Graphite Insights page.
