
Developers strive for efficiency. We want to ship high-quality code quickly, collaborate seamlessly with teammates, and create impactful features.

With a flood of notifications and constant context switching, however, it can be challenging to zoom out and objectively assess the health and velocity of your projects. Without visibility, it's tough to optimize.

This is where tracking PR stats comes in handy.

Having the right PR metrics at your fingertips helps you spot pain points in your process and answer questions like:

  • Do certain pull requests take longer to merge than others?

  • Do changes get bounced back and forth endlessly before landing?

Once you identify the bottlenecks, you can correct and optimize workflows based on quantitative data. The result? Higher quality code, shipped faster.

While GitHub’s native pull request workflow doesn’t make these insights easy to track, don’t lose hope. There are several ways to get detailed PR insights at your fingertips.

Let’s dive into what pull request metrics to track, how to get those PR insights, and how you can make the most of them for your workflow.

Before you begin measuring data, you need to know which metrics will make the most impact. After studying opinions from engineering leaders across various platforms, we’ve identified a few PR metrics that teams commonly start with.

  • Time to first review: How long until someone looks at your new PR?

  • Publish to merge time: How long is your typical PR lifecycle from opening to merging?

  • Review cycles until merge: How many cycles of back-and-forth before landing code?

  • Throughput: Volume of PRs merged over time.

These help you pinpoint progress, participation, and bottlenecks over time.
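As a rough illustration, here is how these four metrics could be computed from basic PR records. This is a sketch with made-up data; the field names (`opened`, `first_review`, `merged`, `review_cycles`) are hypothetical and do not match GitHub's API schema.

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records; field names are illustrative, not GitHub's schema.
prs = [
    {"opened": "2024-01-01T09:00:00Z", "first_review": "2024-01-01T13:00:00Z",
     "merged": "2024-01-03T09:00:00Z", "review_cycles": 2},
    {"opened": "2024-01-02T09:00:00Z", "first_review": "2024-01-02T10:00:00Z",
     "merged": "2024-01-02T17:00:00Z", "review_cycles": 1},
]

def hours(start, end):
    """Elapsed hours between two ISO-8601 timestamps."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600

time_to_first_review = median(hours(p["opened"], p["first_review"]) for p in prs)
publish_to_merge = median(hours(p["opened"], p["merged"]) for p in prs)
review_cycles = median(p["review_cycles"] for p in prs)
throughput = len(prs)  # merged PRs in the measurement window

print(time_to_first_review, publish_to_merge, review_cycles, throughput)
```

The hard part in practice isn't the arithmetic; it's assembling records like these from raw API data, which is what the rest of this section covers.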

Tracking pull request statistics on GitHub provides benefits beyond surface-level task tracking:

  • It offers deep insight into your team's work patterns.

  • It aids in identifying roadblocks early.

  • It assists with workload estimation by providing historical data.

Additionally, it fosters proactive discussions around code quality and team collaboration—two immensely valuable topics for any software team.

GitHub provides developers access to repository data through its REST API and GraphQL API. 

At first glance, these interfaces offer everything needed to pull raw PR data and calculate custom metrics. 

However, while the foundations are there, building a metrics pipeline on top of GitHub's API is an uphill battle. You need to invest development resources in building usable insights.

The main challenges include:

  • Scattered data: Relevant PR data lives across various endpoints: pull requests, issue comments, review comments, etc. This means making multiple API calls and stitching the results together.

  • No out-of-the-box metrics: While GitHub returns activity log data, it does not synthesize or calculate metrics like review cycles. The developer needs to craft business logic for the required metrics here.

  • Pagination complexity: API responses return paginated data sets. To retrieve comprehensive metrics, developers must follow the subsequent page links and aggregate all records.

  • Visualization effort: Raw JSON data isn't useful alone. Developers looking to spot trends must plot time series charts and create shareable reports.

  • Maintenance overhead: Any custom GitHub metrics script breaks easily with API changes and needs upkeep. As new use cases arise, holes in the dataset get uncovered.
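To make the pagination challenge concrete, here is a minimal sketch of the aggregation loop. The `fetch_page` function is a hypothetical stand-in for whatever HTTP client you use; in practice, the next-page URL comes from GitHub's `Link` response header.

```python
def collect_all(url, fetch_page):
    """Aggregate records across paginated API responses.

    fetch_page(url) is assumed to return (records, next_url), where
    next_url is taken from GitHub's Link header (None on the last page).
    """
    records = []
    while url:
        page, url = fetch_page(url)
        records.extend(page)
    return records

# Stub fetcher simulating two pages of results:
pages = {
    "/pulls?page=1": ([1, 2], "/pulls?page=2"),
    "/pulls?page=2": ([3], None),
}
all_records = collect_all("/pulls?page=1", pages.get)
print(all_records)  # [1, 2, 3]
```

Separating the loop from the fetcher like this also makes the aggregation logic testable without hitting the network.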

Let’s look at the steps in building customized PR metrics using GitHub’s API. Do note that these are not full code snippets and are only meant as a direction to help set up your custom PR metrics. 

Begin by generating a personal access token to authenticate against the API. GitHub has removed the old password-based authorizations endpoint, so create the token in the GitHub UI instead, under Settings → Developer settings → Personal access tokens, and grant it the repo scope. You can then verify the token works:

Terminal
curl -H "Authorization: token <token>" https://api.github.com/user

Keep the token value handy for all subsequent requests.

The GitHub API has hundreds of endpoints spanning REST and GraphQL. For PR metrics, the REST pull request endpoints are a natural starting point.

Use curl commands to call endpoints and extract metrics-related data. For example, to get PR metadata:

Terminal
curl -H "Authorization: token <token>" https://api.github.com/repos/:owner/:repo/pulls

Transform the JSON output into formats suitable for calculations and visualization. If you save the response above to a file such as pull_requests.json, the Python json library makes this easy.

Python
import json

with open('pull_requests.json') as f:
    data = json.load(f)

pr_created_times = [pr['created_at'] for pr in data]

Once the data is stored in a usable data type, the onus falls on the developer to add custom calculations, data ranges, and reporting capabilities.

Python
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%SZ"
pr = data[0]  # one PR record from the previous step
created = datetime.strptime(pr['created_at'], fmt)
merged = datetime.strptime(pr['merged_at'], fmt)  # merged_at is null for unmerged PRs
delta = merged - created
print(f"{delta.days} days")  # publish to merge time
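Extending these datetime calculations across all PRs, you would typically aggregate per-PR durations into a summary statistic such as a median. A sketch with illustrative timestamp pairs (real values would come from the API data):

```python
from datetime import datetime
from statistics import median

fmt = "%Y-%m-%dT%H:%M:%SZ"

# Illustrative (created_at, merged_at) pairs; real values come from the API.
pr_times = [
    ("2024-03-01T10:00:00Z", "2024-03-02T10:00:00Z"),
    ("2024-03-01T10:00:00Z", "2024-03-04T10:00:00Z"),
    ("2024-03-01T10:00:00Z", "2024-03-08T10:00:00Z"),
]

merge_days = [
    (datetime.strptime(merged, fmt) - datetime.strptime(created, fmt)).days
    for created, merged in pr_times
]
print(median(merge_days))  # 3
```

A median is usually preferable to a mean here, since a single long-lived PR can otherwise skew the number badly.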

Use graphing libraries to chart trends over time based on the data you pull from GitHub API.

Python
import matplotlib.pyplot as plt

weeks = []
pr_counts = []
# populate weeks and pr_counts from the PR data

plt.title("Weekly PRs")
plt.plot(weeks, pr_counts)
plt.savefig('pr_trends.png')
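The weeks and pr_counts lists above still need to be derived from the PR data. One sketch, bucketing merged PRs by ISO week number (the timestamps are illustrative):

```python
from collections import Counter
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%SZ"

# Illustrative merged_at timestamps pulled from the API data.
merged_at = [
    "2024-03-04T10:00:00Z",  # ISO week 10
    "2024-03-05T12:00:00Z",  # ISO week 10
    "2024-03-11T09:00:00Z",  # ISO week 11
]

counts = Counter(datetime.strptime(t, fmt).isocalendar()[1] for t in merged_at)
weeks = sorted(counts)
pr_counts = [counts[w] for w in weeks]
print(weeks, pr_counts)  # [10, 11] [2, 1]
```

Note that bare ISO week numbers collide across year boundaries; for multi-year data you would key on (year, week) pairs instead.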

The above workflow requires significant effort end to end, which is why many teams look for alternatives.

While GitHub's API enables pulling raw activity data, all post-processing sits squarely on the developer. This requires significant initial investment, and ongoing maintenance is needed to fix broken endpoints, cover new use cases, and update visuals. 

There has to be an easier way to unlock PR insights.

Graphite Insights provides these analytics out of the box.

Once the Graphite integration is set up on your repositories, metrics are automatically compiled in the background without extra work.

Some of the key metrics provided include:

  • PRs merged: Total pull requests merged over a given period.

  • Publish to merge time: Median time from when a PR is published to when it is merged.

  • Wait time to first review: Median time a PR waits from publish to its first review.

  • Review cycles until merge: Median number of review rounds a PR goes through before merging.

These metrics are pre-aggregated at organization and individual levels, powered by Graphite's activity timeline. It tracks every event across PRs, including reviews, comments, state changes, etc.

To access the metrics, log into your Graphite account and navigate to the Insights tab. You can then:

  • Switch between preset time ranges like last week or last month, or pick custom dates.

  • Filter by repositories and contributors.

  • Save and share custom metric views.

  • Explore the charts and the underlying numbers.

Under the hood, Graphite handles all the complexity of tracking activity, computing metrics, visualizing trends, and refreshing the reports automatically as new data comes in.

The benefits are instantly available and shareable metrics that provide enhanced visibility into developer productivity, collaboration efficiency, and project health—with no coding required.

Getting actionable insights from pull request data shouldn't be a heavy lift. While GitHub provides activity logs and comprehensive API access, transforming the data into meaningful metrics requires stitching data sources, writing scripts to process and visualize trends, and maintaining your custom solution.

Graphite does things differently.

Graphite's seamless GitHub integration and purpose-built analytics engine help your team skip straight to the metrics that matter—no duct tape required. 

You get out-of-the-box visibility into developer productivity, project cadence, and collaboration friction that helps spot bottlenecks early. The shared, interactive reports empower data-informed decisions to keep teams running fast.

So why settle for makeshift metrics or flying blind? 

Graphite offers next-generation pull request analytics that just work. That simplicity means more time building and less time reporting. 

So, when it comes to understanding what drives shipping velocity, Graphite delivers the insights that engineering teams need to make strategic decisions. 

The only question is: why wait to start benefiting from that productivity?