
It’s no secret that monorepos are great for developer velocity, which is why so many engineering organizations are moving to them. But when more developers are shipping code in the same repo, it’s easier to step on each other’s toes. In the best case, developers experience more merge conflicts and need to constantly rebase their changes. In the worst case, it means broken deployments and user-facing outages.

The solution is a merge queue, which merges your changes in a first-in-first-out order — resulting in more predictable releases, more reliable software, and happier developers. The problem we’ve heard from teams using them is that long CI times can hold up their queue, and this gets progressively worse as your team ships more pull requests. To avoid these traffic jams, you need to make sure your merge queue scales with the size of your team and the speed at which you ship.

In response to this feedback, we’re excited to dramatically speed up merges by bringing Parallel CI to the Graphite merge queue — giving development teams their time back without sacrificing reliability.

Parallel CI speeds up your merge queue by running your CI checks in parallel for multiple stacks at once (including individual PRs not part of a stack), without compromising on correctness. This is especially helpful if your repo sees a high volume of PRs, long CI times, or both.

For the last few months, some of the largest Graphite customers have been using the beta version of Parallel CI to merge thousands of pull requests. Thanks to their feedback, we’re opening up this faster merge strategy to all Graphite merge queue customers starting today.

Parallel CI increases the overall throughput of your code merge process (number of PRs merged per time period), which gives time back to individual developers.

A sample of early Parallel CI adopters has already seen 1.5x faster merges, including time spent running CI (a 33% decrease at p95 and 26% at p75).

Orgs with a large number of stacked PRs are seeing up to 2.5x faster merges (a 60% decrease at p95 and 34% at p75).

When configuring Parallel CI, you can save developer time and CI runs by tuning two settings: 1) where CI runs on stacked PRs, and 2) how many PRs or stacks to process simultaneously.

The first setting lets you choose between correctness (safe revertibility) and reducing the overall number of CI runs. Read more about these options in the docs.

The second setting (parallelism) makes your merge times approach your CI times, since CI should already be complete for the next stacks or PRs in the queue. Teams that stack more will see a greater reduction in merge times from a higher parallelism setting. Internally, we use Parallel CI with a parallelism of 4, which halved our merge times: our merges now take about 12 minutes, of which 10 minutes is CI.
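To make that concrete, here’s a back-of-the-envelope sketch (a simplification for illustration, not a formula Graphite uses; the flat two-minute merge overhead and the assumption that every entry passes CI are made up for the example):

```python
# A toy model (our own simplification, not a Graphite formula) of why a higher
# parallelism setting pushes merge time toward CI time. It assumes every entry
# passes CI, every entry's CI takes the same `ci_minutes`, and merge
# bookkeeping adds a flat `overhead_minutes`.
import math


def serial_merge_time(position: int, ci_minutes: float, overhead_minutes: float = 2.0) -> float:
    """A strictly serial queue runs CI for one entry at a time, so the Nth
    entry waits for roughly N full CI runs before it merges."""
    return position * ci_minutes + overhead_minutes


def speculative_merge_time(position: int, ci_minutes: float, parallelism: int,
                           overhead_minutes: float = 2.0) -> float:
    """With Parallel CI, the next `parallelism` entries have CI in flight as
    soon as they're enqueued, so the queue drains in waves of `parallelism`
    entries, each wave costing roughly one CI run."""
    return math.ceil(position / parallelism) * ci_minutes + overhead_minutes


print(serial_merge_time(4, 10))          # ~42 min for the 4th entry in a serial queue
print(speculative_merge_time(4, 10, 4))  # ~12 min with parallelism 4 (10 min CI + ~2 min bookkeeping)
```

The takeaway: once an entry sits inside the speculation window, its wait is dominated by a single CI run rather than by everything queued ahead of it.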

Engineering organizations that previously built and managed their own internal merge queues ran into these same problems and came up with two solutions:

  1. Parallel CI

  2. Batching

A dev tools team at Uber published a great paper on Parallel CI (which they refer to as “speculative execution”), which they implemented for their own internal merge queue. Parallel CI has the benefit of higher throughput while maintaining correctness (every merged PR must pass CI on its own), but it does not reduce CI costs.

The second merge queue optimization is Batching, which runs CI on the changes from multiple PRs all at once and merges them if they pass. Batching is also good for increasing throughput, and trades correctness for lower CI costs. Specifically, each PR by itself does not need to pass CI when merging into trunk, just the overall batch of PRs. While some teams need perfect revertibility, for many fast-moving engineering teams this is a worthwhile tradeoff.
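Here’s a toy sketch of the batching idea (not Graphite’s implementation; the function and its failure handling are simplified assumptions): one CI run validates the combined changes of several PRs, so CI cost drops, but a green batch doesn’t prove that each PR passes on its own.

```python
from typing import Callable, Sequence


def merge_with_batching(
    queue: Sequence[str],
    batch_size: int,
    ci_passes: Callable[[Sequence[str]], bool],  # runs CI once on the combined changes
) -> tuple[list[str], int]:
    """Merge PRs in FIFO batches; returns (merged PRs, number of CI runs)."""
    merged: list[str] = []
    ci_runs = 0
    for i in range(0, len(queue), batch_size):
        batch = list(queue[i : i + batch_size])
        ci_runs += 1                      # one CI run covers the whole batch
        if ci_passes(batch):
            merged.extend(batch)          # every PR in a green batch merges together
        # On a red batch, a real queue would need to bisect the batch or evict
        # PRs; that bookkeeping is omitted in this sketch.
    return merged, ci_runs


# Five PRs in batches of 3: two CI runs instead of five.
merged, runs = merge_with_batching(["A", "B", "C", "D", "E"], 3, lambda prs: True)
print(merged, runs)   # ['A', 'B', 'C', 'D', 'E'] 2
```

Five PRs in batches of three cost two CI runs instead of five; a failing batch needs extra handling (such as bisecting the batch or evicting PRs) that this sketch leaves out.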

Graphite’s merge queue supports both Parallel CI and batch merging, and soon you’ll be able to use both of these strategies together at the same time.

To learn more about configuring Parallel CI, check out the docs. The example below walks through how it works.

Suppose you’ve configured Graphite to run up to 3 parallel CI runs, and you have 5 unrelated PRs (or stacks of PRs) enqueued at around the same time: A, B, C, D, and E. (A rough code sketch of the same scheduling logic follows the walkthrough.)

  1. CI starts for A. In parallel, Graphite creates these temporary groupings and starts CI for them at the same time:

  • A ← B (i.e. B rebased on A), testing this group of 2 PRs at once

  • A ← B ← C, testing this group of 3 PRs at once

  2. Once A succeeds, it’s merged.

  • Graphite then starts CI for the grouping B ← C ← D, testing this group of 3 PRs at once.

  3. Once B succeeds, the same process repeats: a group for C ← D ← E is created and CI runs.

  4. If at this point C fails, then:

  • C is evicted from the queue.

  • The runs for groups C ← D and C ← D ← E are both canceled.

  5. D then becomes the first PR in the queue:

  • CI starts for D.

  • Graphite starts CI for the grouping D ← E.
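
If you prefer code, here’s a minimal simulation of that scheduling (our own sketch, not Graphite’s implementation; the MergeQueue class and its method names are invented for illustration). It reproduces the A-through-E walkthrough with a parallelism of 3.

```python
# A minimal simulation (our own sketch, not Graphite's implementation) of the
# speculative scheduling described above. `MergeQueue` and its methods are
# invented names for illustration only.
from dataclasses import dataclass, field


@dataclass
class MergeQueue:
    queue: list[str]          # PRs (or stacks) in FIFO order
    parallelism: int = 3
    running: set[tuple[str, ...]] = field(default_factory=set)  # groupings with CI in flight

    def start_missing_runs(self) -> None:
        """Keep CI running for the first PR plus speculative groupings, up to `parallelism`."""
        depth = min(self.parallelism, len(self.queue))
        for group in (tuple(self.queue[: i + 1]) for i in range(depth)):
            if group not in self.running:
                self.running.add(group)
                print(f"start CI for {' <- '.join(group)}")

    def on_pass(self, pr: str) -> None:
        """The head PR's run passed: merge it and keep the speculative runs that assumed it would."""
        assert self.queue and self.queue[0] == pr
        self.queue.pop(0)
        print(f"merge {pr}")
        # Runs that speculated on `pr` merging are still valid; drop it from their groupings.
        self.running = {g[1:] for g in self.running if g[0] == pr and len(g) > 1}
        self.start_missing_runs()

    def on_fail(self, pr: str) -> None:
        """The head PR's run failed: evict it and cancel every speculative run containing it."""
        assert self.queue and self.queue[0] == pr
        self.queue.pop(0)
        for group in sorted((g for g in self.running if pr in g), key=len):
            self.running.discard(group)
            if group != (pr,):
                print(f"cancel CI for {' <- '.join(group)}")
        print(f"evict {pr}")
        self.start_missing_runs()


q = MergeQueue(queue=["A", "B", "C", "D", "E"])
q.start_missing_runs()   # step 1: starts A, A <- B, A <- B <- C
q.on_pass("A")           # step 2: merges A, starts B <- C <- D
q.on_pass("B")           # step 3: merges B, starts C <- D <- E
q.on_fail("C")           # step 4: evicts C, cancels C <- D and C <- D <- E
                         # step 5: starts D and D <- E
```

Running it prints the same sequence of starts, merges, cancellations, and evictions described in the steps above.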

👉 To get started with the Graphite merge queue and Parallel CI, visit your Graphite settings or check out the docs.

