
The code review process

Greg Foster
Graphite software engineer

The code review process is a fundamental part of software development: it ensures the maintainability, functionality, and quality of code before it becomes part of the main project repository. This guide provides a detailed breakdown of the code review process, drawing on best practices and insights from industry research, such as the SmartBear study of code review at Cisco, and on principles adopted by successful software development teams.

The process typically starts with an engineer preparing the code for review after ensuring it is built and tested. Once the code is ready, the engineer selects relevant reviewers based on their expertise related to the changes made. These reviewers are notified and tasked with inspecting the code.

Example: An engineer has completed adding a new feature to the application that allows users to reset their passwords via email. Before initiating the code review, they ensure all new code is well-tested with unit tests covering the new functionality. They then choose two team members who have previously worked on the authentication system to review the changes due to their relevant experience.
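To make this concrete, here is a minimal sketch of what "well-tested before review" might look like for the password-reset feature. The `generate_reset_token` helper and its behavior are hypothetical, invented for illustration; the point is that the author ships focused tests alongside the change so reviewers can verify intent:

```python
import hashlib
import hmac
import secrets


def generate_reset_token(user_id: str, secret_key: bytes) -> str:
    """Create an unguessable, user-bound password-reset token (hypothetical helper)."""
    nonce = secrets.token_hex(16)
    signature = hmac.new(
        secret_key, f"{user_id}:{nonce}".encode(), hashlib.sha256
    ).hexdigest()
    return f"{nonce}.{signature}"


def test_tokens_are_unique_per_request():
    key = b"test-secret"
    assert generate_reset_token("alice", key) != generate_reset_token("alice", key)


def test_token_contains_nonce_and_signature():
    nonce, signature = generate_reset_token("alice", b"test-secret").split(".")
    assert len(nonce) == 32       # 16 random bytes, hex-encoded
    assert len(signature) == 64   # SHA-256 hex digest


test_tokens_are_unique_per_request()
test_token_contains_nonce_and_signature()
```

Tests like these double as documentation for the reviewers: the expected token shape and uniqueness guarantee are stated explicitly rather than left for the reader to infer.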

Reviewers examine the code for defects, readability, and maintainability. They may use a checklist to guide the review process, ensuring consistency and thoroughness. It's recommended not to review code for longer than 60 minutes at a time to maintain a high level of performance and attention to detail.

Example: The reviewers start by examining the pull request, which includes the new password reset functionality. They look for clear naming conventions, adherence to the project’s style guide, proper error handling, and security implications of the new code. Reviewers often use a checklist to ensure they cover common issues such as SQL injection vulnerabilities when reviewing changes to authentication systems.
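A checklist item like "no SQL injection" usually translates into one concrete pattern the reviewer looks for: parameterized queries instead of string concatenation. The sketch below uses Python's built-in `sqlite3` module with an invented `users` table purely for illustration:

```python
import sqlite3

# In-memory database standing in for the application's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, reset_token TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', NULL)")


def store_reset_token(conn, email, token):
    # Parameterized query: the driver binds `token` and `email` as values,
    # so attacker-controlled input cannot change the SQL statement itself.
    # The injectable version a reviewer would flag is string interpolation,
    # e.g. f"... WHERE email = '{email}'".
    conn.execute(
        "UPDATE users SET reset_token = ? WHERE email = ?",
        (token, email),
    )


store_reset_token(conn, "alice@example.com", "abc123")
row = conn.execute(
    "SELECT reset_token FROM users WHERE email = ?", ("alice@example.com",)
).fetchone()
print(row[0])  # abc123
```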

Utilizing automated tools like static code analyzers can minimize the issues that reach the peer review phase. Such tools can check code against predefined coding rules, highlighting issues that need attention before human review.

Example: Before the peer review, the code is run through an automated static code analysis tool like SonarQube. The tool flags a potential null pointer exception in one of the new methods, which the engineer fixes before the human code review takes place, saving time during the peer review process.
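The Python analog of that null-pointer finding is an unguarded `Optional` return. The sketch below is hypothetical (the names and data are invented), and uses the kind of guard a type checker such as mypy would push the author toward before human review:

```python
from typing import Optional


def find_user(email: str, users: dict) -> Optional[dict]:
    """Return the user record, or None when the email is unknown."""
    return users.get(email)


def greeting(email: str, users: dict) -> str:
    user = find_user(email, users)
    # Without this guard, `user["name"]` can raise on None -- exactly the
    # class of defect a static analyzer flags before peer review begins.
    if user is None:
        return "Unknown user"
    return f"Hello, {user['name']}"


users = {"alice@example.com": {"name": "Alice"}}
print(greeting("alice@example.com", users))  # Hello, Alice
print(greeting("bob@example.com", users))    # Unknown user
```

Catching this mechanically means the human reviewers can spend their limited attention on design and security questions instead.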

Feedback should be constructive, aiming to help rather than criticize. Reviewers should focus on the code and the improvement it requires, not on the personal style of the author. The goal is to foster a learning environment where feedback is viewed as a valuable part of professional growth.

Example: A reviewer notices that the password reset token expiration time is set to 24 hours, which may not be secure enough. They suggest in their feedback to shorten this time frame and provide a rationale, linking to best practices in token-based authentication.
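Acting on that feedback is often a one-line configuration change plus a validity check. This sketch assumes a 30-minute TTL as the agreed-upon replacement value; both the constant and the `token_is_valid` helper are illustrative, not a prescribed implementation:

```python
import time

# Tightened from 24 hours after review feedback (30 minutes is an assumed value).
RESET_TOKEN_TTL_SECONDS = 30 * 60


def token_is_valid(issued_at, now=None):
    """Return True while the reset token is within its time-to-live."""
    if now is None:
        now = time.time()
    return (now - issued_at) <= RESET_TOKEN_TTL_SECONDS


issued = 1_000_000.0
assert token_is_valid(issued, now=issued + 10 * 60)      # 10 minutes old: still valid
assert not token_is_valid(issued, now=issued + 60 * 60)  # 1 hour old: expired
```

Note that the reviewer's comment paired the request with a rationale and a reference; the resulting change is small, but the security posture improves measurably.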

Introducing code review metrics helps to measure the efficiency of reviews, analyze the impact of changes, and predict the number of hours required to complete a project. Metrics provide objective measures to structure code reviews around.

Example: The team tracks metrics such as the number of defects found, the time taken to review code, and the frequency of code submissions. Over time, they notice that reviews conducted late in the day tend to have more missed defects. With this insight, they adjust schedules to conduct reviews when reviewers are more alert.
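An analysis like the time-of-day one above can be a few lines over the team's review log. The data and the 2 p.m. cutoff below are invented for illustration; the shape of the computation is what matters:

```python
from statistics import mean

# Hypothetical review log: (start hour, minutes spent, defects missed).
reviews = [
    (10, 45, 0),
    (11, 50, 1),
    (16, 40, 2),
    (17, 55, 3),
]


def missed_defects_by_period(reviews, cutoff_hour=14):
    """Average missed defects for reviews starting before vs. after `cutoff_hour`."""
    early = [missed for hour, _, missed in reviews if hour < cutoff_hour]
    late = [missed for hour, _, missed in reviews if hour >= cutoff_hour]
    return mean(early), mean(late)


early_avg, late_avg = missed_defects_by_period(reviews)
print(f"morning: {early_avg:.1f}, afternoon: {late_avg:.1f}")  # morning: 0.5, afternoon: 2.5
```

With real data the gap may be smaller or absent, which is exactly why measuring beats guessing: the schedule change is justified by the numbers, not by intuition.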

The code author iterates on the feedback until all parties are satisfied; only then is the code merged into the main codebase. A good practice is to focus on the aspects that bring the most value, aiming for impactful improvements rather than perfection.

Example: Once all the feedback has been addressed, the author updates the pull request. The reviewers give a final check to confirm that all their concerns have been resolved. They approve the merge, and the new feature is integrated into the codebase after a successful build and test run.

The code review process should evolve over time as developers learn what is required for healthy code and reviewers learn to respond quickly, reducing unnecessary latency in the process. This ongoing adaptation leads to a more efficient and streamlined review experience.

Example: After several iterations of the code review process, the team realizes that many discussions arise around database schema changes. They decide to document a set of best practices for database updates and include a schema review checklist to reduce the time spent on this aspect in future reviews.

In emergencies where code must be reviewed rapidly, it is acceptable to relax some quality guidelines. However, such situations should be the exception rather than the rule, and the criteria for what constitutes an emergency should be clearly defined.

Example: A critical security flaw is identified in the production environment that needs an immediate fix. The team decides to bypass the usual in-depth code review process to expedite the fix. They still perform a quick review to ensure no other obvious issues are introduced, but they forego the full checklist and thorough examination until after the hotfix is deployed.


This in-depth look at the code review process outlines a structured and efficient approach to examining code. By adhering to these steps and incorporating both manual and automated review tools, development teams can significantly enhance code quality and maintain a robust, stable codebase.
