Effective prompt engineering for AI code reviews

Greg Foster
Graphite software engineer

AI tools for code review are increasingly common: they detect bugs, enforce style, catch edge cases, and suggest improvements. But they don't always give good output out of the box. Your prompts (i.e. how you ask the AI to review code) have a big effect on what feedback you get — its relevance, accuracy, and depth.

Prompt engineering means designing the input to the AI so that responses are effective, precise, and useful. In code review, you want not just "this is wrong," but why it matters, how to fix it, and what context to consider. Without good prompting, AI feedback can be shallow, noisy, or off-target. Prompt engineering matters for several reasons:

  1. **Context sensitivity.** Code review isn't just about syntax. The utility of a suggestion depends on understanding the project, how modules interact, existing patterns, and performance trade-offs. A prompt that gives context (codebase style, performance priorities, etc.) lets the AI give tailored feedback.

  2. **Precision vs. noise trade-off.** If prompts are vague ("review this code"), the AI may give generic suggestions or flood you with low-value commentary. Prompts that explicitly ask for certain classes of issues (logic, edge cases, security) and require that suggestions be actionable reduce false positives and improve signal.

  3. **Explainability.** Good prompts push the AI not just to say what is wrong, but why, and how to fix it. This helps developers understand, learn, and decide whether to accept suggestions.

  4. **Alignment with team norms.** Teams have coding standards, performance constraints, style guides, and more. Prompts that embed or reference those norms help the AI align its suggestions.

  5. **Efficiency.** Better prompts reduce back-and-forth, cut review cycles, and catch bugs earlier, which saves time.

These features make a code review prompt effective:

| Feature | Rationale |
| --- | --- |
| Explicit scope (e.g. bug detection, security, performance, style) | Narrows focus so the AI doesn't try to cover everything equally. |
| Providing context (language, frameworks, constraints, business logic) | Ensures suggestions are relevant and safe. |
| Asking for examples/fixes | Makes feedback actionable, not just critical. |
| Asking for trade-offs | Encourages balanced suggestions when performance and readability conflict. |
| Including style guide references | Helps the AI conform to team norms. |
| Formatting instructions | Ensures feedback is concise and usable. |

An example prompt template:

"You are a senior software engineer reviewing a pull request. Language: {lang}, framework: {framework}. The project has style guidelines: {style_guide_details}. Please review the following code and identify:

  1. Logic bugs or incorrect behavior
  2. Missing edge cases or error handling
  3. Performance or resource inefficiencies
  4. Security or input validation concerns
  5. Style or naming issues

For each issue, explain why it matters and suggest a concrete fix with a code snippet. If trade-offs exist, mention them."
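To use a template like this programmatically, fill the placeholders before sending the prompt to your review tool. Here is a minimal sketch; the helper name and parameters are illustrative, not part of any specific tool's API:

```javascript
// Hypothetical helper that fills the template placeholders and
// appends the code under review to produce the final prompt.
function buildReviewPrompt({ lang, framework, styleGuideDetails, code }) {
  return `You are a senior software engineer reviewing a pull request.
Language: ${lang}, framework: ${framework}.
The project has style guidelines: ${styleGuideDetails}.
Please review the following code and identify:
1. Logic bugs or incorrect behavior
2. Missing edge cases or error handling
3. Performance or resource inefficiencies
4. Security or input validation concerns
5. Style or naming issues
For each issue, explain why it matters and suggest a concrete fix with a code snippet.
If trade-offs exist, mention them.

${code}`;
}

// Usage:
// const prompt = buildReviewPrompt({
//   lang: 'JavaScript',
//   framework: 'Node.js',
//   styleGuideDetails: 'Airbnb style guide',
//   code: sourceUnderReview,
// });
```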

```javascript
function fetchUsers(userIds) {
  return Promise.all(userIds.map(id => fetch(`/api/user/${id}`)))
    .then(results => results.map(r => r.json()));
}
```

Prompt:

"Review this JavaScript function in a codebase that serves many users. The fetch API is used; security and performance are priorities. Identify any edge cases, error handling issues, and opportunities for optimization. Suggest an improved version of the code."

Expected output:

  • Missing error handling if fetch fails
  • Promise.all rejects everything if one fails; maybe use Promise.allSettled
  • JSON parsing may fail; wrap .json() calls
  • Potential performance issue with large parallel requests; suggest batching
  • Provide a fixed code snippet with error handling and concurrency limits
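For reference, here is one possible hardened version along those lines. It is a sketch, not the only correct fix; the batch size of 10 is an arbitrary assumption you would tune to your API's limits:

```javascript
// One possible hardened version of fetchUsers.
async function fetchUsers(userIds, batchSize = 10) {
  const users = [];
  // Cap concurrency by processing IDs in fixed-size batches.
  for (let i = 0; i < userIds.length; i += batchSize) {
    const batch = userIds.slice(i, i + batchSize);
    // allSettled keeps one failed request from rejecting the whole batch.
    const results = await Promise.allSettled(
      batch.map(async (id) => {
        const res = await fetch(`/api/user/${id}`);
        if (!res.ok) {
          throw new Error(`Request for user ${id} failed with status ${res.status}`);
        }
        return res.json(); // can still reject on malformed JSON
      })
    );
    for (const result of results) {
      if (result.status === 'fulfilled') {
        users.push(result.value);
      } else {
        // Surface failures instead of silently dropping them.
        console.error(result.reason);
      }
    }
  }
  return users;
}
```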

[Graphite Agent](https://graphite.dev/Graphite Agent) is Graphite's AI-powered code review tool. It automatically reviews pull requests, catches bugs, and suggests fixes before merge. Its key features include:

  • Context-aware analysis: Graphite Agent looks at the entire codebase, not just diffs
  • Detection of logic bugs, edge cases, security vulnerabilities, and performance issues
  • Actionable suggestions with one-click fixes
  • Support for custom rules, so teams can encode their own coding standards
  • Learning from feedback: Graphite Agent adapts based on how teams accept or reject suggestions

Even though Graphite Agent automates prompting internally, you still shape its behavior by:

  • Enabling or disabling custom rules
  • Providing feedback on suggested fixes
  • Embedding style guides or project norms into repository configs (see the sketch after this list)
  • Prioritizing categories like security or performance
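For example, team norms can be encoded in a repository config that the reviewer picks up. The sketch below is purely illustrative, written as a JavaScript module; Graphite Agent's actual custom-rule schema may differ, so consult its documentation rather than copying this verbatim:

```javascript
// Hypothetical reviewer configuration, shown as a JS module for illustration only.
module.exports = {
  // Categories to weight most heavily during review.
  priorities: ['security', 'performance'],
  // Team style guide the reviewer should reference.
  styleGuide: 'docs/STYLE_GUIDE.md',
  // Custom rules encoding team norms.
  rules: [
    {
      id: 'no-unhandled-fetch',
      severity: 'error',
      description: 'Every fetch call must handle network and HTTP errors.',
    },
    {
      id: 'limit-parallel-requests',
      severity: 'warn',
      description: 'Batch or throttle large fan-out request patterns.',
    },
  ],
};
```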
To get the most out of AI review, follow these best practices:

  1. Define review goals up front
  2. Codify your style and standards
  3. Provide code context in prompts
  4. Iterate and refine prompts or rules over time
  5. Balance strictness and flexibility
  6. Ask for explainable, actionable feedback
  7. Integrate prompt guidance or rules into your workflow

**How do you write effective prompts for AI code review?**

Be specific about context (language, framework, constraints), define what you want reviewed (bugs, security, performance), and ask for explanations with concrete fixes. Include examples and iterate based on results.

**Which AI code review tools work well with prompt engineering?**

Popular options include [Graphite Agent](https://graphite.dev/Graphite Agent) for automated PR reviews, GitHub Copilot for inline assistance, and CodeRabbit. Graphite Agent stands out for comprehensive, context-aware automated reviews.

**How should you structure a code review prompt?**

Structure prompts with: role definition ("You are a senior engineer"), context (language/framework), specific instructions (what to look for), output format, and examples. Example: "Review this Python function for security vulnerabilities and performance issues. Explain each problem and provide a fixed version."

**What types of review prompts are most effective?**

The most effective prompts focus on specific areas: bug detection ("Find logic errors and edge cases"), security review ("Identify vulnerabilities and input validation issues"), performance analysis ("Spot bottlenecks and optimization opportunities"), and code quality ("Check style guides and best practices"). Start with scoped prompts rather than asking for general reviews.

Prompt engineering plays a central role in getting valuable outputs from AI code review systems. Even with tools like [Graphite Agent](https://graphite.dev/Graphite Agent), which provide automated, context-aware reviews, the way you configure, prompt, and give feedback determines how useful the results are. With clear, scoped prompts or rules, explicit context, and ongoing refinement, you can leverage AI to speed up reviews, catch more bugs, and maintain high code quality — without drowning in noise.
