Table of contents
- Why prompting is so important
- What makes a good prompt in code review
- Prompt templates and examples
- [AI code reviews with Graphite Agent](#ai-code-reviews-with-graphite-agent)
- Best practices
- FAQ
- Conclusion
AI tools for code review are increasingly common: they detect bugs, enforce style, catch edge cases, and suggest improvements. But they don't always give good output out of the box. Your prompts (i.e. how you ask the AI to review code) have a big effect on what feedback you get — its relevance, accuracy, and depth.
Prompt engineering means designing the input to the AI so that responses are effective, precise, and useful. In code review, you want not just "this is wrong," but why it matters, how to fix, and what context to consider. Without good prompting, AI feedback can be shallow, noisy, or off-target.
Why prompting is so important
**Context sensitivity.** Code review isn’t just about syntax. The utility of a suggestion depends on understanding the project, how modules interact, existing patterns, and performance trade-offs. A prompt that provides context (codebase style, performance priorities, etc.) lets the AI give tailored feedback.

**Precision vs. noise trade-off.** If prompts are vague (“review this code”), the AI may give generic suggestions or flood you with low-value commentary. Prompts that explicitly ask for certain classes of issues (logic, edge cases, security) and require that suggestions be actionable reduce false positives and improve signal.

**Explainability.** Good prompts push the AI to say not just what is wrong, but why, and how to fix it. This helps developers understand, learn, and decide whether to accept suggestions.

**Alignment with team norms.** Teams have coding standards, performance constraints, style guides, and more. Prompts that embed or reference those norms help the AI align its suggestions.

**Efficiency.** Better prompts reduce back-and-forth, cut review cycles, and catch bugs earlier, saving time.
What makes a good prompt in code review
| Feature | Rationale |
| --- | --- |
| Explicit scope (e.g. bug detection, security, performance, style) | Narrows focus so the AI doesn’t try to cover everything equally. |
| Providing context (language, frameworks, constraints, business logic) | Ensures suggestions are relevant and safe. |
| Asking for examples/fixes | Makes feedback actionable, not just critical. |
| Asking for trade-offs | Encourages balanced suggestions when there are performance vs. readability concerns. |
| Including style guide references | Helps the AI conform to team norms. |
| Formatting instructions | Ensures feedback is concise and usable. |
Prompt templates and examples
Template: Generic AI code review
“You are a senior software engineer reviewing a pull request. Language: {lang}, framework: {framework}. The project has style guidelines: {style_guide_details}. Please review the following code and identify:
1. Logic bugs or incorrect behavior
2. Missing edge cases or error handling
3. Performance or resource inefficiencies
4. Security or input validation concerns
5. Style or naming issues
For each issue, explain why it matters and suggest a concrete fix with a code snippet. If trade-offs exist, mention them.”
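If you reuse this template across reviews, filling in the placeholders can be automated. Below is a minimal sketch; the `fillTemplate` helper and the example values are illustrative, not part of any specific API:

```javascript
// Fill {placeholder} slots in a prompt template with supplied values.
// Unknown placeholders are left intact so gaps are easy to spot.
const template =
  'You are a senior software engineer reviewing a pull request. ' +
  'Language: {lang}, framework: {framework}. ' +
  'The project has style guidelines: {style_guide_details}.';

function fillTemplate(tpl, vars) {
  return tpl.replace(/\{(\w+)\}/g, (match, key) =>
    key in vars ? vars[key] : match
  );
}

const prompt = fillTemplate(template, {
  lang: 'JavaScript',
  framework: 'Express',
  style_guide_details: 'Airbnb style guide',
});
```

The filled `prompt` string can then be sent to whichever AI review API you use, with the code under review appended.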
Example with JavaScript
```javascript
function fetchUsers(userIds) {
  return Promise.all(userIds.map(id => fetch(`/api/user/${id}`)))
    .then(results => results.map(r => r.json()));
}
```
Prompt:
"Review this JavaScript function in a codebase that serves many users. The fetch API is used; security and performance are priorities. Identify any edge cases, error handling issues, and opportunities for optimization. Suggest an improved version of the code."
Expected output:
- Missing error handling if `fetch` fails; `Promise.all` rejects everything if one request fails, so consider `Promise.allSettled`
- JSON parsing may fail; wrap the `.json()` calls
- Potential performance issue with large parallel requests; suggest batching
- Provide a fixed code snippet with error handling and concurrency limits
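Applying those suggestions, an improved version might look like the sketch below. The batch size and the injectable `fetchFn` parameter are illustrative choices (the latter makes the function testable without a network), not requirements:

```javascript
// Improved fetchUsers: processes IDs in batches to limit concurrency,
// tolerates individual failures via Promise.allSettled, and checks
// HTTP status before parsing JSON.
async function fetchUsers(userIds, { batchSize = 10, fetchFn = fetch } = {}) {
  const users = [];
  const errors = [];
  for (let i = 0; i < userIds.length; i += batchSize) {
    const batch = userIds.slice(i, i + batchSize);
    const settled = await Promise.allSettled(
      batch.map(async (id) => {
        const res = await fetchFn(`/api/user/${id}`);
        if (!res.ok) throw new Error(`HTTP ${res.status} for user ${id}`);
        return res.json(); // may still reject on invalid JSON
      })
    );
    settled.forEach((result, idx) => {
      if (result.status === 'fulfilled') users.push(result.value);
      else errors.push({ id: batch[idx], reason: result.reason });
    });
  }
  return { users, errors };
}
```

Returning both `users` and `errors` lets the caller decide how to handle partial failures instead of losing the whole batch.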
AI code reviews with Graphite Agent
[Graphite Agent](https://graphite.dev) is Graphite's AI-powered code review tool. It automatically reviews pull requests, catches bugs, and suggests fixes before merge. Its key features include:
- Context-aware analysis: Graphite Agent looks at the entire codebase, not just diffs
- Detection of logic bugs, edge cases, security vulnerabilities, and performance issues
- Actionable suggestions with one-click fixes
- Support for custom rules, so teams can encode their own coding standards
- Learning from feedback: Graphite Agent adapts based on how teams accept or reject suggestions
How prompting and configuration matter in Graphite Agent
Even though Graphite Agent automates prompting internally, you still shape its behavior by:
- Enabling or disabling custom rules
- Providing feedback on suggested fixes
- Embedding style guides or project norms into repository configs
- Prioritizing categories like security or performance
Best practices
- Define review goals up front
- Codify your style and standards
- Provide code context in prompts
- Iterate and refine prompts or rules over time
- Balance strictness and flexibility
- Ask for explainable, actionable feedback
- Integrate prompt guidance or rules into your workflow
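Several of these practices (explicit goals, style-guide references, actionable output) can be combined in a small reusable prompt builder. The sketch below is illustrative; the category names and wording are assumptions, not a standard:

```javascript
// Map review goals to scoped instructions, mirroring the idea of
// asking for specific issue classes rather than a general review.
const FOCUS_AREAS = {
  bugs: 'logic errors and missing edge cases',
  security: 'vulnerabilities and input validation issues',
  performance: 'bottlenecks and optimization opportunities',
};

// Build a scoped review prompt from a code snippet plus options.
function buildReviewPrompt(code, { focus = ['bugs'], styleGuide = '' } = {}) {
  const goals = focus
    .filter((f) => f in FOCUS_AREAS)
    .map((f) => `- ${FOCUS_AREAS[f]}`)
    .join('\n');
  return [
    'You are a senior engineer reviewing a pull request.',
    styleGuide ? `Follow this style guide: ${styleGuide}.` : '',
    'Review the code below for:',
    goals,
    'For each issue, explain why it matters and propose a concrete fix.',
    'Code:',
    code,
  ].filter(Boolean).join('\n');
}
```

Keeping the focus areas in one place makes it easy to refine the wording over time as you learn which phrasings produce the most useful feedback.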
FAQ
How to best prompt AI for coding?
Be specific about context (language, framework, constraints), define what you want reviewed (bugs, security, performance), and ask for explanations with concrete fixes. Include examples and iterate based on results.
Which AI tool is best for code review?
Popular options include [Graphite Agent](https://graphite.dev) for automated PR reviews, GitHub Copilot for inline assistance, and CodeRabbit. Graphite Agent stands out for comprehensive automated reviews with context-aware analysis.
How to write an effective prompt for AI?
Structure prompts with: role definition ("You are a senior engineer"), context (language/framework), specific instructions (what to look for), output format, and examples. Example: "Review this Python function for security vulnerabilities and performance issues. Explain each problem and provide a fixed version."
What are the most useful AI prompts?
Most effective prompts focus on specific areas: bug detection ("Find logic errors and edge cases"), security review ("Identify vulnerabilities and input validation issues"), performance analysis ("Spot bottlenecks and optimization opportunities"), and code quality ("Check style guides and best practices"). Start with scoped prompts rather than asking for general reviews.
Conclusion
Prompt engineering plays a central role in getting valuable outputs from AI code review systems. Even with tools like [Graphite Agent](https://graphite.dev), which provide automated, context-aware reviews, the way you configure, prompt, and give feedback determines how useful the results are. With clear, scoped prompts or rules, explicit context, and ongoing refinement, you can leverage AI to speed up reviews, catch more bugs, and maintain high code quality — without drowning in noise.