Enhancing vibe coding with prompt engineering

Greg Foster
Graphite software engineer

Vibe coding refers to an AI-driven programming style where the developer hands over most coding tasks to a large language model (LLM) and focuses on guiding and tweaking the output. Instead of writing every line, you describe the problem in natural language and let the AI generate software, essentially treating the LLM as a coding partner. This approach fits modern workflows for rapid prototyping and low-stakes projects, enabling developers to build applications faster by embracing AI-generated code. In practice, vibe coding feels like putting a car on autopilot: the AI does the driving while the human supervises and steps in when it drifts off course.

Achieving success with vibe coding requires prompt engineering: the practice of carefully crafting and refining instructions so the model returns specific, useful responses. A well-engineered prompt can dramatically improve the quality of generated code. Key principles include:

  • Clarity and specificity: Clearly state what you want the code to do. Ambiguous prompts yield ambiguous code. Provide details like language, frameworks, or functions needed. Remember, being specific doesn't always mean short – longer prompts can supply more context. For example, instead of "Build a web server," you might ask: "Build a Node.js Express server with a /status endpoint that returns JSON uptime stats." This specificity guides the AI to the correct solution.
  • Iterative refinement: Treat prompt development as an iterative process. If the first output isn't right, refine your prompt and try again. Break big tasks into smaller prompts and build iteratively, one change at a time. Start with a simple version of the task, evaluate the result, then adjust your prompt to handle any issues or add complexity.
  • Role and context setting: Utilize system/user roles (if the platform allows) to set context for the AI. A system message can establish the AI's persona or goals (e.g., "You are a senior Python developer who adheres to PEP8 style and security best practices."). The user prompt can then provide the task. By giving the model a role or perspective, you influence the tone and correctness of the output. This mechanism helps align the AI's responses with your development standards.
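To make role and context setting concrete, here is a minimal sketch assuming the OpenAI Python SDK. The model name and the task in the user message are placeholders; any chat-style API that accepts system and user messages follows the same shape.

```python
# Minimal sketch of role/context setting, assuming the OpenAI Python SDK.
# The model name and the task below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever model you use
    messages=[
        # The system message establishes the persona and standards.
        {
            "role": "system",
            "content": (
                "You are a senior Python developer who adheres to PEP8 "
                "style and security best practices."
            ),
        },
        # The user message carries the actual task.
        {
            "role": "user",
            "content": (
                "Write a function that validates and normalizes "
                "user-supplied email addresses."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```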

To illustrate how prompt engineering boosts vibe coding, here are a few real-world prompt patterns developers can use:

  1. Generate code from description: If you need a specific functionality, describe it with requirements and constraints. For example: "Write a Python function parse_log(file_path) that reads a log file and returns a dictionary of error codes to their frequency. Only use the standard library, and handle file I/O errors gracefully." This prompt clearly defines the task, naming conventions, and constraints (standard library only), leading the LLM to produce a targeted solution; a sketch of the kind of output it might yield appears after this list. Such detailed prompts yield more accurate code on the first try.
  2. Refactor existing code: You can supply a snippet and ask the AI to improve it. For instance: "Here is a Java method for bubble sort. Refactor it to use merge sort for better efficiency, and ensure it follows our coding style guidelines (no var names shorter than 3 chars, include javadoc)." By giving both the code and explicit refactoring goals, the AI can transform the implementation while adhering to style rules. The specificity (merge sort, style guidelines) guides the AI's changes.
  3. Explore an unfamiliar API: LLMs can help you quickly learn new libraries. For example: "Using the GitHub REST API in Python, show how to list all repositories for a user. The code should use the requests library and handle authentication with a token." This prompt not only requests an example, but also specifies the tools to use and a concern (authentication). The AI will likely respond with a code example using requests, demonstrating proper API calls and token usage. By prompting with such context, you effectively ask the AI to act as documentation, accelerating your onboarding with the API.
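To make the first pattern concrete, this is roughly the kind of function such a prompt might produce. Treat it as a sketch: the E-prefixed error-code format is an assumption for illustration, since the prompt itself doesn't pin down the log format.

```python
# Roughly the kind of output the parse_log prompt might yield.
# Standard library only; file I/O errors are handled gracefully.
# The error-code pattern (e.g. "E404") is an illustrative assumption.
import re
from collections import Counter

def parse_log(file_path):
    """Return a dict mapping error codes to how often they appear."""
    counts = Counter()
    try:
        with open(file_path, "r", encoding="utf-8") as log_file:
            for line in log_file:
                counts.update(re.findall(r"\bE\d{3}\b", line))
    except OSError as exc:
        print(f"Could not read {file_path}: {exc}")
        return {}
    return dict(counts)
```

Likewise, the third pattern might come back as something like the following, using the requests library against GitHub's /users/{username}/repos endpoint with token authentication. It is a sketch rather than a drop-in client: there is no pagination or error recovery beyond raise_for_status.

```python
# A plausible response to the "explore an unfamiliar API" prompt:
# list a user's repositories via the GitHub REST API with requests.
import requests

def list_repositories(username, token):
    """Return the names of a user's repositories."""
    response = requests.get(
        f"https://api.github.com/users/{username}/repos",
        headers={"Authorization": f"token {token}"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors (bad token, rate limit)
    return [repo["name"] for repo in response.json()]
```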

Each of these examples shows the importance of being explicit about the outcome. By stating the goal and constraints, you steer the AI's "vibe" in the right direction and reduce the need for guesswork in the output.

Even in vibe coding, a little upfront effort in crafting prompts can save time. Consider these strategies to get higher-quality code from LLMs:

  • Provide structured context: Instead of a one-liner, structure prompts with sections for context, task, and constraints. Clearly explain the project or feature background, then the specific request. For example, start with a brief app description ("I'm building a task manager web app..."), then the task ("...I need a function to add a new task to the database..."), and finally any constraints ("...use async/await, and no external libraries"). This approach ensures the AI understands the broader scenario and the specific requirements; a combined sketch of this structure appears after this list.
  • Use explicit constraints and "don'ts": Tell the AI what not to do. If you want it to modify only the UI and not business logic, say so. For instance: "Update the UI layout for mobile responsiveness without changing any backend logic. Use Tailwind CSS breakpoints and do not alter existing IDs or classes." Explicit instructions like "don't change X" help narrow the AI's focus. This reduces the chance of unwanted side effects in the generated code.
  • Give examples (few-shot prompting): When possible, show the AI an example of the format or style you expect. For instance, before asking it to produce a function, you might provide a small, correct example function with comments, then ask it to do something similar for a different case. Demonstrating the pattern can orient the model to produce output consistent with your expected style or structure.
  • Specify output format: Be clear about how you want the answer. Do you want just the code snippet, or code with explanation? If you only want code, prompt the model with "Provide only the final code, no additional explanation." Conversely, if you're exploring, you might ask for a step-by-step reasoning or usage example. Setting format expectations prevents the need to manually trim responses and keeps the AI on target.
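Putting several of these strategies together, here is a hedged sketch of a structured prompt with explicit context, task, constraints, and output format. The task-manager scenario and the 'pg' client constraint are illustrative details carried over from the example above, not requirements from a real project.

```python
# Sketch of a structured prompt: context, task, constraints, output format.
# The app details and the 'pg' constraint are illustrative assumptions.
prompt = """
Context:
I'm building a task manager web app with a Node.js backend and a
PostgreSQL database.

Task:
Write a function that adds a new task (title, due date, priority) to the
database and returns the created record.

Constraints:
- Use async/await.
- Do not use any external libraries beyond the existing 'pg' client.
- Do not change any other part of the data layer.

Output format:
Provide only the final code, no additional explanation.
"""
```

Pass this string as the user message to your LLM client, as in the role-setting sketch earlier; the section headers are for the model's benefit and can be adjusted to whatever structure your team prefers.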

By applying these strategies, developers can significantly improve the reliability of vibe coding. In short, the more guidance and context you give the AI, the closer the output will align with your needs.

While vibe coding can accelerate development, it comes with risks if code is not properly reviewed. AI-generated code might be syntactically correct and even pass basic tests, but can still hide logical bugs or security issues. This is fine for throwaway projects, but dangerous for production code. Issues like insecure patterns, performance pitfalls, or misinterpreted requirements can slip through. Never assume AI-written code is production-ready without oversight. Neglecting review could lead to shipping bugs or vulnerabilities that a human would catch with a careful look. In the modern workflow, this means that even when vibe coding speeds up initial development, a safety net is needed before merging or deploying the code. Learn more about how to review code written by AI to ensure quality and security.

To mitigate these risks, developers are turning to AI-assisted code review tools. Graphite's Diamond is an AI-powered code review companion designed to catch problems in AI-generated (and human-written) code before they reach production. Diamond automatically analyzes each pull request and provides immediate, context-aware feedback on potential issues. It flags logical errors and edge cases that an AI might have overlooked, catches deviations from coding standards, and highlights security or performance concerns – all within seconds of code submission. For example, Diamond can detect if the code introduced a common bug or an insecure pattern, and it will alert the developer with a concise explanation and even a suggested fix.

Diamond's strength lies in being codebase-aware and customizable. It uses your project's context and historical data to avoid false positives, focusing only on real issues (a high-signal approach). You can also tailor Diamond with your team's style guides and rules, so it will enforce naming conventions, formatting, or other best practices automatically. In effect, Diamond acts as a diligent reviewer that never gets tired: it will catch bugs before your human reviewers do, enforce quality and coding standards, and help conduct faster code reviews. By integrating Diamond into your workflow, vibe-coded projects are no longer at the mercy of unchecked AI output. The AI reviewer will point out mistakes and improvements, from missed null checks to inconsistent function naming, giving developers a chance to fix issues prior to merging. It even provides one-click fixes for many suggestions, streamlining the remediation process. For more information on implementing AI code review in your workflow, check out our guide on AI code review implementation and best practices.

Vibe coding offers an exciting boost to developer productivity by letting us prototype and build features with unprecedented speed. By applying solid prompt engineering practices – specificity, iteration, context setting, and clear constraints – developers can better channel an LLM's capabilities to get reliable code in this intuitive, rapid style of development. However, speed should not come at the expense of quality. Pairing vibe coding with rigorous review is essential. Tools like Graphite's Diamond AI code reviewer serve as a safety net, catching bugs and enforcing best practices automatically so that AI-generated code meets the standards of production software. Embracing vibe coding doesn't mean abandoning engineering discipline; with smart prompting and AI-powered review, developers can enjoy the best of both worlds – coding at the speed of AI while maintaining confidence in the code that results.
