Optimize Cursor Workflow

I still remember the first time I opened Cursor and saw code suggestions before I even finished my thought—it felt like having an expert pair programmer by my side.

If you’re building AI-powered apps or websites in JavaScript/TypeScript or Python, you know how fast things can get messy without a solid workflow.

That’s why I created Optimize Cursor Workflow: a friendly, bite-sized playbook designed to help you:

  • Plan your project with precision
  • Write & test in small, confidence-boosting loops
  • Track changes seamlessly with Git
  • Harness Cursor’s secret weapons (YOLO mode, agent modes, and more)
  • Integrate GitHub Copilot & Cline for extra AI horsepower
  • Customize everything with plug-and-play .cursorrules templates

Give it a try, tweak it to your style, and let me know which tip transforms your next sprint!

Planning Your Project with AI Assistance

Before jumping into coding, take time to plan out the project and features. A clear plan will guide both you and the AI, reducing confusion and false starts:

  • Create a Detailed Plan (possibly with another AI): It can help to ask a planning-focused AI (like Anthropic’s Claude or ChatGPT) to outline a solution in Markdown. Prompt it to ask clarifying questions and refine its plan. Save this plan in a file (e.g. PLAN.md or instructions.md in your repo) that you and Cursor can reference. For example: “I tell ChatGPT what I want to create, then ask it to write step-by-step instructions for another AI which will do the coding. I paste that plan into Cursor’s composer.” This extra planning layer can reduce missteps by giving Cursor a well-structured roadmap.
  • Iterate and Refine the Plan: Don’t accept the first plan blindly. Have the AI critique its own plan, improve it, and include important details (data models, API endpoints, UI components, etc.). Once satisfied, store it in PLAN.md and commit it to the repo for easy reference (a minimal PLAN.md sketch follows this list).
  • Use the Plan During Development: Continuously refer Cursor to your plan. In chats or commands, mention the plan file (using @PLAN.md in your prompt) so the AI knows the desired end state. This keeps the AI’s work aligned with your overall design. For example, you can prompt: “The desired state is in @PLAN.md. Please update the @codebase to match.” This way, your prompts stay short while the AI has full context from the plan.
  • Scope Small Chunks: Plan the project in terms of small, incremental tasks. Break features into bite-sized pieces that can be implemented and tested independently. This makes it easier to focus the AI on one goal at a time and avoids overloading the context with too many concerns.
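For illustration, a minimal PLAN.md skeleton might look like the sketch below; the project, sections, and step breakdown are hypothetical and just one possible layout:

```markdown
# Plan: Task API (hypothetical example)

## Goal
A small REST API for managing tasks, with JWT-based login.

## Data model
- User: id, email, password_hash
- Task: id, owner_id, title, done

## Implementation steps (one Cursor prompt per step)
1. Models and database setup, with unit tests
2. POST /auth/login returning a JWT, with tests
3. Task CRUD endpoints, with tests
```

With a plan like this committed, your prompts can stay short, e.g. “Implement step 2 of @PLAN.md and its tests.”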

Setting Up Project Guidelines with .cursorrules

Cursor allows you to define project-specific AI rules in a .cursorrules file. This file provides persistent guidelines (like a project-specific system prompt) that the AI will always follow when working on your code. Setting up some broad rules for your project can greatly improve consistency and productivity:

  • What are Cursor Rules: They are custom instructions for the AI to enforce coding standards, architectural patterns, or workflow preferences across your project. By creating a .cursorrules file in your project root, you “teach” the AI how to behave in your codebase. For example, you can instruct it on code style, library usage, or that you prefer test-driven development. These rules become part of the AI’s context in every prompt.
  • Why use .cursorrules: They ensure the AI’s code suggestions align with your needs. You can enforce consistency in style (naming conventions, formatting), include project-specific knowledge (framework choices, file structure), and automate best practices (like always writing tests or handling errors). In a team, a shared rules file means everyone gets the same AI guidance.
  • Keep Rules Focused: Write rules as if they were guidelines in a developer manual. Good rules are concise, actionable, and specific. Avoid very generic advice; instead, include exact preferences (e.g. “Use functional components and hooks instead of class components in React” or “Follow PEP8 naming and include type hints in all Python functions”). If you have many guidelines, consider splitting them into multiple rule files (Cursor supports multiple files under .cursor/rules/ for different scopes). A rule file under 500 lines is a good target.
  • Example – Enforce TDD: You might add a rule instructing the AI to follow a test-driven approach. For instance: “Write tests first, then the code, then run the tests and update the code until tests pass.” By including this in .cursorrules, you ensure the AI always attempts to write a failing test before implementation, mimicking TDD. (We’ll provide a full template later.)
  • Activating Rules: Once you create or update a .cursorrules file, Cursor will automatically apply it. These rules persist across chat sessions. You can see active rules in Cursor Settings > Rules. If needed, you can disable or scope rules (the .mdc rule format allows setting rules to apply only to certain file patterns or only when manually invoked; see the sketch after this list). For most broad project rules, you’ll set them as “Always” apply so they’re always in context.
  • Find Community Rules: Don’t reinvent the wheel. The Cursor community has shared many rules files for various frameworks and use cases (see cursor.directory or the “Awesome CursorRules” GitHub repo). You can browse these resources to find a starting point that fits your project and adapt it. For example, if you’re building a Next.js app or a FastAPI service, you may find a premade .cursorrules template with best practices.
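As a sketch of the scoped-rule idea, a small rule file under .cursor/rules/ might look like the following; the file name and glob are hypothetical, and you should check Cursor’s docs for the exact .mdc frontmatter syntax your version expects:

```markdown
---
description: React component conventions
globs: src/components/**/*.tsx
alwaysApply: false
---

- Use functional components and hooks; never class components.
- Every component has a typed Props interface and a co-located test file.
- Handle loading and error states explicitly in components that fetch data.
```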

Writing Code in Small, Iterative Loops

With a plan in hand and rules in place, you can start coding. The key to using Cursor effectively is to work in small, iterative edit-test loops and leverage the AI’s strengths without losing control of your codebase. Here’s how to approach writing code with Cursor:

  • Follow an Edit–Test Loop (TDD-style): A highly effective workflow is to implement features incrementally using test-driven development. This aligns with how Cursor’s agent works best (it can write code and tests). For each small feature or change, write (or have Cursor write) a failing test that captures the behavior, implement just enough code to make it pass, run the tests, and refactor while keeping them green (see the sketch after this list).
  • This red-green-refactor cycle keeps the AI focused and significantly reduces bugs. The AI essentially serves as a pair programmer following TDD principles, which is very powerful for AI-assisted development. Remember that if you include the rule “write tests first” in your .cursorrules, Cursor will already be inclined to do this by default.
  • Encourage Step-by-Step Reasoning: When giving instructions to Cursor, chain your prompts with reasoning. For complex tasks, you can prompt something like: “Let’s break this down. First, outline how you plan to implement feature X step by step.” By encouraging the AI to think out loud (chain-of-thought), you often get more reliable code. Cursor’s agent mode already does multi-step planning, but explicitly asking for the plan can help if it’s a very tricky problem. You can even have the AI present a pseudo-code solution first, verify it, then ask it to write the actual code.
  • Use Cursor’s Coding Features: Cursor isn’t just a chatbox; it’s an IDE with multiple AI-driven tools. Use each for what it’s best at: Tab completions for small in-line suggestions as you type, inline edits (Cmd+K) for targeted changes to a selection, and the chat/agent panel for multi-file tasks that need broader context.
  • Keep Each Prompt Focused: Tackle one well-defined task at a time in the chat. For example, instead of saying “Build my entire website’s backend,” start with “Create a Flask route for user login that validates credentials and returns a JWT.” Once that’s done (with tests), move to the next endpoint. Focus helps the AI produce relevant, correct code.
  • Review AI Output Critically: Always review and test code generated by Cursor. The AI can and will make mistakes or design choices you wouldn’t. Treat the AI’s code as a first draft. Verify that it meets requirements and adjust anything that’s off. If the AI’s solution is wrong, you can undo the changes (Cursor keeps a history of file modifications) or ask it to try a different approach. You are the senior developer in the loop – the AI is an assistant that sometimes needs guidance.
  • Use Source Documentation for Clarity: If using a new library or API, provide documentation links to Cursor via @ references or just paste in function signatures. This helps the AI use the APIs correctly. (Cursor is pretty good at searching its indexed docs and your codebase, but explicit docs never hurt.)
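To make the edit–test loop concrete, here is a small hypothetical pytest example: the test is written first and fails, then Cursor is asked for the minimal implementation that makes it pass.

```python
# tests/test_text_utils.py -- written first; fails until slugify() exists (red)
from myapp.text_utils import slugify

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World!") == "hello-world"

def test_slugify_handles_empty_string():
    assert slugify("") == ""
```

```python
# myapp/text_utils.py -- minimal implementation to go green
import re

def slugify(text: str) -> str:
    """Lowercase the text, drop non-alphanumerics, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

Run the tests, confirm they pass, commit, and only then move on to the next small piece.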

Debugging and Troubleshooting with Cursor

Even with planning and rules, you’ll encounter bugs or situations where Cursor’s output isn’t right. Here’s how to leverage Cursor (and other AI tools) to debug effectively:

  • Leverage Cursor’s Understanding: If you get an error or a failure, share it with Cursor. For example, if a test fails or you see an exception, copy the traceback or error message into the chat and ask Cursor for insight. Often, it will analyze the error and suggest a fix.
  • Ask for a Diagnostic Report: A pro-tip when stuck is to have Cursor explain the code behavior by adding logging or print statements, then analyze the output. You can prompt Cursor: “Please add logs to the code to get better visibility into what’s happening, so we can find the bug.” Cursor will insert logging at key points in your code. Run the instrumented code (e.g. run your app or tests) to collect the log output. Then copy those logs back to Cursor and ask: “Here’s the log output. What do you deduce is causing the issue, and how can we fix it?” This method gives the AI concrete runtime info to work with. It’s like having the AI as a junior dev who first helps gather data and then debugs based on that data. This can quickly pinpoint logical errors that static analysis might miss (a minimal sketch of this loop follows this list).
  • Chain-of-Thought for Debugging: Encourage the AI to reason about the bug. For instance, “Given this failure, what are the possible causes? Let’s think step by step.” This can lead Cursor to systematically eliminate possibilities or to design a test to reproduce the bug. Sometimes the act of explaining the problem can even reveal the issue (to you or the AI).
  • Use Another Model’s Perspective: If Cursor is stuck in a loop or giving unsatisfactory answers, consider using a second opinion. For example, ask ChatGPT or Claude in a separate session to analyze the problem (you can paste the relevant code and error). They might offer a different solution. In one anecdote, a developer spent hours with Cursor on a bug, then asked ChatGPT-4 to “write clear instructions for another coding AI to fix this” – the fresh perspective resolved the issue quickly. Don’t hesitate to temporarily step outside Cursor if needed.
  • Reset Context if Necessary: Long chat histories can sometimes cause the AI to become confused or fixated on a wrong approach. If things go off track, you can reset the conversation (start a new chat in Cursor) and give a fresh prompt with only the relevant context. Often, a clean slate yields better results than fighting a broken context.
  • Use Git History to Your Advantage: If the AI’s changes introduced a bug and it’s not clear what went wrong, use git diff or Cursor’s source control tab to see exactly what changed. You can then either revert that part or pinpoint the problematic code to fix. You can even show the diff to Cursor and ask, “Which of these changes might cause issue X?”
  • Keep Problem Files in Context: When debugging, make sure all relevant files are included via @ references in the conversation (or open them side by side and use Cursor’s “Reference open editors” feature to pull them in). This ensures the AI isn’t missing a piece of the puzzle.
  • Utilize Ask Mode for Explanations: Cursor has an Ask mode (in addition to Agent mode) which is more like Q&A – it won’t try to execute changes, it just answers questions. If you want an explanation of a piece of code or concept without any code editing, use Ask mode (you can toggle modes in the chat UI). This is safer for just understanding something.
  • Stay Patient and Guide the AI: Debugging with AI is still a collaborative process. Sometimes Cursor will propose a fix that doesn’t work – treat it like you would a junior developer’s attempt. Analyze why it didn’t work and give further guidance: “That didn’t solve it; now the error is Y. Let’s try a different approach.” With each iteration, the AI homes in on the solution.
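As a minimal sketch of the diagnostic-report loop described above (the module and function are made up), the instrumented code Cursor produces might look like this:

```python
# myapp/pricing.py -- instrumented with logging to expose intermediate values
import logging

logger = logging.getLogger(__name__)

def apply_discount(total: float, coupon: dict) -> float:
    logger.debug("apply_discount: total=%s coupon=%s", total, coupon)
    rate = coupon.get("rate", 0)
    discounted = total * (1 - rate)
    logger.debug("apply_discount: rate=%s discounted=%s", rate, discounted)
    return round(discounted, 2)
```

Run the failing test with log output enabled (for pytest, `--log-cli-level=DEBUG`), then paste the captured lines back into the chat and ask Cursor what they imply about the bug.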

Using Version Control Effectively in Cursor

Integrating Git version control into your AI-assisted workflow is crucial for maintaining code quality and being able to undo mistakes. Cursor integrates well with git, so take advantage of it:

  • Commit Early, Commit Often: Don’t let your working directory diverge massively while the AI is making many changes. It’s recommended to make frequent git commits after each successful small change or feature. This way, if the AI’s subsequent suggestions go awry, you have a recent commit to roll back to. Frequent commits also help you isolate where a bug might have been introduced (via git bisect or diff).
  • Use Branches for Big Experiments: If you’re about to attempt a risky refactor with Cursor’s agent, consider doing it on a separate git branch. That way your main branch stays stable. You can always merge once you’re confident in the changes.
  • Leverage Cursor’s Commit Message Generator: Cursor can auto-generate commit messages for you. In the Source Control panel, after staging changes, click the ✨ magic wand icon to have Cursor draft a commit message. It will summarize what changed. Always read and edit this message for accuracy, but it provides a nice starting point. This saves time and encourages you to commit more frequently since writing commit messages becomes less tedious.
  • Use Git Diff and History in Prompts: If you’re not sure about a change, you can copy a git diff into a Cursor chat and ask for explanation or code review. E.g., “Here is the diff of my changes, do you spot any issues?” The AI might catch a logical error or suggest improvements.
  • Avoid Too Many Uncommitted Changes: Large uncommitted changesets can confuse the AI’s context (since Cursor’s view of the code might be out of sync). They also make it harder for you to understand what’s going on. If you’ve accumulated a lot of modifications, either commit them (if they are working) or stash them, then apply piece by piece. This also ties into context management – a smaller diff is easier for the AI to reason about if you share it.
  • Use Git Integration for Safety: Cursor’s UI will show modified files; use that as a cue to review changes. If the AI modifies multiple files, check each one before committing. Version control is your safety net – if an AI-generated change is wrong, you can revert the file or reset to last commit easily.
  • Generate Pull Requests & Reviews (if applicable): If you’re working in a team or even for yourself, you can use Cursor to help write a pull request description, or even to review code. For example, after committing, open a new Cursor chat with the diff and ask, “Review these changes for any potential issues or improvements.” This can simulate a code review.

Remember, Git + AI is powerful: you can fearlessly let the AI try things, because you can always undo. Just make sure to commit when things are in a good state.
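A typical rhythm, sketched as shell commands (the branch name, commit message, and file path are placeholders):

```bash
git checkout -b experiment/agent-refactor    # risky AI-driven work on its own branch
# ...let the agent make one small change and run the tests...
git diff                                     # review exactly what the agent touched
git add -A
git commit -m "Add login route with JWT validation (tests passing)"
# ...if a later change goes wrong, roll back just that file...
git restore src/auth.py
```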

Maintaining Context and Codebase Indexing

Cursor’s ability to index your entire codebase is one of its strongest features – it “knows” your code so you don’t have to paste everything. However, you need to manage context to keep the AI efficient and accurate:

  • Explicitly Reference Relevant Files: By default, Cursor will try to pull in relevant files based on your prompt, but it’s not magic – you often should explicitly reference the important files. Use the @ symbol followed by a filename (or just the name if it’s unique) in your prompt to include that file’s contents. For example: “Update the function in @utils.py to use the new API.” This ensures the AI sees utils.py content. You can do this for multiple files, like @models/user.py and @controllers/auth.py, in one prompt if the change spans them. Keeping the context focused on just a few files will yield more specific and correct suggestions.
  • Limit Unnecessary Context: The length of context is limited (even if Cursor uses large context models, longer context can degrade quality). Don’t stuff every file into the prompt “just in case.” Include only what’s needed (data models, function definitions, etc. that are relevant to the task). If you notice the AI giving overly general or off-target answers, check if you have extraneous context that might be distracting it.
  • Use “/Reference Open Editors”: As a shortcut, Cursor often provides a way to reference all currently open editor tabs. If you have opened the files you care about, you can use a slash command like /reference open (or use the UI button) to quickly include them. This saves time and ensures no important file is forgotten in the prompt.
  • Refresh Index After Big Changes: Cursor’s code index should update automatically as you edit, but if you do a large refactor or git pull, it’s wise to re-index. You can trigger a reindex via the command palette (Cmd+Shift+P) with “Reindex Codebase” or by clicking the indexing status if visible. This makes sure the AI’s knowledge of the code is up to date, especially before asking questions like “Find all uses of X” or expecting it to know about newly added files.
  • Exclude Irrelevant Files: For very large projects, or projects with lots of generated files, use a .cursorignore file (in root) to exclude files or directories from indexing (similar to .gitignore). For example, you might ignore node_modules, build output directories, large JSON data, etc. Excluding irrelevant parts of the codebase means Cursor will index and search the important parts faster and not confuse the AI with noise. It also can be important for private info (you might ignore config files with secrets). The patterns in .cursorignore use the same syntax as gitignore (see the example after this list). By keeping the index focused, you improve both performance and answer quality.
  • Restart Context When Needed: If you’ve been working in one Cursor chat for a long time, the conversation might carry a lot of baggage. Don’t be afraid to start a fresh chat tab after a while. You can load the latest state of relevant files and continue. This “context reset” can reduce confusion from earlier parts of the conversation that are no longer relevant.
  • Use the Model Context Protocol (MCP) if available: (This is an advanced tip.) Cursor can connect to external tools and context providers through MCP servers, configured in Cursor’s settings. If you have something like a documentation index or internal knowledge base exposed that way, the agent can draw on it in addition to your code. But for most workflows, the built-in index of your repo is sufficient.
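For example, a small .cursorignore (entries are illustrative; the syntax is the same as .gitignore, as noted above):

```gitignore
node_modules/
dist/
.venv/
__pycache__/
*.log
data/raw/
.env
```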

In summary, feed the AI just the right context: not too little (or it will guess and make mistakes), and not too much (or it might get lost or hit token limits). Regularly curate what you’ve given it to keep the focus sharp.

Using AI Agent Modes Effectively

Cursor offers different AI “agent modes” for how hands-on or autonomous the AI assistant should be. The default mode is Agent mode, which is the most powerful – it can use all tools (read files, edit code, run terminal commands, browse web, etc.) to complete tasks with minimal user input. To make the most of Cursor:

  • Understand Agent Mode: In Agent mode, Cursor behaves like an autonomous pair programmer. You give it a high-level instruction, and it will: understand the request, search your codebase, plan the necessary changes, execute those changes across files, and even run commands/tests if needed. It’s designed for complex tasks like “Refactor the database layer to use transactions” – it might touch multiple files and verify the app still runs. Agent mode is extremely powerful, but you need to supervise its actions (review the diffs it proposes) before applying them. Think of it as letting the AI drive, with you in the passenger seat ready to grab the wheel if it veers off.
  • Use Ask or Manual Mode for Simpler Queries: If you just have a question (“What does this function do?”) or want a quick snippet without file changes, you can switch to Ask Mode (Q&A only, no code execution) or Manual Mode (AI will only suggest changes, not auto-apply anything). These modes give you more control. In practice, many users stick with Agent mode and simply double-check everything, but know that you can dial back the autonomy if you prefer to apply changes manually.
  • Enable YOLO Mode for Autonomous Execution: YOLO mode is a special setting in Cursor that makes the agent even more autonomous. With YOLO mode on, the agent can execute terminal commands on its own without asking for confirmation. This is especially useful for running tests, build tools, or linting in a loop to validate its code changes. Essentially, YOLO lets the AI “press the Run button” by itself. For example, if you ask the agent to implement a feature and you have YOLO enabled with tests allowed, it will write tests, run them, see failures, adjust code, and repeat – all automatically – until tests pass. This can feel magical: you sit back and watch the AI iterate on your code. To use YOLO mode, enable it in the agent settings and review the allow/deny lists so the agent can run safe commands (tests, builds, linters) on its own while anything riskier still requires your confirmation.
  • Multi-step Problem Solving: Agent mode naturally breaks tasks into substeps. You can help it by explicitly stating the substeps or using a checklist. For example: “To implement this, you should 1) update the schema in models.py, 2) adjust the controller logic, 3) update the unit tests.” The agent will then go through these steps. This ensures it doesn’t skip any part of the task (see the example prompt at the end of this section).
  • Tool Usage Clarity: The agent has tools like a web browser, code search, documentation search, etc. You might notice in verbose mode that it says things like “Tool: searching for X”. This is normal – it might search the web if you allowed it and it needs an answer (like how to use a library). If you see it doing something odd (e.g., searching repeatedly), you might need to give it a hint or provide the info directly. For instance, if it’s stuck searching for an API usage, just paste a link or code snippet from the docs for it.
  • Ask for Explanations: You can always ask the agent why it made a certain change. This can be educational and also a sanity check. If it renamed a function or added a check, and you’re not sure why, just query: “Explain why you made these changes.” A thoughtful agent should justify its reasoning, which can reveal if it misunderstood something.
  • Switch Models if Needed: Cursor typically uses frontier models (such as Anthropic’s Claude or OpenAI’s GPT-4) for the agent. If the responses aren’t great, experiment with a different model in settings: another frontier model for tricky problems, or a smaller, faster one when you only need quick, simple suggestions. Also, keep Cursor updated to get the latest model improvements.

In short, Agent mode with YOLO is like an autopilot – incredible for speeding up development, but you must remain the pilot in command. Use its autonomy to handle grunt work (running tests, making trivial fixes), while you make the high-level decisions.
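As an example of the checklist-style prompt mentioned above (the file names and feature are hypothetical):

```text
The desired behavior is described in @PLAN.md under "Password reset".
Please: 1) add a reset_token field to the user model in @models/user.py,
2) add a POST /auth/reset endpoint in @controllers/auth.py,
3) add tests covering expired and invalid tokens,
then run the test suite and fix failures until it is green.
Do not touch unrelated files.
```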

Integrating Cursor with Copilot and Cline

Cursor itself is a full AI coding assistant, but you might want to use it alongside other AI tools like GitHub Copilot or Cline to enhance your workflow. Here’s how you can integrate them:

  • GitHub Copilot in Cursor: Since Cursor is based on VS Code, you can install the GitHub Copilot extension in Cursor just as you would in VS Code. This allows Copilot to provide its own suggestions (ghost text). However, you should avoid having Cursor’s Tab completions and Copilot both giving suggestions at the same time, as they might overlap or conflict. In practice, you might temporarily disable Cursor’s completion if you want to compare Copilot, or vice versa. One user confirms: “I use Copilot with Cursor, and there are no conflicts unless you enable autocomplete on both at the same time.” So it’s doable – just ensure only one AI is driving the autocompletion at once.
  • Cline Integration: Cline is another AI coding assistant (often used via a VS Code extension or CLI) that many consider more advanced in some respects. The good news is you can use Cline within Cursor to get the best of both. In fact, some power users run Cline’s VS Code extension inside Cursor, effectively stacking the tools. What Cline offers: its own agent that can plan changes, edit across files, and run terminal commands, using API keys you supply so you choose the underlying model (Claude, GPT-4, and others).
  • Using Cline in Cursor: To integrate, you’d install Cline’s VS Code extension into Cursor (Cursor supports most VSCode extensions). Once configured with your API key, you can invoke Cline similarly to Cursor’s agent (likely via commands or chat provided by Cline’s extension). Some developers run Cline for heavy lifting and use Cursor’s own AI for quick completions: “I use Cline within Cursor. That way, I get the simple code completion from Cursor, while also using Cline for big changes that save me a lot of time.” This hybrid approach can indeed give you an “AI pair programming team” – Cursor (Claude) for one perspective, Cline (GPT-4 or others) for another.
  • Alternate AI Tools: Besides Copilot and Cline, there are other AI dev tools (Replit’s Ghostwriter, AWS CodeWhisperer, etc.). Integrating those into Cursor might not be straightforward or necessary. Cursor by itself covers a lot of ground. But if you have specialized needs (maybe a security analyzer AI or a documentation generator), see if there’s a VSCode extension – many will work in Cursor.
  • Don’t Overwhelm Yourself: While integrating multiple AI can be powerful, it can also become confusing if they give differing suggestions. It’s usually best to drive one AI at a time. Use another as a supplemental second opinion or for specific strengths. Over time, you’ll learn which “assistant” to ask for which type of task, much like delegating to different team members.

Finally, remember that these AI tools are here to assist, but you maintain the architectural vision and final say. By planning carefully, enforcing best practices through rules, coding iteratively with tests, and using Cursor (and friends) intelligently, you can dramatically speed up development while keeping code quality high.




Below are several ready-to-use .cursorrules templates you can add to your project. Each is tailored to a specific scenario (Python AI projects, TypeScript web apps, refactoring, and TDD). You can place the content in a .cursorrules file (or a file under .cursor/rules/) in your repository. Feel free to modify these to suit your project’s exact needs.

1. .cursorrules Template for Python AI Features (Python project)

description: Python AI Project Guidelines

alwaysApply: true

---

You are an AI assistant specialized in Python AI development. Your approach emphasizes clean, idiomatic Python and thorough testing.

Follow these rules when writing or modifying code:

- Coding Style & Standards: Follow PEP 8 style guidelines for formatting. Use snake_case for variable and function names and CamelCase for class names. Always include type hints for function signatures and return types (PEP 484). Use f-strings for string formatting (no % or format() unless necessary).

- Project Structure: Maintain a clear project structure. Keep modules organized (for example, ML models in a models/ directory, utility functions in utils/, etc.). If creating new modules or packages, update __init__.py accordingly. Respect separation of concerns (e.g., data loading vs. processing vs. model inference should be in different functions or classes).

- AI/ML Best Practices: When writing code for AI features:

  - Prefer using well-known libraries (e.g., NumPy, pandas, scikit-learn, PyTorch, TensorFlow) rather than reinventing algorithms, unless instructed otherwise.

  - Ensure reproducibility: if randomness is involved (training, simulations), use random seeds and document them.

  - Optimize for clarity over cleverness. Use list comprehensions or generator expressions for concise data transformations, but avoid overly complex one-liners.

- Error Handling: Implement robust error handling. Use try/except blocks to catch exceptions especially around model loading, file I/O, or external API calls. Log or print informative error messages that include context (but avoid exposing sensitive info).

- Logging & Debugging: Use Python’s logging library for debug output (configured via a global logger) instead of print statements. Add logs at key points (start/end of major functions, upon catching exceptions) to aid debugging.

- Documentation: Every function and class should have a clear docstring explaining its purpose, inputs, outputs, and exceptions (follow PEP 257 conventions). When you modify existing code, update outdated comments or docstrings. For AI algorithms, briefly explain the approach or formula in comments if not obvious.

- Testing: For any new feature or bug fix, also generate a corresponding unit test (using pytest). Tests should cover both typical cases and edge cases. Place tests in the tests/ directory mirroring the package structure. Ensure tests have assertions for correctness and also test error conditions (e.g., passing invalid input).

- Data Handling: When dealing with data (e.g., datasets, JSON input/output), always include validation. Use type checks or pydantic models (if available) to validate data structures. Handle cases where data is missing or in an unexpected format by raising errors or using default values.

- Performance Considerations: If a section of code is performance-critical (e.g., a tight loop processing data), prefer vectorized operations with NumPy/pandas. However, first write a correct solution, then optimize if needed (don’t prematurely micro-optimize). Document any performance hacks or non-intuitive code.

- Security: If this project involves user input or external data (e.g., an AI API receiving requests), always sanitize inputs. Avoid using eval or other unsafe operations on data. Secure any credentials or API keys by using environment variables (do not hard-code secrets in code).

- AI Model Usage: If integrating with AI models (e.g., calling an OpenAI API or loading a machine learning model), encapsulate those calls in dedicated functions or classes (e.g., a ModelClient class). This makes it easier to mock them in tests and swap implementations. Always check responses from an AI API for errors or empty results before using them.

When following these rules, prioritize clarity, correctness, and safety of the code. Aim to produce code that a senior Python developer would approve of in a code review, with well-chosen abstractions and adherence to our project’s standards.
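To illustrate the “AI Model Usage” rule above, here is a minimal sketch of such a wrapper; the class, endpoint, and environment variable names are hypothetical rather than any specific SDK:

```python
# myapp/model_client.py -- thin wrapper so AI calls are easy to mock in tests
import os
import requests

class ModelClient:
    """Encapsulates calls to an external AI completion API."""

    def __init__(self, base_url: str, api_key: str | None = None, timeout: float = 30.0):
        self.base_url = base_url
        self.api_key = api_key or os.environ["AI_API_KEY"]  # secrets come from the environment
        self.timeout = timeout

    def complete(self, prompt: str) -> str:
        response = requests.post(
            f"{self.base_url}/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"prompt": prompt},
            timeout=self.timeout,
        )
        response.raise_for_status()  # surface HTTP errors instead of failing silently
        text = response.json().get("text", "")
        if not text:
            raise ValueError("Empty response from AI API")
        return text
```

In tests, ModelClient.complete can be monkeypatched so no real network call is made.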

2. .cursorrules Template for TypeScript AI-Driven Web App (Node/React project)

description: TypeScript Web App Best Practices

alwaysApply: true

---

You are an AI assistant specialized in TypeScript web applications (Node.js backend and React frontend). Ensure code is high-quality, maintainable, and follows modern best practices.

Follow these rules in this project:

- TypeScript Strictness: Always use TypeScript with strict mode enabled. Every function, variable, and prop should have an explicit type. Prefer using interfaces for object shapes (or type aliases for simple function types), and use type for unions or complex mapped types when needed. Do not use any unless absolutely unavoidable (and if so, explain why in a comment). Leverage generics for reusable function and component types.

- Functional Programming Style: Emphasize functional and declarative patterns. Avoid introducing class-based singletons or unnecessary OOP patterns in frontend code – prefer functional components and hooks in React, and pure functions or lightweight classes in backend where appropriate. For example, in React components, use hooks (useState, useEffect, etc.) instead of legacy class lifecycle methods. In Node, favor composition over inheritance.

- Code Structure: Keep code modular:

  - In React, organize components by feature (e.g., a directory per feature containing its components, styles, and tests). Use kebab-case for file and folder names (e.g., user-profile/ contains user-profile.tsx and related files).

  - Separate concerns: logic that fetches or computes data should be outside of UI components (e.g., use custom hooks or utility modules). In Node backend, separate routes, controllers, services, and models into their own modules.

  - Each file should ideally export a single main thing (one component or one class or one function). Use named exports and avoid default exports for clarity.

- Naming Conventions: Use descriptive names. Functions and methods should have verb-based names (e.g., calculateEmbedding, fetchUserData). React components should be PascalCase (matching their file name). Use camelCase for variables. Use UPPER_CASE for constants and enum members. Avoid abbreviations that aren’t obvious.

- UI Development (React): 

  - Use JSX/TSX with functional components. Always define component Prop types via an interface or Props type. For state management, prefer React’s Context or a state library (if the project uses Redux/Zustand, etc., follow that pattern consistently).

  - Styling: Follow the project’s styling approach. If using CSS modules or styled-components, keep styles co-located with components. If using Tailwind or utility classes, consistently apply them in JSX (and avoid raw CSS unless needed). Always ensure responsive design (use flex, grid, etc. as per guidelines).

  - Error boundaries: If a component can throw or a promise can reject (like an API call in useEffect), handle errors gracefully – possibly with a fallback UI or message.

- Backend Development (Node):

  - Use modern ES modules and import syntax (if project is ESM). Write asynchronous code with async/await (avoid old callback patterns). 

  - Input validation: For any API endpoint, validate request body/query params (using a library like zod or Joi if available, or manual checks) and respond with appropriate HTTP status codes for bad input.

  - Error handling: Use try/catch in async functions to handle exceptions and return an error response (don’t let errors propagate uncaught). Log server errors for debugging.

  - Security: Sanitize any data used in queries (to prevent injection attacks). If handling authentication, follow best practices for password hashing (bcrypt/scrypt) and JWT handling (http-only cookies or proper token storage on frontend).

- API Design: Design functions and methods to be pure where possible (no side effects) – especially utility functions. For API calls (e.g., calling an AI service or database), wrap them in clearly named functions (e.g., callOpenAI(prompt): Promise<OpenAIResponse>). This encapsulation makes it easier to mock in tests and swap implementations.

- Testing: Write tests for both frontend and backend:

  - Use a testing framework (Jest, Vitest, etc.) appropriate to the project. For React, write component tests (using React Testing Library or Enzyme) to verify that components render correct outputs given props and state, and that event handlers work.

  - For Node, write unit tests for services and utils (you can use jest to mock external modules). Also include integration tests for API endpoints (possibly using supertest to hit your routes).

  - Ensure tests run without errors and cover critical logic (aim for a reasonable coverage, e.g., >80%). New features should typically come with new tests.

- Performance & Optimization:

  - Avoid expensive computations on the main thread in React; if needed, use web workers or useMemo/useCallback to avoid re-calculation on every render.

  - In Node, avoid blocking the event loop. Heavy tasks (CPU-bound) should be offloaded to worker threads or optimized with streaming.

  - Use efficient data structures (e.g., use maps/sets for membership lookups rather than arrays when scaling could be an issue).

  - For network calls, use caching if possible (browser caching, or memory cache on server for repeated external API calls).

- Documentation & Comments: Use JSDoc/TSDoc comments for complex functions and all public APIs. Document the expected inputs and outputs. In React components, document any non-obvious behavior or complex hook usage. Keep comments up-to-date if code changes. If the AI introduces a clever but not immediately clear solution, add a brief comment explaining it (for the human readers).

- AI Integration Specific: If this app calls AI models (e.g., an OpenAI API for some feature):

  - Encapsulate the AI call logic in one place. For instance, have an aiService.ts that exposes functions like generateSummary(text: string): Promise<string>.

  - Implement retry logic for AI calls if rate limits or transient errors occur. Be mindful of exposing API keys – never commit keys, instead use environment variables.

  - Validate AI outputs if they will be used in critical ways (for example, if the AI returns JSON, verify it’s parseable and has expected fields).

Adhering to these rules will ensure a clean, professional TypeScript codebase. The focus is on maintainability, type safety, and following the established patterns of modern web development. Always prefer clarity and reliability over cleverness in code.

3. .cursorrules Template for Refactoring & Documentation

description: Refactoring and Documentation Assistant

alwaysApply: false

---

You are an AI assistant devoted to refactoring code for clarity, simplicity, and adherence to best practices, while preserving functionality. You also ensure code is well-documented.

When refactoring or documenting existing code, follow these rules:

- Preserve Behavior: Any refactoring must not change what the code does. Ensure that all logic, return values, and side-effects remain equivalent. Write tests or use existing tests to confirm that refactored code produces the same outcomes.

- Improve Readability: Simplify complex or convoluted code constructs:

  - Break up overly long functions into smaller, focused functions (each with a single responsibility) if appropriate.

  - Rename ambiguous variables or functions to more descriptive names. For example, if a variable d represents a deadline date, rename it to deadlineDate.

  - Reorder code for logical flow (initializations at top, then processing, then results), but only if it doesn’t alter behavior.

  - Remove redundant code or calculations (Dry the code if the same logic appears in multiple places by extracting a helper function).

- Apply Standard Best Practices: 

  - Ensure the code follows SOLID principles where relevant (e.g., Single Responsibility: a class or function should have one reason to change).

  - Eliminate “code smells” such as deeply nested loops or conditions – consider early returns to reduce nesting, or switch to more declarative constructs.

  - Replace magic numbers or strings with named constants for clarity.

  - If the code uses outdated patterns (callback hell, older API usage), refactor to modern equivalents (like async/await, or newer library functions).

- Optimize Where Obvious: If you see an evident inefficiency (e.g., an O(n^2) loop that can be O(n) with a different approach), refactor to improve performance but only if it doesn’t make the code significantly harder to understand. Add comments explaining the optimization.

- Document Throughout: 

  - Add or update function and module docstrings/comments to explain what the code does and why (especially after refactoring changes). For instance, if you refactor a complex algorithm, ensure the new code has a comment at the top summarizing the algorithm’s purpose.

  - If you fix a bug or resolve a tricky issue during refactoring, include a comment referencing that (e.g., “// Fixed: corrected the off-by-one error in index calculation”).

  - Maintain existing comments that are still relevant. If a comment describes old code that you changed, update or remove it to avoid misinformation.

- Maintain Style & Conventions: Keep the refactored code consistent with the project’s coding style (formatting, naming, etc.). If the project uses a linter or formatter (like ESLint, Prettier, Black for Python, etc.), the refactored code should pass those checks. 

  - Use the same logging or error handling approach as the rest of the project (e.g., if the project uses a custom Logger, use that instead of console.log or print).

- Testing After Refactor: Assume tests exist; ensure all tests continue to pass. If no tests exist, suggest creating tests for critical components (you can generate some as part of the refactoring output if appropriate). Never remove or change a test without very good reason. If a test was failing due to a bug and you fixed the bug, update the test expectations accordingly and note this in the output.

- Gradual Refactoring: If the code is large or very tangled (legacy code), it’s acceptable to refactor in stages. Clearly communicate if certain deeper improvements are out of scope in one go. Ensure each refactoring step leaves the code in a working state.

- Examples and Edge Cases: Add examples in comments or docstrings if it helps illustrate how a function should be used post-refactor. For instance, “// e.g., this function now handles null inputs: processData(null) returns an empty result list.” This clarifies intended usage.

- Backward Compatibility: If refactoring an API or function that’s used elsewhere, consider its public interface. Try not to change function signatures or class interfaces unless necessary. If you do change them, identify all call sites (the AI should do this via context) and update them as part of the refactoring to avoid breakage.

- No Partial Changes: Don’t leave TODOs that aren’t addressed. If something should be improved but you can’t do it now, at least comment it clearly. However, the preference is to either do it fully or not at all in this refactoring pass, to keep the codebase stable.

By following these guidelines, the refactored code should be cleaner, easier to understand, and properly documented, without altering the functionality. Always imagine a senior developer reviewing your refactoring – it should receive a 👍 for improving the code quality while keeping trust that everything still works as before.
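A tiny before/after sketch in the spirit of these rules (the function is invented for illustration; behavior is unchanged):

```python
# Before: ambiguous names, magic number, unnecessary nesting
def chk(u):
    if u is not None:
        if u["age"] >= 18:
            return True
        else:
            return False
    else:
        return False

# After: descriptive names, named constant, early return
ADULT_AGE = 18

def is_adult(user: dict | None) -> bool:
    """Return True if the user exists and is at least ADULT_AGE years old."""
    if user is None:
        return False
    return user["age"] >= ADULT_AGE
```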

4. .cursorrules Template for Enforcing Test-Driven Development (TDD)

description: Test-Driven Development Workflow Rules

alwaysApply: true

---

You are an AI assistant following strict Test-Driven Development (TDD) practices. Always adhere to the Red/Green/Refactor cycle in this project.

The rules to follow:

1. Tests First: For any new feature or bug fix, always begin by writing a test (or multiple tests) that define the expected behavior or reproduce the bug. If the tests for the intended change already exist (perhaps failing tests), use them; otherwise, create them. Ensure the test clearly fails for the right reason (red phase).

2. Minimal Code to Pass: Only after writing a failing test, write just enough code to make that test pass. Do not write extra functionality that isn’t needed to satisfy the test. Keep the implementation simple and straightforward (green phase).

3. Run Tests After Code Changes: Every time code is written or modified, run the test suite (or at least the relevant tests). If using Cursor’s agent with YOLO, it should automatically run tests after generating code. Verify that the previously failing test now passes and that you haven’t broken other tests.

4. Iterate on Failures: If a test fails, focus on that failure before moving on. Let the AI analyze the test output and adjust the code. Do not add new functionality while tests are red – first fix what's broken. Only proceed to the next feature/test once all tests are green.

5. Refactor with Confidence: Once tests are passing, you may refactor the code for improvement (refactor phase). When refactoring, do not change external behavior – rely on the test suite to catch any unintended changes. After refactoring, run tests again to ensure all still pass. Only do refactoring in a green state.

6. Keep Tests Focused and Independent: Each test should ideally test one logical aspect or scenario. When generating tests, include edge cases and typical cases, and avoid multiple asserts testing unrelated things in one test. This makes it easier to pinpoint issues. 

7. Testing Style and Coverage: 

   - Use descriptive test names (e.g., it('returns 0 for empty input') or test_invalid_credentials_should_throw() depending on the framework) so it's clear what’s expected.

   - Aim to cover not only the “happy path” but also error conditions and edge cases for each feature. If a new branch in code is introduced, add a test for it.

   - Use assertions that are specific. For example, assert on exact values or error messages, not just truthiness, to ensure correctness.

8. No Test, No Code: If there is a request to add functionality but no test accompanying it, politely refuse to write production code until a test is in place (since we are in TDD mode). You can either write the test yourself (preferred) or request the user to provide one. This rule ensures we never write untested code.

9. Maintain Test Suite Health: If a test is no longer valid (for example, the requirements changed), update the test rather than deleting it, whenever possible. Only remove tests if they are truly irrelevant or duplicated – and even then, communicate why. Keep the test suite up-to-date with the code’s behavior.

10. Testing Tools: Adhere to the testing framework in use (e.g., Jest, Mocha, PyTest, etc.). Use the standard assertions and avoid obscure libraries unless the project already uses them. If the project uses BDD-style (Given/When/Then) in test descriptions or a specific test structure, follow that convention.

By following this strict TDD workflow, we ensure that no code is added unless a test demands it, and all features are verified by tests. This results in a robust, regression-resistant codebase. The AI should behave like a developer who writes a failing test, then writes code to pass it, in tight cycles. 

(In summary: Always start with a failing test, then make it pass with minimal code, and keep running the tests on each change. The cycle is red ➡️ green ➡️ refactor, over and over.)

Each of these templates can be adjusted to fit your project’s exact context. Copy them into your project’s .cursorrules (you can have one global file or multiple files in a .cursor/rules folder for different contexts). Using these rules, Cursor’s AI will understand your expectations and workflow, making it an even more powerful ally in development.

Happy coding!

Henri
