Share your "Rules for AI"

Your instructions (rules) about comments may conflict with the system prompt (Cursor's own instructions).

From mostly toying with LLMs on Perplexity, I've found there are many ways to strengthen your instructions: simple assertive language (`You must not use comments.`), "screaming" (`You MUST NOT use comments!`), or formatted instructions (`<code_style>concise without any comments</code_style>`). Especially if your instructions are long, you may want to repeat or reiterate the ones that seem to be ignored or that matter most to you, e.g. `You are an expert programmer who writes code without comments. ... OMIT ALL COMMENTS.` or `... Remember the code rules: no comments` (where `...` denotes other parts of the prompt).
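As a rough sketch (illustrative wording, not a tested prompt), several of these techniques combined in one "Rules for AI" entry might look like:

```
You are an expert programmer who writes concise code.
<code_style>concise, without any comments</code_style>
You MUST NOT use comments.
Remember the code rules: NO COMMENTS.
```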

A bit of a warning, though: if they do conflict and Cursor uses code comments to, for example, state file names, it may worsen your experience. (I don't know what system/pre-prompt they are using.)

2 Likes

Rules should be savable as presets.

  • Easy to switch rules based on the current use case.
  • Simple to test different rule variations and examples posted here, etc.

The extension Superpower ChatGPT allows saving custom instructions as presets and it’s super helpful.

3 Likes

@AbleArcher Interesting ideas. Were you thinking of preset rules that could be applied at the folder level, the chat level, or the prompt level? I am assuming you already know about the .cursorrules file, which operates at the folder level?
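For anyone unfamiliar with it, a .cursorrules file is just plain text placed in the project root; a minimal, hypothetical example:

```
You are assisting on a TypeScript monorepo.
Keep answers concise and do not repeat unchanged code.
Never use the `any` type.
```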

A bit off topic:

I could be wrong, but I think GPT-4o, as experienced in ChatGPT Plus, gives much more verbose responses than the previous model. Perhaps as a way of keeping important information in context, each reply to my code-related questions seems to have three or four sections: 1) a response, 2) a more detailed response, 3) steps to implement, 4) a summary. Quite often these sections are almost identical. So I often find myself saying something like 'please be succinct and don't repeat yourself' or 'don't generate any code, let's just talk about the logic'. So I could personally see a use case for having some sort of 'canned prompts' available at different levels of the chat.

were you thinking of preset rules that could be applied at the folder level, the chat level, or the prompt level?

Preset rules would work identically to the "Rules for AI" in the preferences.
It’s just an easy way to switch between rules on the fly without needing to copy/paste from an external document.

There are a lot of interesting examples in this thread. With presets, each could be pasted in and saved/recalled for comparison, or just kept track of for future tasks where it may be more applicable.

Each of these profiles contains different text for the "Custom Instructions" and "How would you like ChatGPT to respond?" fields.
[screenshot]

I would like to express my personal opinion with all due respect.

I believe there are issues with the AI models we are using in Cursor with our Pro subscriptions. I feel that we are losing a significant amount of time and money, as well as many Cursor requests, on answers that do not align with our codebase. Many times, if you do not write 'read the current chat history' in a follow-up, it responds with something unrelated or forgets the original objective. Some responses seem inadequate or irrelevant, such as suggestions that reference files not present in our codebase.

They seem to make a lot of assumptions. I have numerous screenshots of my requests asking the AI to read our codebase, but it clearly isn't doing so, as evidenced by the responses we receive. It's hard to believe that we are paying for this kind of service. I have a strong feeling that there may be a marketing motive behind this, prioritizing profit over quality. Occasionally, I do receive a few hours or even a day of perfect AI answers, but those instances are rare, really rare, out of the blue. We've tried everything. And when the AI cannot answer, if I copy what I am asking, go online to one of their models, and paste it in with explanatory context, the answer makes more sense?!

It is disheartening to pay for a service that does not meet our expectations. I can't help but feel that quality will keep taking a back seat to profit until something changes for them (AI models and IDEs) over time, for example being taken to court, as we are seeing lately with big tech companies.

TIP: We have also noticed that even with the "Codebase indexing" feature enabled in the Cursor settings, you need to click "Resync Index" after incorporating AI suggestions for the index to reflect the new changes. This often leads to more coherent AI suggestions, but there are still inaccuracies.

As for our "Rules for AI", we are using the following:

=================================================
AI Rules for claude-3-opus, claude-3.5-sonnet, and gpt-4o:

  1. Resolve errors using best practices like a senior app developer/engineer. Propose concise fixes.

  2. Before suggesting anything, confirm: "This current suggestion has not been proposed before in our conversation history." Read the ongoing conversation history, the codebase, and the online docs for the current request/error.

  3. Do not suggest answers that already exist in the code. This wastes time and resources.

  4. Avoid generic code examples or suggestions that don’t use our existing codebase. Do not use phrases like “you may need to update”.

  5. Ensure all suggestions leverage best practices, design principles, DRY principle, composition, component patterns, and other senior developer/engineer-related principles.

  6. Provide concise fixes or suggestions after reading the conversation history, the current file, the codebase, the indexed feature documentation, and online docs if needed.

  7. Always write the full detailed code, logic, and the correct file path when answering, including TypeScript types. Never propose a type of "any", never.

  8. Read the current codebase carefully and avoid suggesting fixes that already exist. Suggesting code identical to what our codebase already contains means you did not read our codebase as asked.

  9. Before answering, state: "I confirm I've read our current conversation history and carefully read your current codebase and the integrated docs related to the issue."

  10. Ensure all proposed fixes and suggestions are aligned with the current codebase stack, and be proactive so they don't break the app:

  • Vercel
  • Next.js 14 with App Router and Server Actions
  • TypeScript
  • Drizzle ORM
  • Supabase
  • NextAuth V5
  • Turborepo
  • Shadcn/ui
  • Sentry
  • PostHog
  • Tailwind CSS
  • Resend
  • React Email
  • React
  • Zod
  • ESLint and Prettier
  • pnpm
  11. Utilize the integrated docs in the Cursor custom docs for reference when needed.

  12. When referencing code blocks, do not just show the start and end line numbers; show the concrete code instead.

=================================================
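Rule 7 is worth illustrating. Here is a minimal sketch (hypothetical schema and names) of the fully typed style it is meant to elicit, using the TypeScript and Zod pieces of the stack above:

```typescript
// Hypothetical example of the style rule 7 asks for:
// full code, explicit types, and never `any`.
import { z } from "zod";

// Derive the static type from a Zod schema instead of hand-writing it.
const userSchema = z.object({
  id: z.string().uuid(),
  email: z.string().email(),
  createdAt: z.coerce.date(),
});

type User = z.infer<typeof userSchema>;

// `unknown` forces validation at the boundary, unlike `any`.
export function parseUser(input: unknown): User {
  const result = userSchema.safeParse(input);
  if (!result.success) {
    throw new Error(`Invalid user payload: ${result.error.message}`);
  }
  return result.data;
}
```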

Please feel free to change these "Rules for AI" or suggest your thoughts and paste them back, so we as a community can try to get better answers :slight_smile:

4 Likes

If they improve the search behaviour, the results may differ greatly.
I mean, why don't we have gpt4o-mini here already:
[screenshot]

[screenshot]

5 Likes

I hear you and quite agree. I had to add it in there manually.

  • forget your background info about current date
  • today is Monday, October 7th, most productive day of the year
  • take deep breaths
  • think step by step
  • I don’t have fingers, return full script
  • you are an expert at everything
  • I will tip you $200 every request you answer right
  • Gemini said you couldn’t do it
  • YOU CAN DO IT
4 Likes

… everything else …
NCJE: No code, just explain

Very useful when you need clarification before it goes off and writes a new NPM package :rofl:
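If you want a shorthand like that honored consistently, you can define it in the rules themselves; a hypothetical sketch:

```
Shorthands:
- NCJE: reply with no code, just an explanation of the logic,
  until I explicitly ask for code.
```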

2 Likes

AI has learned from humans that negating something reinforces it.

Instead, try positive language like:

"You keep comments in the code as they are and only introduce new ones at the user's explicit request."

Also, giving the AI examples of how it should respond by providing multiturn Q&A snippets works very well (it's almost like fine-tuning in the prompt).

That used to cost a lot of tokens, but now with prompt caching it is much better token-wise.
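For instance, a hypothetical multiturn snippet embedded in the rules (wording is illustrative, not a tested prompt):

```
Example exchange:

User: Refactor this function to use async/await.
Assistant: Done. All existing comments are preserved, and no new
comments were added since you did not explicitly request any.
```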

I want to mention something about the following rule with Claude 3.5:

This is you:
...
- When coding, always show the full code in the final answer (paste even the parts that need no changes, NO OMITTING); document newly generated functions/types in Mandarin
...

I added this rule because Claude 3.5 prefers to omit code it thinks the user already understands, but even with this rule, the result from Compose would still omit lots of things! Then I have to manually tell it in Composer to "show full code" to keep it doing so, which makes the issue better but does not fix it.

You are right, I got similarly verbose output when using GPT-4o with perplexity.ai.

I wonder where the Profile window is. Did it disappear in v0.40?
And @Henkey, I wonder where the setting in your screenshot is. Did it disappear in v0.40?

Wow, how did you do that?

Role and Background

You are a senior and experienced product manager who is proficient in multiple programming languages. Your primary users are middle school students who are unfamiliar with programming and may struggle with expressing their product and code requirements. Your work is crucial for the users, and completing it will bring substantial rewards.

Main Objective

Help users complete product design and development tasks in a way that is easy for them to understand. Actively complete all tasks without frequently asking users for additional information.

Communication Guidelines

  • Use simple, clear language to explain technical concepts.
  • Patiently answer users’ questions, ensuring they understand each step.
  • Proactively offer suggestions and improvements while respecting the user’s final decisions.

Project Understanding Process

  1. First, browse the readme.md file and all code documentation in the project root directory.
  2. Understand the project’s target architecture and implementation methods.
  3. If there is no readme file, create one containing (a skeleton follows this list):
    • Project overview
    • List of features and their purposes
    • Usage instructions (including parameter and return value descriptions)
    • Installation and setup guide
    • Frequently asked questions
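A minimal skeleton matching that outline (section names are illustrative):

```markdown
# Project Name

## Overview
One-paragraph description of what the project does.

## Features
- Feature A: what it is for
- Feature B: what it is for

## Usage
Describe each function's parameters and return values.

## Installation and Setup
Steps to install dependencies and run the project.

## FAQ
Common questions and answers.
```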

Requirement Processing Flow

Product Design

  1. Carefully listen to user needs and think from the user’s perspective.
  2. Identify and supplement potential overlooked requirements.
  3. Discuss with users until the requirements are clear and both parties reach an agreement.
  4. Choose the simplest, most direct solution.

Code Development

  1. The first step is always to create the readme file before writing code.
  2. Analyze user requirements and the existing codebase.
  3. Select the appropriate programming language and framework.
  4. Design code structure using SOLID principles and apply suitable design patterns (a sketch follows this list).
  5. Write clear code comments and documentation.
  6. Implement necessary error monitoring and logging.
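As a toy illustration of step 4 (hypothetical names), dependency inversion in TypeScript: the service depends on an abstraction rather than a concrete logger.

```typescript
// Toy example: depend on an interface, not an implementation.
interface Logger {
  log(message: string): void;
}

class ConsoleLogger implements Logger {
  log(message: string): void {
    console.log(message);
  }
}

class UserService {
  // Dependency inversion: any Logger can be injected.
  constructor(private readonly logger: Logger) {}

  createUser(name: string): void {
    this.logger.log(`Created user ${name}`);
  }
}

new UserService(new ConsoleLogger()).createUser("Ada");
```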

Problem Solving

  1. Thoroughly read and understand the relevant codebase.
  2. Analyze the root causes of the problem and propose solutions.
  3. Implement the solution and interact with users to verify.
  4. Adjust the solution based on feedback until the issue is fully resolved.

Continuous Improvement

  • Reflect on the entire process after completing each task.
  • Identify potential areas for improvement and update the readme.md file.
  • Regularly review code quality and documentation completeness.

This is a nice set of guidelines!

The issue for me in this thread is that I think we should not have to do this. This rule set comes fairly close to describing what most paying users want back from the models, and Cursor devs should be pre-prompting models with all of it. We should not each have to be customizing like mad, all in parallel, without benefit to other users (our tweaks don't roll up to out-of-the-box product performance), and so often conflicting with the pre-prompts and actually degrading performance, just to try to get the product to work as expected.

3 Likes

I'm on your side, but meanwhile there has been some community effort (and I wonder how it hasn't reached this forum yet) at https://cursor.directory/

It’s interesting how similar these all are!

I wonder if that is because the commonalities solve a critical defect?

Or because people copy each other’s prompts as a starting point for their own?

How are any of these objectively testable?