Cursor prompt engineering best practices

most of the time cursor does a pretty good job of understanding the context and content of the different embedded references, at least with gpt-4. the purpose of this post is to unearth best practices for prompt engineering that are specific to cursor.

for example, let’s say that i have documentation, a style guide, and an example. what are the best practices for labeling those @ differently so that the llm understands the appropriate context? i’ve been doing something like this:

using the style guide @style-guide, and following the example @example, write a @Hugo layout template that does …

any thoughts on this approach? (as additional nuance, this might return very different results with or without the “full codebase context” (cmd + return))

any cursor-specific prompting tricks that others are willing to share?

(now is the time for the grimoires)


This is not a prompt guide as such, but I wanted to understand the changes being made to a particular object, so I saved its log, passed it in as a file, and asked gpt-4 to give me a dummy value of the object using that as the starting point. This helped me understand the code better.

@raw.works After a year now, I’m wondering what you’ve learned, if anything, to improve your prompting.

success with cursor is all about context management.

"@codebase" is pretty risky. you're basically admitting that you have no idea what is important for the AI to pay attention to, so you're rolling the dice that the cursor reranker is going to get it right for you. this could be ok if you know what to search for, like the name of a function, but i try not to use @codebase without also adding some specific files.

so if you can do things like “this is the backend @/backend.py, this is the frontend @/frontend.js, now go do ____” - you’re going to be way better off than just “@codebase, go do ____”.

also - for frameworks that aren't likely to have many examples in the training data (i.e. they are new or rare) - i'd recommend using cursor to ingest the docs and distill advice, which you then put in .cursorrules:

  1. add the new docs as custom doc
  2. tell the AI to generate the key things to remember about this framework.
  3. then tell it to write those to .cursorrules. voila: AI teaching AI. (learned the hard way after constantly reminding sonnet to re-read the svelte 5 docs)
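as a sketch, the distilled notes from step 2 might end up in .cursorrules looking something like this (hypothetical svelte 5 reminders - verify against your own docs run):

```text
# .cursorrules - framework notes distilled by the AI from the custom docs
# (hypothetical example; regenerate from your own ingested docs)
- this project uses svelte 5 runes: use $state() and $derived() for
  reactivity instead of svelte 4 `let` + `$:` statements
- use $effect() for reactive side effects
- event handlers are plain attributes (onclick={...}), not on:click
- prefer these rules over svelte 4 patterns from training data
```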

see my tweet with this advice: x.com


Ah, I have been just @ mentioning the docs every time, but I like this approach. Will give it a try.
Thanks for the detailed response, very helpful.


this is clever, nice one!

related to the above - one of the most common annoyances that i experience (in particular with composer mode) is that the AI re-writes methods that are already available.

for example, i have a utility.ts with all of the methods and auth to call some API. making sure to remind cursor: "first see if there is already a method in @utility.ts before attempting to write any new methods" can save a lot of time and a lot of "no, don't rewrite that, there's already a function for that, you idiot".
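for context, a minimal sketch of what such a utility.ts might contain (hypothetical names and endpoint - the actual file isn't shown here) - exactly the kind of shared helpers composer will happily reinvent unless told they exist:

```typescript
// hypothetical utility.ts: shared auth + fetch helpers for one API,
// the kind of module cursor should be pointed at before writing new methods
const API_BASE = "https://api.example.com"; // assumed endpoint

function authHeaders(token: string): Record<string, string> {
  // every call goes through here, so auth logic lives in one place
  return {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
}

async function getJson<T>(path: string, token: string): Promise<T> {
  const res = await fetch(`${API_BASE}${path}`, { headers: authHeaders(token) });
  if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
  return res.json() as Promise<T>;
}
```

with a module like this in place, the reminder prompt above just has to steer the AI toward `getJson` instead of a fresh fetch wrapper.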

i haven’t played much with “notebooks”, but it seems to me that this would be a better use case for a notebook (with the @ references) than a .cursorrules (which i don’t believe can support @ references).

so i’m thinking i should make one notebook with all of the:

the backend is @/backend
the frontend is @/frontend
the functions you need are probably in @/utility.ts

then that could get re-used in chat or composer…


this is a great use case for notepads. i still have to write docs on this, but the gist of it is

  • share context between composers
  • write code and @-references in them, and attach files to them

curious to hear when you’ve tried it out


On cursor, the AI is really well informed about the files with CTRL + I across the whole set of files, but you still need a good prompt and to bring in the right files to improve the result, so I'd say the approach with the aliases is pretty good.

I've done quite a few tests with Claude, gpt-4, cursor small, and gpt-4o mini, and if the prompt is optimized, the result is there about 80% of the time.

You still need some basic knowledge so you don't leave the AI without instructions.

I work with Next.js, and personally the instructions are sometimes necessary even with a good prompt.

this is next level: