great list. one thing i’d add under rules: make them specific and testable. vague rules like “write clean code” get ignored. rules that describe a concrete pattern work way better.
for example, here are a few React/Next.js rules i’ve been using that actually changed cursor’s output:
error boundaries:
Always create error.tsx alongside every page.tsx.
error.tsx must be a client component ('use client').
Include a retry button that calls reset().
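in practice that rule produces a file shaped like this (a minimal sketch; the error/reset props follow Next.js's App Router error-file convention, the markup is placeholder):

```tsx
'use client';

// Minimal error.tsx for a route segment. Next.js renders this when the
// sibling page.tsx (or its children) throws, passing the thrown error
// and a reset() callback that re-renders the segment.
export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string };
  reset: () => void;
}) {
  return (
    <div>
      <p>Something went wrong: {error.message}</p>
      <button onClick={() => reset()}>Try again</button>
    </div>
  );
}
```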
useEffect guard:
Before writing a useEffect, check if the value can be computed during render instead. Prefer useMemo or derived state over effects for computed values.
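to make that one concrete, here's the before/after shape the rule pushes toward (hypothetical Item type and todo-list scenario; the React hooks are shown in comments so the runnable part is just the pure derivation):

```typescript
// hypothetical example: a todo list where we only render unfinished items.
interface Item { id: number; done: boolean }

// anti-pattern the rule is aimed at: syncing derived data via an effect
//   const [visible, setVisible] = useState<Item[]>([]);
//   useEffect(() => { setVisible(items.filter(i => !i.done)); }, [items]);

// preferred: compute it during render (wrap in useMemo only if it's expensive)
function visibleItems(items: Item[]): Item[] {
  return items.filter((i) => !i.done);
}

const items: Item[] = [
  { id: 1, done: false },
  { id: 2, done: true },
  { id: 3, done: false },
];

console.log(visibleItems(items).length); // 2
```

no state to keep in sync, no extra render cycle, and nothing for cursor to wire up incorrectly.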
loading states:
Every async data fetch must have three states: loading, error, and success. Use loading.tsx for page-level loading and Suspense boundaries for component-level.
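and the two levels of the loading rule look roughly like this (sketch only; ProductList and Skeleton are made-up names, and the Suspense fragment would live inside whatever page renders it):

```tsx
// app/products/loading.tsx — page-level: Next.js shows this
// automatically while the route's server components are fetching.
export default function Loading() {
  return <Skeleton />;
}

// component-level: give a slow async component its own boundary
// so one fetch doesn't block the rest of the page.
<Suspense fallback={<Skeleton />}>
  <ProductList />
</Suspense>
```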
the pattern is: tell cursor what to do in a situation it will actually encounter, not abstract principles. the more concrete the rule, the more reliably it fires.
also worth noting: if you’re using the newer .cursor/rules/ system, you can scope rules to specific file types with globs. so a React rule only triggers on .tsx files instead of cluttering every conversation.
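e.g. something like this in .cursor/rules/react.mdc (frontmatter fields per the rules docs; the glob is the part doing the scoping, the rule lines are just samples):

```
---
description: React/Next.js component conventions
globs: ["**/*.tsx"]
alwaysApply: false
---

- Always create error.tsx alongside every page.tsx.
- Prefer derived state over useEffect for computed values.
```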
i wrote more about which rules actually work vs which ones don’t here if anyone’s interested.
There seems to be a size limit beyond which Cursor just can’t cope with a task, e.g. refactoring many files to meet a different coding standard.
But if you get it to first make a markdown file with the list of files that need to be processed and have a checkbox AT THE START OF EACH LINE, and get it to work through the files, it works better.
The start of each line seems important. Another time Cursor had defaulted to putting UTF-8 tick characters at the end of each line, and it just didn’t work very well at all.
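For reference, the shape that worked for me is an ordinary markdown task list, where the checkbox leads each line (file names here are made up):

```markdown
# Refactor checklist
- [x] src/Auth/Login.php
- [ ] src/Auth/Logout.php
- [ ] src/Billing/Invoice.php
```

Cursor ticks a box after finishing each file, so it always knows where it left off, and you can see progress at a glance.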
Whenever you see cursor attempt to invoke tools incorrectly, interrupt it and write how to invoke that tool correctly in a document.
In particular, Cursor seemed to struggle a bit with invoking some PHP testing tools correctly. Giving it a document that tells it how to run them avoids it having to rediscover that from scratch each time an agent runs.
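Something as short as this is enough (the paths, levels, and test names here are placeholders for whatever your project actually uses; the phpunit/phpstan flags are standard ones):

```markdown
# Running the test tools

- Unit tests:      vendor/bin/phpunit --testdox tests/Unit
- Static analysis: vendor/bin/phpstan analyse src --level=6
- One test method: vendor/bin/phpunit --filter testInvoiceTotals
```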
I’d also strongly recommend creating a command to review the documents and commands that exist. They will drift out of sync, or otherwise have errors introduced. Manually reviewing them every so often stops them from getting too bad.
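If your version supports custom commands as markdown files (e.g. under .cursor/commands/ — check your docs, the location has changed between releases), the review command can just be a prompt, something like:

```markdown
<!-- .cursor/commands/review-docs.md -->
Read every file under docs/ai/ and .cursor/rules/.
For each one, check that the commands it describes still exist
and that the rules do not contradict each other.
List anything stale or conflicting; do not edit anything yet.
```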