Yes, we switched to plain text.
You mentioned previously that this happens when switching models, but I literally changed nothing; I'm still using the YAML on Claude as I was before.
Without seeing more details of how you run it (with what prompt, etc.), it's tricky to state the actual cause. The biggest impact comes from changing models, but I haven't yet gone into the other influences and issues this approach causes just by itself.
AI is not like a calculation where 2 + 2 always results in 4. Besides the model having variations in output, there are likely also differences from:
- Inference provider: whoever runs the model and offers the API. They often tweak model usage, behavior, etc. Their customers, like Cursor, have no real insight into what the provider changes, and those changes happen frequently.
- Cursor: they have launched several changes in the last few days, from new models being used internally (the ones that do the 'magic' behind the scenes that makes Cursor so great) and publicly (prompt adjustments/guidance, how files/docs/… are passed into the prompt), plus several editor updates and new capabilities. All of this influences model behavior.
Overall, there is NO logic why YAML should work better than plain text; it mostly just wastes tokens on spaces and indentation.
Overcomplicating rules with Yaml is just unnecessary.
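To make that concrete, here's the same made-up rule both ways (the rule content itself is just an illustration, not a recommendation):

```yaml
# The YAML framing: the keys, nesting, and indentation all cost tokens
# without telling the model anything it couldn't get from a sentence.
rules:
  python:
    conventions:
      - use type hints on public functions
      - prefer f-strings over .format()
```

The plain-text equivalent, "Python: use type hints on public functions; prefer f-strings over .format()", carries the same instruction in noticeably fewer tokens.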
Thank you. I guess what I'm struggling with, as I've said previously, is that we have this amazing tool that can do pretty awesome things, but no one seems to know how to use it for a consistent outcome. One person's opinion contradicts another's, and another's. I'm yet to see anyone from Cursor comment on it, given it's their product. Yes, they're reliant on the model providers etc., but they still have to tell us how to use this product and its 'features'.
I get that 2 + 2 won't always equal 4, but surely the Cursor team knows the logic behind how rules/prompts should be used. Having users ask in a forum how to use rules is just silly.
You mention "Overcomplicating rules with YAML is just unnecessary", but we now have varying ways to apply rules:
- Rules for AI, which I use now with YAML: positive impact up until today.
- .cursorrules: tried some of the ones from the Cursor rules directory and never had any luck or a positive outcome, so I removed them.
- Project Rules: again, I'm absolutely lost as to how these even differ from .cursorrules, but figured they would also be ignored; I tested, and confirmed they also get ignored.
The YAML/XML/TXT argument aside: surely someone somewhere on this earth can explain in simple terms what best practice is in this situation, even if we can't pin down any concrete way to use Cursor.
I feel like I am missing a big point to be able to use Cursor, other than "AI is unpredictable".
Hope this makes sense?
You’ve hit the nail on the head here and sadly no one has a solution for this. We’re in this crazed phase of AI development where not even the people building the LLMs really understand everything about how they work, or why sometimes they’re just quirky and hallucinate, etc. Instead of spending time figuring that out, they’re working at breakneck speed to just make them better, faster, smarter.
The same effect applies for developer tools like Cursor. It’s all advancing too fast for anyone to really have a handle on it; by the time you figure out one piece of it, technology has advanced and what you figured out probably isn’t relevant anymore.
Then if you factor in how much the way you prompt AI matters… it’s a recipe for chaos. It’s been shown that even minor things like small punctuation or grammatical errors can drastically affect the output of the LLM. If you’re mean and talk down to the AI, or give it criticism that isn’t carefully constructive, it will start performing worse. Even if you’re perfectly consistent and feed it the exact same prompt multiple times in different sessions, the output will vary.
My advice is just try to hang on for this crazy ride we're all on. Don't get caught up in all the "XYZ best practices" everyone else is doing; just try to figure out some things that work well enough for your use case and stick to them. I think at some point things will start to plateau and stabilize, but we're not anywhere close to that yet.
Interesting, I've heard the opposite argument: that having some structure helps the LLM better parse and understand the information, and that XML is the absolute best for LLM understanding, though it's obviously very wasteful of tokens and not very human-friendly.
Considering that, YAML is a good middle ground. It's sure as heck easier for me to understand and maintain in YAML than plain text; maybe it does "waste" a few tokens, but not enough to matter for my use case.
I honestly decided to stop caring about it and created the following setup: a definition of rules for the global project, plus Cursor rules and Rules for AI for the project itself, just so my sanity survives.
Can you expand on what you mean please?
I think that's about it: that's the reason why you're not finding any definitive, prescriptive documentation.
Because AI coding results are not deterministic, results are going to vary. What works great for one person one day might be completely counter-productive for someone else on another day.
It’s like we are all on this crazy bus ride together, and everyone’s drawing their own map, but no one really knows for sure.
TRY THIS GUYS! It's working for me, maybe it will for you too:
I think it's intended that .cursorrules will go away entirely and that all things Cursor, not limited to rules (e.g. MCP configuration files), will live under the .cursor folder. So it's less a place to put multiple rules and more a namespace mechanism to keep various Cursor-based configurations in one tidy spot.
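In other words, something like this layout (the file names here are my guess at the convention, not taken from the docs):

```
.cursor/
  rules/
    general.mdc
    frontend.mdc
  mcp.json
```

With the old .cursorrules file at the repo root eventually going away in favor of everything living under .cursor/.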
First off, great discussion.
A bit out of left field, but has anyone used the rules to point to, effectively, other rules? In my case, I want to have human-readable .md files which specify preferred design patterns at the root of each directory where they apply. E.g. if I have specific ways to do JavaScript, I'd prefer to have a human-readable guide at the top of the app/javascript folder. Similarly, for Ruby "services", I'd have a human-readable .md at the root of our app/services directory.
I'm probably going to try this, but I'm curious if anyone has done this or something similar at scale. Say, 10 other human-readable files that could double as rules, referenced from the designated .cursorrules or .cursor/rules file.
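Here's roughly what I'm imagining the pointer rule would look like (file names are hypothetical, and I don't know yet whether the model reliably follows the indirection):

```markdown
<!-- .cursorrules (or a file under .cursor/rules) acting as a thin index -->
Before editing JavaScript, read and follow app/javascript/STYLE.md.
Before editing Ruby services, read and follow app/services/STYLE.md.
If a per-directory guide conflicts with a general rule, prefer the guide.
```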
lol, the best rules are indeed markdown. There is no need to overcomplicate things.
While you can reference any file in the chat/composer, it does make sense to keep documentation in a central folder.
One thing I think is also missing from the rules definition is a fallback set you can keep in your home directory, say in ~/.cursor/rules/[ext].md. It's annoying to have to keep language-specific rules in each repo you have. I'd rather keep high-level programming rules in my home directory, and keep the repo-level rules to things that are specific to that particular repo.
I’m not sure if a repo-level rule would override or add to the global rules, but I think this is a concept that Cursor should explore. I’d like to be able to commit mine to a personal dotfiles repo which I could easily access between my work and personal laptops.
Another option could be that these global rules could be stored in your Cursor account, and fetched at the start of each new chat session.
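In the meantime, a workaround I might try (to be clear, Cursor does not read ~/.cursor/rules today as far as I know; the paths below are my own convention) is keeping the global rules in a dotfiles repo and symlinking them into each project:

```shell
# Personal rules live in a synced dotfiles repo (assumed layout).
DOTFILES="$HOME/dotfiles/cursor-rules"
mkdir -p "$DOTFILES" .cursor/rules

# Symlink a language-level rules file into the repo so Cursor treats it
# as an ordinary project rule; repeat per language as needed.
ln -sf "$DOTFILES/python.md" .cursor/rules/python.md
```

That way the same high-level rules follow me between machines, while each repo still only commits its own repo-specific rules.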
Like OP, I'm also frustrated at the lack of clarity in documentation. I understand that an LLM can't be forced to obey every single rule exactly, but the Cursor docs could at least attempt to clear up some ambiguity about how we SHOULD attempt to implement these rules. It's the most common thing mine gets wrong: it just ends up ignoring rules in .cursorrules (I'm yet to convert to the new format), and even direct requests in chat, and goes off doing its own thing.
If I could at least know that I’ve configured the rules as the Cursor devs intended, then in those situations at least I’ll know that it’s the LLM that’s got it wrong, not me.