Yes, we switched to plain text.
You mentioned previously that this happens when switching models, but I literally changed nothing; I'm still using the YAML on Claude as I was before.
Without seeing more details of how you run it, with what prompt, etc., it's tricky to state the actual cause. Changing models has the biggest impact, but I haven't yet gone into the other influences and issues this approach causes just by itself.
AI is not like a calculation where 2 + 2 always results in 4. Besides the model having variations in output, there are likely also differences from:
- Inference provider: whoever runs the model and offers the API. They frequently tweak model usage, behavior, etc., and their customers, like Cursor, have no real insight into the various changes the provider makes.
- Cursor: they have shipped several changes in the last few days, from new models being used internally (the ones that do the ‘magic’ behind the scenes that makes Cursor so great) to public changes (prompt adjustments/guidance, passing files/docs/… into the prompt), plus several editor updates and new capabilities. All of this influences model behavior.
Overall, there is NO reason why YAML should work better than plain text; it mostly just wastes tokens on spaces/indentation.
Overcomplicating rules with YAML is just unnecessary.
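To make the "wasted tokens" point concrete, here is a rough sketch comparing the raw size of the same made-up rule in YAML versus plain text. Character count is only a crude proxy, since actual token counts depend on the model's tokenizer:

```python
# Rough illustration of the "YAML wastes tokens" point.
# The rule text is made up; character counts are only a crude
# proxy for tokens, since tokenization depends on the model.

yaml_rule = """\
rules:
  - name: style
    details:
      - Keep functions under 40 lines
      - One responsibility per function
"""

plain_rule = """\
Style: keep functions under 40 lines.
One responsibility per function.
"""

# Same instructions, but YAML spends extra characters on
# indentation, dashes, and key names.
print(len(yaml_rule), len(plain_rule))
```

Not a huge difference on one small rule, but it adds up across a large rule set.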
Thank you. I guess what I'm struggling with, as I've said previously, is that we have this amazing tool that can do pretty awesome things, but no one seems to know how to use it for a consistent outcome. One person's opinion contradicts another's, and another's. I have yet to see anyone from Cursor comment on it, given it's their product. Yes, they're reliant on the model providers etc., but they still have to tell us how to use this product and its ‘features’.
I get that 2 + 2 won't always = 4, but surely the Cursor team knows the logic of how rules/prompts should be used. Having users ask in a forum how to use rules is just silly.
You mention “Overcomplicating rules with Yaml is just unnecessary” - but we now have varying ways to apply rules:
- Rules for AI, which I use now with YAML - positive impact up until today.
- .cursorrules - tried some of the ones from the cursor rules directory and never had any luck or a positive outcome, so removed them.
- Project Rules - again, absolutely lost as to how this even differs from .cursorrules, but figured they would also be ignored - tested and confirmed they also get ignored.
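For reference, the Project Rule I tested was just a file under `.cursor/rules/` with a frontmatter header, something like this (the exact path and frontmatter keys here are my reading of the docs, so they may be off):

```
# .cursor/rules/style.mdc
---
description: Project coding style
globs: ["src/**/*.ts"]
alwaysApply: false
---
Prefer small, single-purpose functions.
```

whereas `.cursorrules` is a single plain file in the repo root.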
YAML/XML/TXT argument aside - surely someone somewhere on this earth can explain in simple terms what best practice is in this situation, even if we can't pin down any concrete way to use Cursor.
I feel like I'm missing a big point in being able to use Cursor, other than that AI is unpredictable.
Hope this makes sense?
You’ve hit the nail on the head here and sadly no one has a solution for this. We’re in this crazed phase of AI development where not even the people building the LLMs really understand everything about how they work, or why sometimes they’re just quirky and hallucinate, etc. Instead of spending time figuring that out, they’re working at breakneck speed to just make them better, faster, smarter.
The same effect applies for developer tools like Cursor. It’s all advancing too fast for anyone to really have a handle on it; by the time you figure out one piece of it, technology has advanced and what you figured out probably isn’t relevant anymore.
Then if you factor in how much the way you prompt AI matters… it’s a recipe for chaos. It’s been shown that even minor things like small punctuation or grammatical errors can drastically affect the output of the LLM. If you’re mean and talk down to the AI, or give it criticism that isn’t carefully constructive, it will start performing worse. Even if you’re perfectly consistent and feed it the exact same prompt multiple times in different sessions, the output will vary.
My advice is just to try to hang on for this crazy ride we're all on. Don't get caught up in all the “XYZ best practices” everyone else is doing; just try to figure out some things that work well enough for your use case and stick to them. I think at some point things will start to plateau and stabilize, but we're not anywhere close to that yet.
Interesting, I’ve heard the opposite argument that having some structure helps the LLM better parse and understand the information. That XML is the absolute best for LLM understanding, but it’s obviously very wasteful of tokens and not very human friendly.
Considering that, YAML is a good middle ground. It's sure as heck easier for me to understand and maintain in YAML than in plain text; maybe it does “waste” a few tokens, but not enough to matter for my use case.
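The trade-off is easy to see with the same made-up rule in both encodings; a sketch (character counts are illustrative only, real token costs depend on the tokenizer):

```python
# Same rule, two encodings: XML pays for closing tags,
# YAML pays only for indentation and dashes.
# Rule content is made up for illustration.

xml_rule = (
    "<rule>\n"
    "  <name>style</name>\n"
    "  <detail>Keep functions under 40 lines</detail>\n"
    "  <detail>One responsibility per function</detail>\n"
    "</rule>\n"
)

yaml_rule = (
    "rule:\n"
    "  name: style\n"
    "  details:\n"
    "    - Keep functions under 40 lines\n"
    "    - One responsibility per function\n"
)

print(len(xml_rule), len(yaml_rule))
```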
I honestly decided to stop caring about it and just created one set of rule definitions covering everything: the global Rules for AI, the Cursor rules, and the rules for the project itself, so my sanity survives.
Can you expand on what you mean please?
I think that's about it: that's the reason you're not finding any definitive, prescriptive documentation.
Because AI coding results are not deterministic, results are going to vary. What works great for one person one day, might be completely counter-productive for someone else on another day.
It’s like we are all on this crazy bus ride together, and everyone’s drawing their own map, but no one really knows for sure.
TRY THIS, GUYS! It's working for me, maybe it will for you too: