And then it will go on to start editing the Cursor rules, sometimes to the point where I ask it to please continue where we left off and the agent replies with something like “Yes, how can I help?”
I know how .cursorrules is supposed to work and this isn’t it.
The model is faking understanding there. It looks like you hit the auto-summarizer that Cursor injects, which then tells the model to repeat the last message and continue as if nothing has changed.
It looks like the model is misunderstanding the role of Cursor Rules and is treating them like something it needs to fix.
I’ve not seen a report of this before, so my initial suggestion would be to double-check the wording of your rules, in case it’s leading the LLM astray.
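For example (made-up wording, not from your rules): a rule that opens with an imperative like “Analyze the current code and keep the rule files up to date” reads like a fresh instruction to act on, whereas “Only edit rule files if the user explicitly asks” reads as a constraint. The first kind is much easier for the model to mistake for a new task.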
Looks like it’s probably 3.7 Thinking based on the second pic. Yeah, I agree this must be at least partially due to the content of the rules (especially given the names of the rules listed and the model’s summary of them).
However, the way the model switches gears so quickly after reading/writing a file (and the way it says “Let me analyze the task at hand”) is still a bit odd. For auto-attached rules, do they get injected at the bottom of the message context, or above the first message like the system prompt? If they’re injected at the bottom, immediately above or below the file the model just read/edited, I could see that making the issue much worse and making it far more likely that the model interprets those kinds of rules as a new request/task to complete (rough sketch of the two placements below).
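Here’s roughly what I mean by the two placements. To be clear, this is purely hypothetical: the message shapes, roles, and wording are my own assumptions for illustration, not Cursor’s actual internals.

```python
# Hypothetical sketch of the two injection orders being asked about.
# NOT Cursor's real implementation; message shapes/placement are assumed.

system_prompt = {"role": "system", "content": "You are a coding agent."}
auto_attached_rule = {
    "role": "system",
    "content": "Rule (auto-attached): keep refactors small and incremental.",
}
user_request = {"role": "user", "content": "Please continue where we left off."}
file_read_result = {
    "role": "tool",
    "content": "<contents of the file the agent just read/edited>",
}

# Placement A: rules folded in near the system prompt, before the conversation.
context_a = [system_prompt, auto_attached_rule, user_request, file_read_result]

# Placement B: rules appended at the bottom, right next to the freshest tool
# output. Being the last thing the model reads, they'd be easier to misread
# as a brand-new request ("Let me analyze the task at hand...").
context_b = [system_prompt, user_request, file_read_result, auto_attached_rule]

for name, ctx in [("A (top)", context_a), ("B (bottom)", context_b)]:
    print(name)
    for msg in ctx:
        print(f"  {msg['role']:>6}: {msg['content'][:60]}")
```

If it’s placement B, that would line up with the model suddenly “re-reading” the rules as if they were the user’s latest ask.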