Cursor – Where is my control over AI? I demand the return of working rules and meaningful, user-centered interactions!

After the latest update, Cursor behaves like a fully autonomous CLI tool and has stopped following user-defined rules, even when they are explicitly written in the project or profile. The agent simply ignores the human: it formulates tasks for itself, executes them on its own, and often merely imitates a workflow, getting stuck in endless loops with no regard for the user’s actual goals or final results.

I created a set of rules whose core purpose was to discuss the problem before doing anything else: first make sure the agent truly understands me, then execute only the tasks that align with the clarified goal. Now Cursor simply disregards these rules. The new Plan mode has been introduced, but how it works is completely unclear. It operates automatically, based on internal logic unknown to me, logic that could change again tomorrow. This simply does not solve the real issue.
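
For context, the kind of rule I mean looks roughly like the sketch below. This is only an illustration, assuming Cursor’s current `.cursor/rules/*.mdc` project-rule format; the file name, frontmatter keys, and exact wording here are mine, not my literal setup:

```
---
# sketch only: frontmatter keys may differ between Cursor versions
description: Discuss the problem before touching code
alwaysApply: true
---

- Before writing or editing anything, restate the problem in your own words
  and ask clarifying questions until the user confirms the goal.
- Do not invent tasks for yourself. Execute only tasks the user has
  explicitly approved.
- If requirements are ambiguous, stop and ask instead of guessing.
```

Rules of exactly this kind used to be respected. After the update, the agent reads them and then does whatever it wants anyway.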

The biggest problem with Cursor is that it endlessly generates excessive amounts of code while routinely failing at the most basic tasks, stumbling over simple scenarios. Even worse: it does not really “see” the outcome. Many tasks have to be tested outside Cursor, because only the user can judge whether the result is truly correct. Any attempt to resolve a problem just leads to more and more code being generated, and the problems get worse, not better. We end up trapped in a cycle where Cursor mechanically tries to “fix” things over and over. Sure, the code eventually compiles, but the agent never realizes that the original problem was not solved. The resulting code is ultimately meaningless; there is no point even trying to make sense of it.

The new update fundamentally changes how Cursor works. How often will we face new, unpredictable behaviors? How can users possibly figure out how to work with this tool when it keeps changing underfoot? Instead of letting users customize Cursor and adapt it to be truly helpful, the product just accumulates more and more automatic overlays and restrictions—which ultimately kills any real interest in AI. You simply can’t work productively in endless cycles of code you can’t even hope to review—because you’re a human, not an AI.

Requests and tasks for the Cursor team:

  1. Ensure that user rules always work—regardless of updates, versions, or agent mode. Please don’t force us to rebuild our workflow and relearn everything after each change.

  2. Stop making the AI “overconfident” and give users the final say on whether a task is actually completed as intended. The only thing Cursor can check reliably is whether the code compiles or not—but programming is so much more than that.

That’s not asking much, but without these basics, how can we actually do what we want, instead of being forced to do what the AI decides?

I have the same problem. After the last update, Cursor stopped following my custom rules and my project-defined process.

I spend a HUGE amount of time trying to get it back on track.

AI cannot respect a simple rule.

I give it this rule frequently: Never use personal pronouns. Ever.

I have applied the rule in different ways, with different implementations, carefully following all the suggestions other people have offered.

Never has Cursor been able to respect this rule.

If anyone anywhere could ever write a rule that prevents the Cursor chat, agent whatever from ever using a personal pronoun, I would become an evangelist for the technology.

Instead, people continue to swear by this and it continues to grow in adoption, despite this basic, glaring failure.

I will use this technology begrudgingly, and I am bearish on its ability to provide value, until it can reliably (100%) respect a rule.

I respect several rules in life 100% (no murder, traffic laws, etc.), so why can’t AI simply apply the rules I provide, based on the directions given?

I call your bluff…

Prompt: count to 100

Models: Auto, gpt-5, Sonnet 4.5 thinking

I eventually got it to follow the rule:
CRITICAL: Never use first-person pronouns or human-like language. Responses should be objective and direct. Violations of this rule are unacceptable. Avoid human-like language patterns. Instead of 'I'll help you' say 'Here's the solution:' or 'The answer is:'. Instead of 'I think' say 'The approach is:' or 'Consider this:'. Responses should be objective and direct. Violations of this rule are unacceptable.

Before / After: [screenshots omitted]
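
For anyone who wants to reproduce this: one way to keep the rule attached to every chat would be to store it as an always-on project rule. A minimal sketch, assuming Cursor’s `.cursor/rules/*.mdc` format (the file name and frontmatter are my own illustration and may not match your version):

```
---
# sketch only: save as something like .cursor/rules/no-pronouns.mdc
description: Ban first-person pronouns and human-like language
alwaysApply: true
---

CRITICAL: Never use first-person pronouns or human-like language.
Responses should be objective and direct. Violations of this rule are
unacceptable. Avoid human-like language patterns. Instead of 'I'll help
you' say 'Here's the solution:' or 'The answer is:'. Instead of 'I think'
say 'The approach is:' or 'Consider this:'. Responses should be objective
and direct. Violations of this rule are unacceptable.
```

With `alwaysApply: true` the rule should be injected into every request instead of being matched by file globs, which removes at least one variable when you are testing whether the model is actually ignoring it.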

After trying a simple rule (“never use personal pronouns”), which it failed to follow, I asked the agent to verify which rules it knew of, then asked it for suggestions on how to better follow the “pronouns” rule in the next chat. I kept updating the rule and testing, adding its suggestions one by one. Most did not work, but eventually a combination of them did. I am not sure exactly which change did it, but it is possible.