Has anyone else noticed the agent mode asking permission to go ahead with work a lot more? It’s almost every request, meaning I need to use another request just to say “Yeah, go ahead.” I tried to mitigate it with a Cursor rule, but I’m not even sure it takes the rules into account all the time.
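The rule I tried was something along these lines (paraphrased, just to show the kind of instruction I mean, not the exact text):

    You are in agent mode. Do not ask for permission or confirmation
    before making changes. If you are proposing a single solution,
    apply it directly and report what you changed.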
I do tend to just use Agent with automatic model selection. If I use a model like Sonnet 3.7 explicitly, it doesn’t seem to ask as much, because that model is so trigger-happy anyway.
I tend to just have Auto-select on and haven’t paid attention to which specific model it picks. I must’ve used a good amount of requests just saying “Yes” so the agent completes the work. Is it not a fair assumption that if it’s in agent mode and it’s only proposing one solution, it should just complete that work?
It can just be a bit frustrating to spend a usage request telling the agent “Yes, do the work.”
I don’t have that issue, but I also changed my prompts for the newer models. If you provide more info, it would be easier to see why it’s happening in your case.
It’s not that the issue doesn’t exist, but rather than just stating it doesn’t work, it would need a proper bug report to help the Cursor team analyse the issue and find the cause, or to let us forum members help you as well.
In some cases it’s the specific behavior the model was trained for. Hybrid-reasoning models like Claude 3.7 and Gemini 2.5 have issues where the hybrid reasoning causes hallucinations, but they also expect clearer prompts.
Asking the model “How can I do …?” or “Can you do …?” does not tell it to make changes. It’s better to clearly say “Change the code to …”, as this instructs the model what to do.
Never answer a question from the AI with just “Yes.”
The answer should always be specific and clear: Change the code as discussed.
Or better: Apply the discussed changes until the task is complete.
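For example (illustrative prompts for a made-up task, not exact wording to copy):

    Weak:   “Can you make the save function handle errors?”
    Better: “Change the save function to handle errors: wrap the file
            write in error handling and log failures. Apply the
            changes until the task is complete.”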
If you gave the AI a task and it asked for confirmation, that can happen when the chat is long or the context has conflicting info in it; the AI then gets confused about your priorities.
Was this helpful? Happy to discuss what else you could do to improve the handling.
My prompts are usually quite clear, and it’s frustrating to have to use a request just to confirm I want the agent to do the work. I expect it to do the work when it’s in agent mode, not ask for confirmation.
If the response from the LLM outlines the work it needs to do and asks if I want it to do that work, “yes” should be fine…
I’ve used Gemini and Sonnet 3.7 explicitly most of the time and they don’t ask for confirmation; it only seems to happen with Auto-select (which I’ve just stopped using now).
OK, it might be a bit confusing, as your issue is about the Auto model selection and not agent mode in general. But since overall there isn’t enough info provided for anyone to understand what is happening, I wrote what is generally known.
I accept your claim that your prompts are clear, but without having seen one it’s not possible to see what causes it, right?
Yeah, it is confusing, I agree. I was just posting to see if others had experienced it or whether I was losing my mind. Thanks for your help; it may be something that’s fixed down the line, or it may be a prompting skill issue, time will tell. I’m just going to explicitly use the Gemini/Sonnet 3.7 models for now.
Oh holy ■■■■, the auto-select for models might be it. I’ve had Sonnet explicitly set for ages and didn’t realize that auto-select had appeared and was allowing rando models to sneak in?!
If this works, thank you!!! Cautiously optimistic, it seems good so far.
Quickly adding additional context in case it’s needed, although tbh hopefully just manually re-setting Sonnet fixes it.
I was pretty sure it’s not a rule/prompt issue, since I have 4 Cursor projects and it happens in all of them → 2 are newer, with very clean .cursorrules files.
Basically went through:
settings → user rules
project rules/.cursorrules → also had the agent comb through them multiple times to flag any rules it thought it was following that would make it outline work instead of doing it
Definitely not chat/command specific; the agent very quickly reverts to this behavior.
→ someone in another thread said outputting the instructions into logs fixed it for them, but that didn’t work for me consistently. The model auto-selection issue would actually explain that well, though.
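In case it helps anyone, my understanding of that trick was a rule roughly like this (paraphrased from memory, not the exact wording from that thread):

    At the start of every reply, list which rules you are currently
    applying, so it is immediately visible when a rule is ignored.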