Has anyone else had issues with Cursor cloud agents running a ridiculous number of tests?
Like where it opens a browser and tries to record something over and over.
When I ask it to do something, if I don't check in and watch it closely, it sometimes starts running WAY too many tests. I asked it to fix a bug and left the house, and when I came back hours later it was still running new tests of the same thing. It had run 45 tests!
The only way I've found to stop this is /no-test, but that defeats the point of testing, which is a major reason I think Cursor is unique, and half the time it doesn't work anyway.
This is so frustrating for something that seems like it should have a simple solution.
I'm not sure what else to try, or whether anyone else has run into this.
Hey, thanks for the report. That does sound annoying; it basically removes one of the main benefits of cloud agents.
A couple things that might help:
Set up agents.md / skills. You can give the agent clear testing instructions, such as the maximum number of times it should run tests, when to consider the task done, and when it shouldn't re-check. Docs: Cloud Agent Best Practices | Cursor Docs
Prompt explicitly. Try saying something like "run the test once to verify; do not retry more than 2 times". Without limits, the agent can get stuck in a loop when something doesn't pass by its own standards.
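As a rough illustration of the first suggestion, testing limits in an agents.md might look something like this (the exact wording and structure are up to you; this is just a sketch, not an official template):

```markdown
## Testing policy

- Run the relevant test suite at most once to verify a fix.
- If a test fails, retry at most 2 times, then stop and report the failure.
- Consider the task done once the originally failing test passes.
- Do not open a browser or record sessions unless explicitly asked.
```

The key is to give the agent an explicit stopping condition, so "verify the fix" can't be interpreted as "keep testing indefinitely".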
A few questions to help diagnose:
What kind of project is it (which framework, frontend or backend)?
Do you already have agents.md or skills set up with testing instructions?
Can you share an example request where this happens?