Agents respond very late in slow mode, stop in the middle of the process, and most importantly, they are not as smart as fast requests. They act like idiots. I also think the quota runs out very quickly.
I wonder if the claim that the agents are dumber in slow mode isn’t just bias on our part. If I wait longer for something, my expectations for that thing get higher, so it’s easier to get frustrated with the model’s inability to accomplish a request.
That being said, I also feel like they get dumber, and I wonder if this is because the Cursor prompt is instructing the models to be more concise, and less verbosity = less thinking = dumber. I would be very surprised if the instructions on Cursor’s behalf included things like “act dumb” or “be less capable”; that seems very unethical. I hope that’s not the case.
For 2 days I cannot solve a problem, and I keep getting more problems.
That’s right. If you use 3.7, Cursor’s system prompt forces the model to respond concisely, giving only the result and no further explanation. They even use “2+2=4” as an example for the model, wtf.
Thanks. I wonder if, by getting the leaked prompts (shaky evidence, I know) and by complaining a lot in the forums, we can change this situation. I’d much rather wait longer in the pool and get the full response without errors (because it isn’t in concise mode) than get frustrated with the lack of capability of the models. The way I see it, for most people (maybe not enterprise), the main bottleneck is coding ability, not model speed or waiting time. More evidence of this is companies investing in features that take longer but deliver better quality, like “deep research” or even the “o-” series models from OpenAI.
To me, at least, it’s evident that users want quality over speed, so I don’t see the reason behind prioritizing conciseness (to a degree at which it degrades performance) over API costs. Maybe Cursor as a company is trying to shift its user base toward vibe coders/builders who cannot distinguish between good and bad code, but even that group eventually gets frustrated when they realize the code only seemed to work and now they’re stuck with a useless pile of code they don’t understand and that doesn’t do what they actually want.
So yeah, I don’t see it. Some clarification on the team’s behalf would be appreciated @deanrie @danperks (marked the most active moderators of the last week; sorry if this gets in the way of any automated routine for checking complaints). I’m a frustrated user, and I think I explained why above. I suspect the company will lose a lot of us if these problems keep happening :/. Just trying to help.
Thanks for sharing the feedback about the slow mode experience.
I want to clarify a few things:
Cursor’s models are not “dumbed down” in slow mode. They use the same underlying capabilities as fast requests; the main difference is request prioritization and queuing.
As you mention, the responses may be perceived as worse simply because you had to wait longer for them!
For your specific situation, if you’ve been stuck on a problem for 2 days, you might want to try:
- Breaking down the problem into smaller chunks
- Planning out the change with Ask and implementing it bit by bit
- Checking out our best practices guide: Cursor – Working with Context