Is it just me? Cursor needs many more requests than 1 month ago and delivers weaker quality

It’s been very frustrating. I am hobby coding and adding features that are pretty similar to one another. Since Cursor introduced the new cost model (I switched to legacy), I’ve noticed that both what comes back and how long it takes have gotten significantly worse.

What do I mean?
I always give it a laundry list (contextually close) describing what needs to be done to implement a feature. In the past I could get this done with 2-4 premium calls max on average (sonnet-4). I haven’t changed my approach or the length of the story documents I feed it, but now it not only takes at least 2-3x longer than before (that’s the worst part! I know it was faster), I also need at least double the premium requests to get it shipped, on average 8-12 premium calls.

It feels really sneaky. I’m not sure I’ll continue here; I may try other options.

So my question: do you experience the same?


I agree with you; it feels much worse than before.


Agreed! I made a forum post after Cursor 1.0 released about how amazing the usage experience was and how high-quality the outputs were… Lately that isn’t the case whatsoever. Failed tool uses with sonnet-4, obvious errors, forgetting explicit instructions within 3 turns… it has gotten far worse.

Same here. I switched back to legacy, and everything is definitely much slower than before. :enraged_face:

Lately, any model I use will usually run its own conversation summary during the initial request and then start the same task over from the beginning based on that summary. It’s super frustrating and wastes a lot of my requests.

I’m guessing a lot of you are experiencing this same issue.


Mine kept checking the same terminal and trying to fix an error I had already fixed by hand, even though I stated my solution in the prompt itself.

I had the best experience when Sonnet 4 first launched and cost only 0.75 requests. Now? :sweat_smile: