Cursor feels like a gambling casino…

Not trying to be dramatic, just want to see if anyone else is noticing what feels like exploitation.

Using Cursor kind of feels like gambling. It starts off great. First few prompts, everything’s flowing, you’re making fast progress. You think this is it, it’s gonna build the whole thing. Then you get to that 80 to 90 percent mark and it starts going in circles.

You fix one thing, it breaks another. You ask it to clean something up and it rewrites the whole logic or starts adding weird features you didn’t ask for. One step forward, two steps back.

Every message is a request (give or take). You get 500 requests for 20 USD, and after that it's pay per request. This month, for the first time since I started using Cursor mid last year, I've gone over 145 USD in usage. I've never gone over 30 USD a month before. I'm using it in the same sorts of ways, on the same kinds of projects. Nothing's changed in my usage. But all of a sudden it's chewing through requests like crazy.

It feels like it’s getting better at making you feel like you’re close but actually performing worse overall. Like it knows how to keep you in the loop, constantly prompting, constantly fixing, constantly spending. One more message. One more fix. One more spin.

And this isn’t just on big projects. I’ve seen this with full stack apps, SaaS tools, monorepos, and now even with something as dead simple as a Google Maps scraper. What should’ve taken me 1 or 2 hours max has turned into a full day of prompt loops and it’s still not finished.

Not saying this is some intentional dark pattern but that’s what it feels like. Like it’s built to keep you thinking you’re almost done but not quite. Just enough to keep paying.

Anyone else seeing this?


It's the most common criticism of building with AI, and the problem is clear: AI doesn't care about your codebase getting big. Sooner or later you hit the hard truth: either you spend an enormous amount of money processing the entire context through external tools, or you optimize your code to work better with AI. Check out VSA (Vertical Slice Architecture) for complex projects.
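To make the VSA suggestion concrete, here's a minimal sketch in Python. Everything here (the `CreateUserHandler` name, the in-memory dict store) is hypothetical and framework-free, just to illustrate the pattern: instead of spreading one feature across controller/service/repository layers, each feature lives in one self-contained slice, so an AI tool only needs that one slice in its context window instead of the whole codebase.

```python
# Hypothetical sketch of a single vertical slice. All names are illustrative,
# not from any specific library or the original post.
from dataclasses import dataclass


# --- slice: create_user (everything the feature needs, in one place) ---

@dataclass
class CreateUserRequest:
    email: str


@dataclass
class CreateUserResponse:
    user_id: int
    email: str


class CreateUserHandler:
    """Owns the whole feature: validation, persistence, response shaping."""

    def __init__(self, store: dict):
        # In-memory dict standing in for a real database.
        self.store = store

    def handle(self, request: CreateUserRequest) -> CreateUserResponse:
        if "@" not in request.email:
            raise ValueError("invalid email")
        user_id = len(self.store) + 1
        self.store[user_id] = request.email
        return CreateUserResponse(user_id=user_id, email=request.email)


# Usage: when you ask the AI to change this feature, this single file is
# the entire relevant context, which is the point of slicing vertically.
store: dict = {}
handler = CreateUserHandler(store)
response = handler.handle(CreateUserRequest(email="dev@example.com"))
print(response.user_id, response.email)
```

The trade-off versus a layered architecture is some duplication between slices, but for AI-assisted work the smaller, self-contained context tends to matter more.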


It's been pretty similar for me since the beginning of my Cursor journey, but I think this is common for AI models in their current state.

PS: Last month I hit the 500-request limit for the first time, and it was just two weeks into a new billing cycle.

It's like any new technology: if you understand it and learn about it, the risks and pitfalls are much smaller.

While coding with AI didn't seem to go well at the beginning, by learning about the models and about prompting I managed to get very consistent, high-quality results.

Then, after new types of AI models were released (hybrid/reasoning), the same prompts stopped working well. But that was expected; it's like picking up a new framework in the same programming language, you have to get used to it.

Now, with Claude 4 Sonnet, it's working so well, even in Max mode, that I can't imagine doing it any other way.

If it feels like a casino, there are a few common reasons:

  • Current code quality is not up to standards
  • Not enough preparation or details provided to the AI
  • Too many or irrelevant details provided to the AI
  • Not enough knowledge of the AI models used
  • Not enough knowledge of prompting and how to achieve good results
  • Not enough practice 🙂

agree
