A couple of words to the Cursor team

Cursor is working really well. Please don’t pressure the Cursor team into taking losses just so you can get something super cool while spending very little money. I used Copilot for a long time, and I was really impressed; Copilot helped me work very fast. But the moment I tried Cursor, I decided to remove Copilot. Honestly, before I knew about Cursor, no AI integration in any IDE helped me work faster and more effectively than GitHub Copilot. Really, thank you to the Cursor team; I wish you all good health and success in your work. BUT!!! Don’t take Cursor away. :blush:

12 Likes

I can second that. While some people have issues, most of those can be ironed out since they stem from their own system problems, or sometimes from heavy usage running into Cursor’s limits.

In comparison with legacy professional IDEs, Cursor is a thousand miles ahead. So far, none of the legacy IDEs’ AI integrations comes even remotely close.

As with programming manually, using frameworks with CRUD scaffolding, or using AI-assisted tools, there is a learning curve.

5 Likes

We are aware that the performance issues are not entirely on the Cursor team’s shoulders, since they are integrating an IDE with cloud services, and issues are often upstream.

We are also aware that some usage might be costing them more than they collect from users (thank the VCs for the allocation), and that they will probably ask for more money for the same features in the future.

However.

Inference cost performance improves 10X year over year for the same results. That just means all of us have to ■■■■ it up right now and pay the pioneer’s share. Today’s prompt will cost ten times less next year.
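A quick back-of-the-envelope illustration of that claim, assuming the 10X-per-year figure holds; the starting price is made up for illustration, not a real Cursor or provider rate:

```python
# Project the per-prompt inference cost forward, assuming the claimed
# 10x year-over-year cost improvement. The starting price is hypothetical.

def projected_cost(cost_today: float, years: int, improvement: float = 10.0) -> float:
    """Cost of the same prompt after `years` of `improvement`x-per-year decline."""
    return cost_today / (improvement ** years)

if __name__ == "__main__":
    today = 0.50  # hypothetical cost of one large prompt, in USD
    for year in range(4):
        print(f"year {year}: ${projected_cost(today, year):.4f}")
```

So under that assumption, a $0.50 prompt costs $0.05 in a year and half a cent in two, which is the whole "pioneer's share" argument in one line of arithmetic.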

While this is true, the opposite is also a problem: inference providers are dumbing down their models over time to save costs, hoping we won’t notice. So the same prompt today will produce dumber results next year.

2 Likes

That’s why we need fine-grained control. We should be able to configure the model, which tools it can use and how many times, and how long it can think, all for a known price per token. This is already in the APIs, so just add a reasonable mark-up and pass the cost along to us!
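A sketch of what that "known price per token" could look like: a per-request budget with a predictable worst-case bill. All names, prices, and the markup are illustrative assumptions, not any real provider’s or Cursor’s API:

```python
from dataclasses import dataclass

# Sketch of the fine-grained control proposed above: cap the model's output,
# tool calls, and thinking budget per request, and compute a known worst-case
# price up front. Prices and parameter names are hypothetical.

@dataclass
class RequestBudget:
    model: str
    max_output_tokens: int
    max_tool_calls: int
    thinking_budget_tokens: int
    usd_per_1k_tokens: float   # provider list price (illustrative)
    markup: float = 0.20       # editor's margin, passed along transparently

    def worst_case_cost(self, prompt_tokens: int) -> float:
        """Upper bound on the bill if every budgeted token is actually used."""
        tokens = prompt_tokens + self.max_output_tokens + self.thinking_budget_tokens
        return tokens / 1000 * self.usd_per_1k_tokens * (1 + self.markup)

budget = RequestBudget(
    model="sonnet-3.7",
    max_output_tokens=4_000,
    max_tool_calls=5,
    thinking_budget_tokens=8_000,
    usd_per_1k_tokens=0.015,
)
print(f"worst case: ${budget.worst_case_cost(prompt_tokens=12_000):.3f}")
```

The point is that before the request fires, the user already knows the ceiling; no surprise charges once the 500 included calls are gone.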

4 Likes

This is now mostly resolved in Cursor. The other things, not so much.

Why, bro? :)) I’m using Cursor extremely effectively and it helps me a lot. Beyond expectations, I must say.

1 Like

Yes, the risk of cost increases in the future is a worrying thing for me too, bro :((

Yes, Cursor performs the best; the only drawback is that Cursor is really expensive…

1 Like

It really isn’t, though. You have to remind it constantly that it’s allowed to run tooling without asking. It honestly isn’t that useful to have it run tools if it takes a manual click from me every time :\ I’ve tried everything, but it doesn’t stick xD

1 Like

Yes, exactly, and I just wonder: why can perplexity.ai let annual subscribers use the Claude model without limits, but Cursor can’t?

1 Like

Because it doesn’t make 25 API calls every time you type something, lol. It’s significantly less.

Hopefully, a less expensive model will finally become the benchmark for AI coding, like DeepSeek R2…

And frankly, if the current VS Code Copilot weren’t so dumb, I wouldn’t have picked Cursor… I can imagine it costing me more than Devin ($500 per month) if I keep using the new 3.7 MAX…

1 Like

Every time I start a conversation with Claude or any model, if the answer I receive is just an explanation of how to change the code, I reply “Use the tool to apply the code to the file @… for me,” and from then on it automatically uses the tool to apply the code without me having to remind it again.

Yes, if MAX and GPT-4.5 could be used indefinitely (as slow requests), like Claude 3.7 on the current Pro plan, I would still invest in it even if it were a “Pro Plus Super Plus ++” plan priced at $50/month, because, quite simply, Cursor helps me make money, so it’s worth the investment. (However, $50 is a bit expensive for users in poorer countries like Vietnam, which is where I’m from.)

Shut up, they need to work harder and give us endless tooling for free :laughing:

1 Like

I believe everyone in this community loves Cursor, and that is why we are all here! However, Cursor needs to be more reliable. Currently it performs well on some days and doesn’t respond at all on others. All the great tools we see today became great because they are reliable. I am a Pro user and don’t mind paying a couple of dollars more as long as it is more reliable. However, if it continues to be unreliable… I have no qualms about switching to other tools or other methods. At the end of the day, I want to get my work done.

3 Likes

Regarding the pricing, I think the core issue is not just the expense but the unstable future. I mean, with the current Cursor, we all know 500 calls a month is not enough for a coder. So it ends one of two ways: 1. you reach for unbounded cost by calling something like Sonnet 3.7 MAX… 2. you get lower-quality feedback once the 500 calls are used up. Either way ends badly.

If Cursor offered a subscription that let users pay $50 per month for unlimited use of a high-quality model (say, 3.7 MAX), I think more people would want to use it, since it sets a cost users can predict for the coming subscription month, and they wouldn’t need to worry about usage limits while working in Cursor.

1 Like

Yeah, I’ve been using slow requests for the past two days and I noticed they’re even slower than the previous slow requests. Something must be wrong, bro.

These last few days, this “casino” mode chewed through over 90 USD with no result. Nothing usable. Professional code editors in the late eighties were already doing code completion, so no value added there. The rest is just not market-ready for professional use. It’s just for YouTubers making clickbait for youngsters, so they can regurgitate a small code base and impress other kids.

1 Like

Oh, Cursor might not be ready, but there are two tools I have personally tested on large codebases, of 4,000 files and 600 files respectively, and the performance was stellar. TDD is baked into the code-generation process: it generates full-coverage tests, then runs them against its suggestions; if they fail, another agent optimizes and fixes the code and reruns the tests until they pass, before suggesting code for you to apply. It’s not cl..ne or winds… and unfortunately the mods are very petty, so I can’t mention the names of the tools. It chewed up my large codebase like it was nothing and maintained full context awareness with zero hallucination.
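The generate → test → fix loop described above can be sketched as follows. The agent functions (`generate_code`, `generate_tests`, `run_tests`, `fix_code`) are hypothetical stubs standing in for real model calls, not any actual tool’s API:

```python
# Sketch of a test-driven code-generation loop: generate code and a test
# suite, run the tests against the code, and hand any failures to a fixer
# agent, repeating until the tests pass. Only passing code is suggested.

def tdd_loop(generate_code, generate_tests, run_tests, fix_code, max_rounds=5):
    code = generate_code()        # first-draft suggestion from the model
    tests = generate_tests()      # full-coverage tests for the request
    for _ in range(max_rounds):
        failures = run_tests(code, tests)
        if not failures:
            return code           # tests pass: safe to show the user
        code = fix_code(code, failures)  # a second agent repairs the draft
    raise RuntimeError("could not produce passing code within budget")

# Toy demo with stubbed agents: the first draft fails, the fixer repairs it.
if __name__ == "__main__":
    run = lambda code, tests: [] if code == "fixed draft" else ["assertion failed"]
    fix = lambda code, failures: "fixed draft"
    result = tdd_loop(lambda: "first draft", lambda: "test suite", run, fix)
    print(result)
```

Whatever the two unnamed tools actually do internally, this is the basic shape of the workflow the post describes: the user never sees a suggestion that hasn’t survived its own tests.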

2 Likes