The forum is too negative. I appreciate Cursor a lot

Most of the feedback is to help Cursor make smarter decisions.

Much of it is like that, yeah, and there’s definitely valuable feedback being posted regularly, from many different people.

But there are also posts that are just over the top. In a topic earlier this week, which seems to have since been removed, someone suggested suing the Cursor team because of bugs found in the app.

Also, in my opinion many posts focus on AI models failing to accomplish their tasks and pin that on Cursor, when the AI model itself is more likely to blame. In this regard, I think the forums need more guides that teach people how to boost the quality of AI coding.

I am also looking at this from the perspective of a potential new Cursor user/customer: if I didn’t know anything about the product, I could see myself going to the Cursor forums to get some actual impressions from people who currently use and/or pay for the software. In my opinion, the amount of complaining taking place, some of it constructive and some of it not, can create a false impression that Cursor is a bad product: just look at how many issues people are reporting, essentially saying that Cursor is heading in a bad direction overall.

I guess this is partly to be expected, given that 1) we tend to complain when we experience friction with the tools we’re using, but when we’re happy we just keep using them quietly, and 2) many people are paying for this and they expect the best quality possible.

That’s partly true. Cursor’s custom prompt language, based on JSX syntax, can sometimes make the model respond poorly, and they should take responsibility for that. (It’s most obvious with 3.7 thinking, which shows Cursor’s entire prompt in the thinking section.) Also, they used to have a model router that would silently redirect users who hit a hidden requests-per-minute limit to cursor-small, without any notification, even though it was still deducting premium request credits.

It depends. Cursor doesn’t have a specific roadmap or beta strategy (e.g., with WebStorm, I get to experience UI changes a quarter before the official update). They make sudden changes and force users to change their behavior, as with the update from 0.45 to 0.46+, which leads to a lot of frustration, including for me. I’m still on 0.45 with updates disabled, but I keep checking new releases so I can get used to them gradually.

Overall, I still love Cursor. It’s just that their user base is growing too fast and they can’t adapt in time, or maybe investors are asking them to be profitable… Hopefully, the Cursor team can recruit more talent to grow their team.

2 Likes

@Shelomoh We all do like it, but if we don’t complain about the bad things, it won’t improve. User feedback is in fact making it better.

3 Likes

Let’s break this down … I’ve got my popcorn right here.

100% Correct - This is very true, and I really hope this product takes off.

Yes … this is what we call ‘a signal’. It means that this product is not meeting expectations. But that does not mean it is a bad product!! It means that the product is not doing what the USERS think it should do: the PERCEPTION of the product is different from the FEATURES of the product. This is a BIG SIGNAL, and the frustration comes from the product team not recognizing that, or even worse, seeing it and doing nothing to correct this massive business problem that they have. The business problem IS NOT the product’s performance; it is that their captured audience, who may be different from their target audience, has raised their voice in anger (“support”) and vocalised their demands (“feedback”).

I agree here! Well, most people posting feedback!

I don’t think that this is the case… prompting is a completely different skill from engineering, and I think this assumption is hurting the AI IDE world massively. Much better PR would be appreciated. This is not a garbage-in, garbage-out scenario: while I do agree better prompting helps, a very competent, creative orator can achieve good results with an AI IDE.

Nail → head! This… is the problem: we’re PAYING to be guinea pigs, or at least that’s what it feels like. (Not just Cursor; this is happening all over the place.) Great comment @DanEdens

And this is the issue the AI IDE industry currently has - even though it is evolving, we’re all very impatient! Yes, that’s bad, we’re just very hungry I guess.

This right here - I NEVER join forums, but I want this product to win so much that here I am writing essays.

This right here, @co50 is spot on! The thing is, we’ve seen it many, many times over the last 20 years, and … well… companies do not like it when we notice.

Fingers crossed that this is what is happening behind the scenes.

I mean for me the golden rules are

  • Don’t break what they already have
  • Don’t take away what you already gave them
  • Don’t ignore the ones who pay you
  • Your bills are your own - don’t pass costs onto them without telling them
  • Tell them when you make a business choice that will cause them to raise up pitchforks
  • Get yourself some pitchfork protection!!

Split your releases. If you want to release fast and wild, no problem, you can do that: have an experimental branch. Even have a nightly branch; THAT, that is so much fun.

Anyway, I’m done. Sorry for the really long post; so many people made so many great points from both sides of the fence.

5 Likes

Based on my experience, this issue isn’t related to Cursor. It’s simply how LLMs function: sometimes they magically understand and execute tasks perfectly, and other times they make mistakes that even a junior developer wouldn’t make.

I’ve encountered this even when developing through a chat window without Cursor. I see it consistently while working with LLMs, with or without Cursor; the success rate is never 100%. Even the best models fail about 10–30% of the time, and occasionally the errors are so glaring that they make me want to do a triple facepalm.

I’ve observed this across all versions of Cursor, ChatGPT, and other LLMs I use. However, with Cursor, I’ve noticed that actual bugs are often fixed—though new ones may occasionally appear, the overall trend is positive.

I’d bet that if you analyzed it closely, you could even correlate performance dips with astrological events, like Mercury being in retrograde. But that correlation wouldn’t imply causation, any more than a correlation with versions would.

If I listened to the people crying in every topic that Cursor is getting worse and worse, that version 0.4.1 was the golden one, etc., I probably wouldn’t be able to work by now, when in fact I’m getting more and more productive each month with Cursor.

You’re shifting the blame entirely to the models and attempting to gaslight me.

If you honestly believe that, then you don’t understand how Cursor works.

Cursor’s IDE explicitly indexes your code and runs a variety of home-grown ML models, prompts, and RAG (using Turbopuffer) over your entire project, then passes that context into the LLM. They are a major cook in your code kitchen.

Thus, when they make a new version, they change these underlying parameters, thereby changing the behavior of the LLM.

Again, the performance of an LLM, beyond the core model itself, is based on the context you give it. Cursor is providing that context from your codebase to the LLM: the relevant files, the directory structure, its own LLM-generated summary of your project, etc. (at great cost to them, which is an important incentive that will likely mean serious tradeoffs whenever they want to make them).

The next actor is you, the coder, providing the context for your specific need. But the agreement is tacit: the coder provides their needs, and Cursor ensures that it gives the LLM what is needed.

You are not chatting directly with the LLM.
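To make the point concrete, here is a minimal toy sketch of the kind of context-assembly pipeline described above. None of these function names or the keyword-overlap retrieval come from Cursor itself (their actual pipeline uses embeddings and custom models); this only illustrates the idea that an IDE sits between you and the model and decides what the LLM actually sees.

```python
def retrieve_snippets(index, query, k=3):
    """Toy retrieval: rank indexed snippets by naive keyword overlap.
    A real tool would use embeddings and a vector store instead."""
    def score(snippet):
        return len(set(query.lower().split()) & set(snippet.lower().split()))
    return sorted(index, key=score, reverse=True)[:k]

def build_prompt(user_request, index, file_tree):
    """Assemble the final prompt sent to the LLM: retrieved code,
    project layout, and only then the user's actual request."""
    snippets = retrieve_snippets(index, user_request)
    return "\n".join([
        "## Relevant code",
        *snippets,
        "## Project layout",
        file_tree,
        "## User request",
        user_request,
    ])

# Hypothetical project index and request, purely for illustration.
index = [
    "def parse_config(path): return json.load(open(path))",
    "class UserRepo: ...",
    "def render_dashboard(data): ...",
]
prompt = build_prompt("fix parse_config to handle a missing path",
                      index, "src/\n  config.py\n  views.py")
print(prompt.splitlines()[0])  # prints the first section header
```

The takeaway: change anything in `build_prompt` (what gets retrieved, how it is ordered, what metadata is included) and the model’s answer changes, even though the model itself is identical.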

So no, I’m not correlating Cursor’s performance to astrological events. I am correlating Cursor’s performance to…check notes…Cursor.

2 Likes

I appreciate Cursor as well, but let’s not broad-brush criticism as negativity. Helldivers 2 was becoming unbearable to play, and the media was painting the criticism as negativity. But thanks to the criticism, the Helldivers 2 developers were properly informed about why the concurrent player base was going down. When they addressed and fixed those issues, Helldivers 2 drastically rose back in popularity.

It might not seem to you that criticism is a positive thing, but time and time again it has been proven to help products a lot.

1 Like

The problem is that it uses tokens, for example, reading code and then editing it, and in the middle of editing it just fails and keeps loading; you can wait for an hour with no response. Did it read the code and use tokens? Yes. Did it edit the code? No. I’m talking about a MAX model. That’s the biggest problem, and some days it’s like fighting with a rock while other days it works flawlessly; as far as I can tell, it varies day by day, like how the sun shines. This is on version 0.48.6. I know it’s basically a beta, but AI editing is pretty much the core of this application.

I’m surprised how many people in here are using 0.45 and disabled auto-updates as well. I won’t upgrade until I can move the AI chat to the left side panel again. Idk how anyone is still able to live with that 0.46+ bug.

1 Like

@runpaint I have the AI chat on the left side on the bleeding-edge version 0.48.6. However, I’m a knucklehead who likes the bleeding edge but still wants the option to roll back easily.

Hi, I’m new to these forums, so maybe this has been asked before, but what is the definition of “failed AI requests”? One of my solutions is a multi-project solution where the actual solution file sits inside the web project folder, unlike the usual layout where it sits outside all of the project folders. Cursor struggled terribly when trying to build and compile my project and would burn through many calls that simply failed, trying all sorts of things that did not work. (By the way, the solution is C# .NET Blazor.) Do those calls count? I have blown through my limit, and I suspect many of those calls would not have helped.

Anyway, on to the main topic here. I am a C# dev of about 20 years, and I remember all the code we had to write, the large teams of devs, and the huge amounts of time, for things that are now achieved in seconds. Scary but also amazing. AI, and this tool in particular, is amazing.

They just need to bring back @codebase

3 Likes

Hi, I just started to use Cursor. I’m not a developer, and I managed to create a working demo prototype for one of my projects; I think 4 hours saved me a month of developer work and a lot of money. BUT, when I tried another project with Python, it ran into many issues and failed, and with React and Node it failed too. Then the folders got messy, so I asked it to delete all the project folders. Big mistake: since my root project folder was under my user directory, it deleted almost all my files, including those on my OneDrive! This is not good. I managed to recover, hopefully, all the files it deleted. Has anyone else run into such surprises?

It’s usually the people who hit an annoyance who end up coming to the forums, whether for help or to vent; it happens with most products.

I freaking love cursor. As a dev with many years of experience, I’m finding it amazing for navigating large codebases, reasoning about complex class relationships, and for coding very complex algorithms.
Those who think of it as a tool you just prompt and get a working product out of are, I believe, using it the wrong way. I usually review the output from Cursor in detail; many times I have to retry the prompts a few times, etc. It’s not fair to compare it with a “perfect” and nonexistent solution. The product is amazing.

2 Likes

Many people truly don’t know how to code—some can’t even read it—and struggle to use such tools correctly. Meanwhile, seasoned programmers with extensive experience and exceptional skills can produce outstanding work without any AI assistance. Tools like Cursor empower non-coders to create new possibilities while making experienced developers even more efficient—it’s undoubtedly an excellent product.

But here’s the real question: Should we blame those who can’t code? Should we criticize those who struggle with the tool? Should we mock them? Are they unworthy of using Cursor? Should we keep ignoring the challenges faced by non-programmers?

At the same time, should we dismiss the feedback from experienced developers by claiming ‘even these programmers don’t know how to use it’?

Many communities are desperate for user feedback—they’d welcome it with open arms.

2 Likes

In the LLM world, years of progress can happen in a week. I think eventually Cursor will have to build a foundation LLM of their own instead of relying on Claude, a small update to which can break Cursor’s stability 😂

I had a lengthy post written about people making abusive posts and comments, ruining the community for existing users and future customers, etc.

I won’t post it as long as there are people being abusive in the forum and hijacking threads.

I also won’t be assisting others in the forum anymore, because I too have experienced abuse here and do not have to put up with it.

You couldn’t pay me enough to do this as a job, either.
I just read a post where a free user is extremely abusive and claims Cursor is criminal (when in that case Anthropic had an outage, based on proof from users’ screenshots).

No matter what issues Cursor has or doesn’t have, and whether they’re Cursor’s fault, Anthropic’s fault, or the AI model’s fault:

It does not excuse abuse!

3 Likes

Also, the moderation is toxic. How is this offensive, abusive, or hateful conduct? It’s literally a solution! Cursor was stuck, and deleting the temp user files results in a fresh installation, after which everything worked as new. This is a standard fix for any software. But here it’s a violation!! SMH

1 Like

Hey, this could also be related to someone flagging your message, and the system might have automatically deleted it.

As a genuine user, I simply see this as standard corporate logic, yet it’s framed as righteous outrage from a moral high ground.

‘People are abusing the system! They’re hurting me, corporate profits, and other users!’

Fine, but if companies have vulnerabilities—whether due to technical gaps, intentional market tactics, or other reasons—who knows?

Who even cares? Do users care? Absolutely. They care about privacy, about whether paid and free experiences differ, about value.

Google’s old motto was ‘Don’t be evil’—really? LOL.

Or take ‘Open’AI: how open are they, really? Ha!

Then look at DeepSeek’s open-source move: the U.S. and Europe panic, crying ‘National security risk!’ Oh? Really? HAHAHA.

Those corporations whine, ‘This isn’t fair! Why open-source? (How do we monopolize and profit now?!)’ Meanwhile, users rejoice—more choices, better products, freely accessible. Brilliant!

Yet those same corporations bark from their moral pedestal: ‘Oh, you cut prices? You open-sourced? Fine, we’ll offer free products too! (Temporarily, of course.) ■■■■ open-source—how else can we overcharge users? Stop this!’

Pathetic. It’s classic Silicon Valley: in early markets, every company fights to dominate with their own strategies. But is monopolizing realistic? HA! Honestly, I’m thrilled. Competition benefits users. Want a monopoly? Prove you deserve it.

And some users? Baffling. Why defend corporations unless you’re tied to them? If not, there’s another reason, and only they know what.