On the Issue of Compressed Cursor Context – For Those Complaining

I started using Cursor in October last year. At that time, Claude 3.5 Sonnet’s context length was 40,000–60,000 tokens (this was documented; my records are gone, but I checked the docs many times and wouldn’t misremember). Later, when Claude 3.7 was released, the context was increased to 120,000 tokens. So in reality, the context length increased rather than decreased; most people just didn’t know. Eventually, Cursor introduced “Max,” which made users aware of the full context and put a stop to the constant complaints that Cursor was compressing context for profit.

The truth is, it has always been this way. I suspect other editors do the same: they also compress context, but they just don’t disclose the exact length. Cursor stated it openly, and the downside is that once the number is public, people will complain, especially new Cursor users who weren’t aware of this from the start.

To reduce costs, context must be compressed; otherwise, token consumption would be enormous. If you really want full context, why not use something like Claude-instant (the Claude API directly)? Oh right, because its token costs are massive (input, output, and caching are all charged). A single request might cost $0.10–$0.30, whereas Cursor at $0.04 per request is much cheaper.
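As a rough back-of-the-envelope sketch of where that $0.10–$0.30 figure comes from (assuming Claude 3.5 Sonnet’s published API rates of $3 per million input tokens and $15 per million output tokens; rates change, so check current pricing):

```python
# Rough estimate of the cost of one direct-API request with full context.
# The rates below are assumptions based on Claude 3.5 Sonnet's published
# pricing ($3/M input tokens, $15/M output tokens), excluding cache charges.

INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for a single API request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A request that sends ~60k tokens of context and gets ~2k tokens back:
print(f"${request_cost(60_000, 2_000):.2f}")  # prints "$0.21"
```

Sending the full context on every turn dominates the bill, which is exactly why an editor that trims or compresses context can undercut the raw API price.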

Finally, let me say this: Complaining doesn’t solve anything—it’s foolish. The Cursor team wants to improve the product far more than we do, aiming to make it accessible to more people. AI is just an assistant tool—don’t expect too much. Idealizing it isn’t helpful; AI has many, many flaws. If it can’t solve your problem, calling it “stupid” and ranting about Cursor online won’t change that.

AI cannot solve every problem. In fact, it performs poorly in some areas. There’s also an element of luck—for example, using the same prompt in two different windows with the same code might yield a solution in one but not the other. The AI’s approach varies each time unless you explicitly specify where the issue lies, forcing it to focus only on that part. Otherwise, its inspection direction changes with each attempt.


“Claude-instant”: that’s a typo. It’s Cline.

People have a right to complain, and many people are, for good reason. I’ve been using Cursor for way longer than you and have noticed a significant and progressive degradation in performance that is clearly directly attributable to the progressive reduction in context retrieval volume. I think most people assume it is/should be compressed, the problem is that they are overdoing it to cut costs. They clearly need to find a better balance/strategy.

Complaining 100% does help to solve problems. The community exists to inform the team about things like this. I assume the team values reasonable complaints since teams that don’t never achieve the success that the Cursor team has.


Not when a complaint about the same issue (“Cursor don’t work”) gets created every two seconds. They are doing their best here. If they really want to provide more context, they’ll have to charge even more, and guess what that leads to: more complaining that it’s expensive and unaffordable. There are reasons to complain, but some of the complaints on the same issue are just repetitive.

I’ve been using Cursor for a while, maybe as long as you, and the reduced context retrieval volume is noticeable, but it’s not the end of the world. If you do research on your files and learn to adapt, give it rules, know when to start a new chat, and pass in details of the relevant files, it works well. And if you want full context, pay for Max.

And yes, they should communicate better; I think they’re working on it given their demand and scale.

It’s a good thing we can complain about problems; otherwise it would mean everything’s fine.
Honestly, I’d be willing to pay $50 a month if there were more transparency and another package offering less context compression etc…

Anyway, my point now is that we need to start preparing new technology/functionality.
1 - Improve memory and detection in the codebase to avoid unnecessary calls, etc.
2 - Find a way to work with two AIs at the same time, taking the same message or splitting the context. To read a codebase, you don’t need GPT 8.0 (just to illustrate how powerful AI is :D); perhaps the integrated Cursor model could do it, and then another AI takes over, aware of what the first is doing.
3 - Why not invent an automatic rule system that we define at the start of the project?

I’m just thinking out loud as I write, but I believe we can find interesting ways to reduce costs even further and improve Cursor’s context handling.

In any case, we’d like more communication from the CURSOR team, and we’d also like a roadmap of what they’ll be doing over the year.


Yes, we all want Cursor to get better and better.

Exactly, I wish CURSOR all the best.
What would be nice is for the team to really speak up about it.

One important point is that even with the arrival of GPT 5 and Claude 4.0, which are going to be monsters, we’ll still have this bottleneck in the software. We really need to find a solution and/or propose another package if that’s really the problem.

After that, they announced a few days ago that they were looking for testers for a new feature, so maybe this feature will help.

We’ll see, but please, cursor team, communicate!

That’s why AiCodeKing said in a recent video that he is no longer using Cursor or Windsurf and has replaced them with Roo Code.

PS: a few weeks ago he himself made several videos showing that Cursor was performing better than Roo Code and Cline.

Something has changed, and the product has become unusable. It instantly forgets what it has just done; you see it continually re-reading files it just created.

The other guy said it, do not idealize AI.

They are not going to be monsters; they will be small, progressive improvements. Perhaps they will just behave differently, the way Sonnet 3.7 differs from 3.5: a change in behavior rather than a big capability jump.

The proof is in all the models released in the past year: they are much closer to their predecessors. Jumps like GPT-2 vs GPT-3, or GPT-3 vs GPT-3.5, are not achieved anymore.

Yes, it has been running fine on my end so far. No degradation of intelligence issues. Cursor is working better this year compared to last year