Figured out why **redacted competitor** beats Cursor on big repos—and the missing capability that Cursor *must* add to agent (or clarify)

This hits home for me. On a larger code base, I've opted to write custom MCP tools that push Cursor in the direction it needs to head: understanding context tailored to specifics, rather than running on "blind faith".

For instance, I notice a few things:

  1. Despite all the rules you give it, adherence just decays after the first prompt and over subsequent ones; it eventually gives up on following them, so the whole prompting aspect goes bad quickly … competitors mostly reinforce the rules through different techniques (one plausible technique is sketched below, after this list).

  2. As the chat grows, the decay creeps in aggressively, unless the work is a repeatable pattern it can build on itself, i.e. a major refactor where you swap, say, async to sync or sync to async in some operations. Because the refactor itself is repetitive, the "context" gives a false impression that the model is "intelligent" and can navigate the problem. Really, the pattern is just consistent and recognizable.

So long chats = bad outcomes, especially on large code bases, which means (to your point) you're reduced to repetitive "let me look at this file" grep pulls/searches and so on.
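I have no idea what the other tools actually do internally, but one plausible reinforcement technique is simply re-sending the rules on every turn instead of letting them scroll out of the window. A minimal sketch, assuming a generic chat-message format (the helper below is hypothetical, not any vendor's real API):

```python
def build_messages(rules: str, history: list[dict], user_msg: str) -> list[dict]:
    """Rebuild the request so the rules never scroll out of the window."""
    return (
        [{"role": "system", "content": rules}]   # rules re-injected every turn
        + history[-20:]                          # bounded slice of recent turns
        + [{"role": "user", "content": user_msg}]
    )
```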

  3. Introducing custom tooling: instead of pushing the agent to rely on what it already knows, it asks these tools questions, and the tools themselves retain context, so it can bounce off them and get back to the point (a rough sketch of the shape I mean follows this list). However, what I'm merely doing here is hijacking the editor for cheaper model costs.. in reality I just made Cursor a slave to MCP tools that I custom-wrote to solve problems for a large code base, and I had to do 10x more groundwork to make it work with Cursor… so the value prop is… I just paid $$ a month for cheaper usage of Gemini 2.5 :smile:

  4. Gemini 2.5 outside Cursor can nail context well for me: I've given it a complex C# code base, asked for refactors, and told it to "validate, don't guess" the operation signatures and the like. It does a pretty damn good job of almost one-shotting it, with minor guesses here and there, especially when operations don't carry a suffix like GetThatThingAsync but are named GetThatThing with an async return type (don't ask). The second sketch below shows the kind of mechanical "validate" check I mean.
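To make point 3 concrete, here is roughly the shape of such a tool. This is a minimal sketch, not my actual tooling: it assumes the Python `mcp` package's FastMCP interface, and the server name, tool names, and naive in-memory store are all illustrative.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-context")  # hypothetical server name

# State lives in the tool process, not in the chat window, so it
# survives the agent's own context decay between turns.
_notes: dict[str, str] = {}

@mcp.tool()
def remember(key: str, fact: str) -> str:
    """Store a fact about the code base for later turns."""
    _notes[key] = fact
    return f"stored {key}"

@mcp.tool()
def recall(key: str) -> str:
    """Retrieve a previously stored fact instead of re-deriving it."""
    return _notes.get(key, "no note under that key")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

The point is the direction of the conversation: the agent interrogates the tool, and the tool, not the chat, is the memory.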

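And the "validate, don't guess" check from point 4 is mechanical. A toy version that flags the naming trap I described; this is a regex stand-in, not a real C# parser (a serious version would use Roslyn):

```python
import re

# Methods that return Task/Task<T> but are not named *Async: exactly the
# case where the model starts guessing at operation signatures.
ASYNC_SIG = re.compile(
    r"\b(?:public|internal|protected|private)[\w\s]*?"
    r"\bTask(?:<[^>]+>)?\s+(\w+)\s*\("
)

def find_unsuffixed_async(source: str) -> list[str]:
    """Return names of Task-returning methods missing the Async suffix."""
    return [m for m in ASYNC_SIG.findall(source) if not m.endswith("Async")]

sample = """
public Task<Thing> GetThatThing(int id) => throw new NotImplementedException();
public Task<Thing> GetOtherThingAsync(int id) => throw new NotImplementedException();
"""
print(find_unsuffixed_async(sample))  # ['GetThatThing']
```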
Cursor… it goes all in on its guesses, despite the rules, despite the reinforced prompting.

With other tools I don't find I have this issue as aggressively. They drift and have their share of classic "AI amnesia / mind-drift" moments, but you have tighter reins to bring them back to center when that occurs. Here… I probably spend more tokens now on "You piece of S**T AI, GET THE .." and so on…

I should point out, this was not a major issue prior to the last ~2 release cycles; the tool has gotten worse for me over the last 2-3 weeks than it was before that.

So they are modifying the recipe somewhere, especially with Google Gemini 2.5, where I'd say a lot of my monthly consumption goes to telling it "proceed" or "you didn't edit that file, try again" over and over and over.

I’d say at least 30-40% of chat consumption is lost to the AI failing to find tools or just bottoming out for no apparent reason.

EDIT: Hiding this thread for “community reasons” is just so weak.


lol yea. This wasn’t meant to be a rant or gratuitously critical, but informational and constructive.

I’m done. No answer, no communication and no plan to fix the problem. Instead, it’s censorship. Not a good look and not a company, as of now, that I’d like to bet my venture on.

Well, I guess I got my resolution.
