Cursor will win the race against Claude Code

I’m convinced Cursor will win the race against Claude Code.

Our team is obsessed with using things like Cursor Background Agents to maximize developer velocity. We want to get to a point where our codebase is updated so frequently by a beehive of agents buzzing around improving it (fixing bugs, improving documentation, increasing test coverage, reducing redundancy, optimizing metrics like cyclomatic complexity, etc.) that it feels more like an organic, quickly evolving entity than what we typically think of as a codebase today.

Obviously, this is a bit far out.

But it seems like all of the tools like Cursor are driving toward the same goal: stretching the time between required human interventions as far as they can. Let’s be real, though: frequent intervention will likely stay necessary for longer than some people hope. And imo, Cursor as a VS Code fork is just the easiest and most natural place to intervene when you do have to. I’ve tried other agentic tools, and the best workflow is one where I can open up whatever the agent is working on in the IDE setup my entire team (and most devs) already uses. Opening up Background Agents in their remote environment feels just like walking over to an intern’s desk and troubleshooting with them at their monitor.

Cursor seems to be the best medium-term solution for us and likely many other people, and I hope the Cursor team can use that advantage to train some crazy powerful models with RL in the long term. Kudos to the team for their work on Background Agents. We’re looking forward to where things go from here.

Also, yeah, we like not being locked into a single line of foundation models :).

1 Like

Are you factoring in complexity? I’ve found o3 to be a lot slower when complexity is involved. I do see that o3 really takes its time and thinks things through twice before writing code. That’s a great feature, but in my experience Claude Opus 4 and Sonnet 4 both rule them all.

Complexity:

  • Storage
  • Two layers of technology
  • API to AI, with LangGraph (see the sketch below)
  • Six states that all have different settings
  • Writing and displaying code in the UI
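
For a concrete picture of that kind of setup, here is a minimal, hypothetical LangGraph sketch; the state fields, node names, and settings below are my own placeholders, not the actual project, which would presumably have more states and conditional routing.

```python
# Hypothetical sketch of a small LangGraph pipeline: a shared state carries
# per-step settings, and each node stands in for one of the pieces above
# (storage, the API-to-AI call, and writing code out to the UI).
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class PipelineState(TypedDict):
    prompt: str
    settings: dict   # per-state settings, e.g. model name, temperature
    output: str


def fetch_storage(state: PipelineState) -> PipelineState:
    # Placeholder for the storage layer; a real node would load context from a DB.
    return {**state, "settings": {"model": "example-model", "temperature": 0.2}}


def call_model(state: PipelineState) -> PipelineState:
    # Placeholder for the API-to-AI call; a real node would hit an LLM endpoint.
    return {**state, "output": f"generated code for: {state['prompt']}"}


def render_ui(state: PipelineState) -> PipelineState:
    # Placeholder for writing/displaying the generated code in the UI.
    print(state["output"])
    return state


graph = StateGraph(PipelineState)
graph.add_node("storage", fetch_storage)
graph.add_node("model", call_model)
graph.add_node("ui", render_ui)
graph.add_edge(START, "storage")
graph.add_edge("storage", "model")
graph.add_edge("model", "ui")
graph.add_edge("ui", END)

app = graph.compile()
app.invoke({"prompt": "summarize the repo", "settings": {}, "output": ""})
```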

That’s a good point, but the token output speed of the model is not our bottleneck. I can see it being somewhat of a bottleneck during pair programming with the AI, but once you start juggling 3 or more Background Agents or even Cursor tabs running at the same time, it’s hard to keep up.

I will say though, Cursor is currently eating the cost of hosting the Background Agents on remote servers, which is very significant. I’d guess running the AI in Background Agents costs roughly double what it does locally in a Cursor tab. We’d be okay with that, but not necessarily happy about it.

3 Likes

There is no issue with Cursor’s rate-limiting strategy itself, but the rate-limit window is set too aggressively. Most regular developers have stretches where they work intensively for several hours at a time, so expanding the window to twice its current size would be better.

1 Like