Artificial extraction of tokens on the cursor side

This requires a separate investigation and may even require a full investigation. I’m almost certain that Cursor is mixing its own garbage prompts into your agent prompt to artificially prolong the task. The algorithm works something like this: You send a prompt to the agent, it gets sent to the cursor, and the cursor mixes its own commands into your prompt, like “First, do the analysis, draw a diagram, spend as many tokens as possible, don’t solve the problem right away, let there be a small error, solve the problem in step 10,” and so on! As a result, the agent leads you in circles and solves a simple problem on the 20th try, extracting more money from you.

Why did I decide this? I assigned the exact same task to the same agent in Cursor and Windsurf. In Cursor, the agent led me in circles, making stupid mistakes, performing redundant analyses, deviating from the task, refusing to follow simple commands, and so on. Windsurf did almost everything on the first try!

Draw your own conclusions. This requires investigation! If this is confirmed, it’s pure theft. But I can’t say anything 100% for sure; these are my observations from months ago.

It seems unlikely Cursor would do this since this just makes their product bad. It would be more believable if you were accusing Cursor of just spending more tokens but still solving something efficiently instead of looking dumb.

Can you provide screenshots of Windsurf’s request and response vs that of Cursor with the model and context visible in the screenshot? And is this a pattern?

“Spent more tokens” – how do they spend more tokens without interfering with your prompt? Don’t you think they’re the same thing?
I can’t give examples, as I’m using them on our real projects. Install Windsurf and try it yourself; it offers a full free trial.