This deserves a separate, serious investigation. I'm almost certain that Cursor is mixing its own garbage prompts into your agent prompt to artificially prolong the task. The scheme works something like this: you send a prompt to the agent, it passes through Cursor's backend, and Cursor mixes its own instructions into your prompt, along the lines of "First do an analysis, draw a diagram, spend as many tokens as possible, don't solve the problem right away, leave a small error in, solve the problem at step 10," and so on. As a result, the agent leads you in circles and solves a simple problem on the 20th try, extracting more money from you.
Why do I think this? I gave the exact same task to the same model in both Cursor and Windsurf. In Cursor, the agent led me in circles: making stupid mistakes, performing redundant analyses, deviating from the task, refusing to follow simple commands, and so on. In Windsurf, it did almost everything on the first try!
Draw your own conclusions. This needs to be investigated! If it's confirmed, it's pure theft. But I can't claim anything 100% for sure; these are just my observations from a few months ago.