Hi @hodela, thanks for posting your feedback.
From my personal experience, Background Agents are very effective. I can start them from my phone while taking a taxi and tell them to fix a bug or implement a new feature, usually with Sonnet 4 Thinking. There are cases where I have to provide more information or a correction to complete the task, but overall it's about 10x faster than coding by hand, and when I do have to change code myself it's mostly just config files. The results are realistic and it works surprisingly well!
Different people will naturally have different experiences. Results can depend on the programming language, framework, setup, project code quality, and choices made in project structure and architecture, but also on prompting: which rules are used and how many, and how much context is attached or referenced.
Cursor has different team members working on different tasks, and from my personal experience (I am not Cursor staff and do not see what they do internally) they have done a great job improving the app, their services, the quality of AI task handling, the speed of AI responses, and many other details that are very important for us developers. Background and Web Agents are amazing and allow parallel tasks, whereas in the past I had to wait for the Agent to slowly finish its output.
As you explain, you write very detailed instructions and explain to the AI how to write the code… that does not sound right. There is clearly an issue worth checking in detail, starting from simple prompts and a proper project setup.
For example, here is my workflow; feel free to compare it with your steps.
I removed most rules, removed the detailed explanations, and wrote the following as the main and only rule:
Follow SOLID, DRY & SRP.
(Mentioning the programming language and framework can help as well)
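For reference, if you keep that as a Cursor project rule (for example a file like .cursor/rules/core.mdc; the file name and frontmatter are just how I happen to set it up, adjust as needed), it can be as small as this:

```md
---
description: Core coding principles
alwaysApply: true
---

Follow SOLID, DRY & SRP.

<!-- Optional and just an example: name your own stack here -->
The project uses TypeScript with Next.js.
```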
Next I write a basic markdown file in docs/ for the new feature to implement (e.g. docs/feature_name.md). It is literally just the requirements, without saying what, where or how anything needs to be changed. Then I tell the Agent (Web, Background or Desktop):
Analyze requirements in docs feature_name feature MD, check existing code to see what needs to be changed and create an implementation plan as MD file in same folder.
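To show how minimal that requirements file can be, here is a made-up example (the feature and its bullets are purely illustrative; note that it says nothing about what, where or how the code should change):

```md
# Feature: CSV export for reports

## Requirements
- Users can export any report as a CSV file from the report page.
- The export respects the filters currently applied to the report.
- Large exports must not block the UI.
- Exported files are UTF-8 and include a header row.
```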
After that it creates a quite good implementation plan, and I give it some adjustments, with a short reason why each one is important. This is normal and to be expected; even humans need frequent feedback.
Adjustments:
- Change … to … as this improves stability
Next I ask the Agent to implement the changes:
Implement features in the implementation plan in docs feature_name and update a progress.md file there.
The Agent does that, and once it is done some adjustments are sometimes needed, but most of the code is really good and correct.
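The progress.md it maintains is nothing fancy, just something like a simple checklist (an illustrative sketch, reusing the made-up CSV example from above):

```md
# Progress: CSV export for reports

- [x] Add CSV serializer and export endpoint
- [x] Add "Export CSV" button on the report page
- [ ] Move large exports to a background job
- [ ] Tests for serializer edge cases
```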
What happens:
- The Agent checks my codebase and understands it well.
- The Agent writes code or documentation, depending on what I ask it to do.
- The Agent writes tests, which is very important, but if I just want to try things out and tell it not to write tests, then it doesn't.
- I spend more time reading the code and checking that everything is correct than the Agent takes to write it, and it produces about 20,000 lines of code per day, so it is in no way slower than coding by hand or using Cursor Tab.
I have experience with Copilot, Claude Code, and several other similar tools/IDEs, and so far none of them is as advanced in code output as Cursor.
Rest assured that Cursor focuses on Agents writing code, as this is their business.
If you have specific issues with your prompts, AI performance or results, please let me know; I would be happy to look at how your results can be improved further. I completely agree that the AI should not take more time to code than you would.