Generative AI should not reverse the developer–tool relationship

Hi everyone,

I’d like to open a serious discussion about a core issue I’ve experienced using Cursor, from the perspective of an advanced user.

I’m a senior developer. I know my practices, my architectures, my workflows. I use Cursor to improve efficiency — to delegate specific actions to an AI that can save me time. I don’t need help understanding technical concepts. I need things to get done.


:red_exclamation_mark: Daily experience feels reversed

The AI constantly suggests unverified or incorrect ideas, then asks me to test, check, or fix them.
I regularly get responses like:

“You should look into this.”
“Maybe the issue is here.”
“Try updating this.”

Instead of just doing it.

This isn’t about technical limitations — it’s about posture.

I’m not looking for a manager or a tutor. I need an effective executor — an assistant that takes clear directives and produces clean, contextual code.


:brick: Example

I asked for a simple Angular toggle button that calls a service and logs the state. What I got:

  • an overengineered pub/sub system
  • broken async logic
  • messy code that grows with each patch
  • repeated prompts asking me to verify or take action

All this for a one-line toggle.
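
For comparison, here is a minimal sketch of the kind of component that was being asked for. The ToggleService and its setEnabled method are hypothetical names used only for illustration; the actual service and API were project-specific:

```ts
import { Component } from '@angular/core';
import { ToggleService } from './toggle.service'; // hypothetical service for this sketch

@Component({
  selector: 'app-toggle-button',
  standalone: true,
  template: `<button (click)="toggle()">{{ enabled ? 'On' : 'Off' }}</button>`,
})
export class ToggleButtonComponent {
  enabled = false;

  constructor(private readonly toggleService: ToggleService) {}

  toggle(): void {
    this.enabled = !this.enabled;                 // flip the local state
    this.toggleService.setEnabled(this.enabled);  // hypothetical call delegating the new state to the service
    console.log('Toggle state:', this.enabled);   // log the state, as requested
  }
}
```

No pub/sub layer, no extra async machinery: just the toggle, the service call, and the log.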

That’s not acceptable for a tool at this level — and especially not for a paid service.


:light_bulb: What do we need?

A serious generative AI should:

  • read the context
  • understand the instruction
  • execute, fix, optimize
  • and always do so in the direction the user sets

As senior users, we want to direct the AI, not be directed by it.


:megaphone: Open question to Cursor and the community

How can the tool evolve to better support senior-level workflows, where the AI acts as an executor, not a decision-maker or instructor?

How can we create a smoother, more effective relationship, where the AI takes ownership of its role and the user stays clearly in control?

Looking forward to your thoughts and feedback.


hi :slight_smile:

What you are saying makes sense, and it is close to how I approached using AI from a similar position and experience.

One issue with the expectation that the AI “should know” is that code is currently too full of whitespace tokens and repeated structures for the AI to hold a decently sized project in context without getting confused.

You can, and often have to, counteract this and prevent the AI from doing imaginative work on your code:

  1. Clear rules based on your team or project development guidelines, standards, etc. (a sketch of such a rules file follows this list).
  2. A project outline: the premises and the design or structure choices already made.
  3. Give the AI a persona, for example a Senior Angular Developer; this helps it stay in role.
  4. Task the AI with something specific. It is fine to ask the AI to review the project or suggest improvements, but actual tasks must be assigned specifically and separately, not much differently than when giving a junior dev an assignment.
  5. The AI is trained to provide helpful responses, but trying to be helpful and being helpful are two different things. You can ignore or prevent such output. Under no circumstances answer the AI’s questions with just “Yes” :slight_smile:
  6. The AI needs project and task planning so it can follow the instructions. That will eventually be handled by an AI manager :slight_smile:
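
To make points 1 and 3 concrete, here is a minimal sketch of what such guidance could look like as a project rules file for Cursor. The file name, framework details, and wording are assumptions for illustration only, not an official template; adapt them to your own project:

```
# .cursorrules (illustrative example, adjust to your project)
You are acting as a Senior Angular Developer on this codebase.

Project context:
- Angular with strict TypeScript; async state handled with RxJS.
- Reuse existing services and components before creating new ones.

Working rules:
- Only perform the task you are explicitly given; do not refactor unrelated code.
- If information is missing, state the assumption you are making instead of asking the user to investigate.
- Keep changes minimal and aligned with the existing architecture and style.
```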

Overall I completely agree and think we should plan this out to identify the steps and how it should work.

AI tools for automatically analyzing projects, and the historical reasons certain choices were made, are not there yet. In a few months they likely will be.

Cursor as a whole is used not just by developers but also by writers, analysts, marketing people, etc. And on the development side alone, users range from non-coders/vibe coders to junior devs, senior devs, analysts, and so on.

I can help with organizing and structuring the approach. Several of us here in the forum have tested this with various Cursor-integrated tools and several project or task management approaches, so there is a good amount of experience here to separate what is needed in the near future from preparations for later, when AI becomes more advanced or automated. For now it is definitely still a risk for sensitive projects: generated code looks reasonably correct but sometimes has hidden flaws the AI doesn’t yet understand.

Let me know what you think and how you would like to approach this.

You might want to try out this project — its main purpose is to help agents better understand your project:

Thanks for sharing the MCP. I read the readme but could not see how it actually works. Does it do all its steps locally, or does it send requests to any third-party AI (outside of Cursor MCP calls and the models configured in Cursor)?

Thanks for your question!
No additional third-party AI is required — everything runs using Cursor’s own configuration. Shrimp is essentially a prompt template system. It guides the agent through a series of carefully crafted prompts that simulate a human-like development process: gathering related information and code, checking for reusable components, and finally generating code that aligns closely with the project’s architecture and style.
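
For anyone who wants to try it: MCP servers are registered in Cursor’s mcp.json configuration. A rough sketch of such an entry is below; the server name and launch command here are placeholders and should be replaced with whatever the project’s readme specifies:

```
{
  "mcpServers": {
    "shrimp-task-manager": {
      "command": "npx",
      "args": ["-y", "<package-name-from-the-readme>"]
    }
  }
}
```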


Maturity takes time. As with any tool set, its effectiveness grows with experience and the quality of the staff making improvements. If you have a junior developer, you provide them with detailed, explicit instructions and then closely monitor their responses. As their manager, you come to understand their strengths and weaknesses. You respond to them to make them better, and as they get better you can lessen your detail. Such is the way with AI, based on my limited experience. I provide the most extensive detail I can about the requirements for a component. During testing I see Cursor’s limitations, and I now make sure to stop and reset interactions. Hopefully, as was the case many years ago, they are using these interactions to further train the model. In one year, we can assess where they are against where we see them today. That is the true measure. One final thought: the effectiveness of any tool is how well the wielder of that tool uses it. Adapt and prosper.

Completely agree; we have to use the tools provided and learn how to use them correctly :slight_smile:

Regarding data usage for training the internal model, there is a Privacy option in Cursor Settings (on/off). When active, it prevents Cursor’s model from learning from your prompts, which is recommended for sensitive projects. For non-sensitive work you can leave it off if you would like their models to improve.

However, Google (Gemini), OpenAI (GPT), and Anthropic (Claude) are accessed via API, and data processed by those LLMs is never used for training.