Manual model selection for optimisation vs Auto mode

Hi,

I’ve seen similar posts, but I just want to know what developers’ current best practices are, or at least how to get the best results.

I’m new to Cursor and I’m developing a full POC for a SaaS in Cursor with the additional help of 4 other agents in ChatGPT (Architect, Product Solutions Engineer, PM and Orchestrator). I keep a common source of truth in 6 files; the GPT agents change those files via patches that I then ask Cursor to apply.

I use Cursor to generate the code from the architect’s prompts: Cursor first produces a technical proposal, which I review with the architect again before approving the code generation. I’ve been using Auto mode for model selection, and Agent mode, since the beginning.

This system has been working quite well, and I always check and understand all the code that Cursor generates (which is a lot).

But the project is getting bigger, and in the last stages I’ve noticed an increase in mistakes that I need to ask Cursor to fix and then verify again. Sometimes they’re simple/silly/obvious mistakes, and other times they’re more conceptual. Is that normal when you develop with Cursor?

Also, in the last stage I ran, generating the technical proposal and then generating the code took about 92% of the context, which I know isn’t good, so I’ll split those two processes into two chats next time. But I wonder whether the model selected would also affect the amount of context used.

I just want to know whether I can improve Cursor’s effectiveness by selecting the models myself, and, in short, how you pick models, or whether you use Auto mode at all for code generation or modification.

Basically, I’m hoping to learn from the more experienced developers here.

Thanks in advance!

Cheers, Clau