How big are the projects you use Cursor for?

I use the Claude Sonnet model, as that is what many people recommend. It gives really good results for the first 5-15 prompts. After that, it seems to be replaced by something very simple and useless.

The LLM is unable to correct even small errors; it just deletes working code. For example, when I tell it that moving a file via drag-and-drop onto a form with `<input type="file" />` doesn't work, it says that we have too many click handlers, and Composer responds by removing half of the click handlers from the whole project. When asked what click handlers have to do with dropping files, it only says sorry.
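For context, the behavior I was asking it to fix is roughly the following (a minimal sketch; the element IDs and structure are placeholders, not my real markup), and none of it has anything to do with click handlers:

```ts
// Minimal sketch: forward files dropped on a drop zone into an <input type="file">.
// "drop-zone" and "file-input" are placeholder IDs for illustration only.
const dropZone = document.getElementById("drop-zone")!;
const fileInput = document.getElementById("file-input") as HTMLInputElement;

// The browser only fires "drop" if "dragover" is cancelled.
dropZone.addEventListener("dragover", (event) => event.preventDefault());

dropZone.addEventListener("drop", (event) => {
  event.preventDefault();
  const files = event.dataTransfer?.files;
  if (files && files.length > 0) {
    // Hand the dropped FileList to the file input so the form code sees the files.
    fileInput.files = files;
    fileInput.dispatchEvent(new Event("change", { bubbles: true }));
  }
});
```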

Even if you restore the project from a backup, you can't keep querying the model from that point on, because Composer still thinks the deleted lines should be removed.

And it is not always about deletion. Sometimes the model wants to decorate the interface with questionable flourishes, for example shifting a button 20-30 pixels to the right when it is clicked. You can remove this effect from the CSS five times, but Composer puts the transition back on any prompt that touches that CSS file.

How do you get out of such situations?


I mean, you can tell the model not to do certain things through the AI instructions and .cursorrules files (see the example below). Whenever I run into issues, I have the model stop what it is doing, review the requirements I have given it (stored in a notebook file for easy reference), and validate the code it wrote against those requirements.

This is usually more than enough to get it back on track and solving the issue.
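To give a concrete idea, a .cursorrules file is just plain-English rules. Something like this works (the exact wording is only an illustration, tune it to your project):

```
# .cursorrules (illustrative example only)
- Never delete existing code unless I explicitly ask for it to be removed.
- Before editing, restate in one sentence which requirement you are addressing.
- Do not add CSS transitions or animations unless they were requested.
- Only touch files that are directly related to the current task.
```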

In your case, you need to define some rules and iterate on what you need it to do. Sometimes when Claude is stuck, I'll go to the chat interface, copy the problem over to o1-mini, ask it for a solution with an @web reference, and take that answer back to the Composer agent, and that solves the problem 99.9% of the time.

Use AI to solve AI problems.


This doesn't work all of the time. It will sometimes ignore system prompts and even in-chat prompts and instructions.

The larger the project, the more volatile the LLM and its responses become.