Cursor keeps forgetting what I say. How can I improve this?

I’m currently using Cursor’s AI Agent on a fairly large project at my company. While there’s a lot I like about it, one of the most frustrating issues is how it handles code modifications and additions.

When I ask it to add or modify code, it often writes it in strange or inefficient ways. Through trial and error, I’ve found that the best approach is to first direct it to specific folders or files and have it study them before coding. This significantly improves the quality of its output.

However, the problem is that the AI quickly forgets what it has studied. As a result, it keeps making the same or similar mistakes repeatedly. I then have to show it the same files again and remind it not to forget, which is frustrating and time-consuming.

This repetitive cycle is becoming a major inconvenience. Do you have any suggestions on how to improve this situation?

Something that has helped me is having very thorough code-commenting requirements in my global rules, as well as in .cursor/rules for project-specific requirements.
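For reference, recent versions of Cursor store project rules as .mdc files under .cursor/rules/, each with a small frontmatter header controlling when the rule applies. A minimal commenting rule might look like the sketch below; the description, glob pattern, and rule text are just illustrative, so adapt them to your project:

```
---
description: Commenting requirements for service code
globs: src/**/*.ts
alwaysApply: false
---

- Before modifying a file, read it fully and preserve its existing comment style.
- Every exported function needs a doc comment covering purpose, inputs, and side effects.
- When adding non-obvious logic, comment on why it works that way, not just what it does.
```

Because rules are re-attached to the context on every matching request, they survive between sessions in a way that ad-hoc chat instructions don't, which directly addresses the "forgetting" problem.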

If it is my first time working on a large file, I will ask the agent to start by analysing the file and adding comments to it before it writes any code. That way, when it later searches through the code, those comments become part of the context it uses to understand it.