Models like GPT-4 or Claude 3.5 Sonnet are limited by their context windows. Conversations can’t go on indefinitely—they have fixed token limits. Every part of the conversation—code snippets, instructions, past messages—takes up space in this context. When the window overflows, the model “forgets” the earliest parts of the dialogue. This isn’t a Cursor problem; it’s a limitation of all LLMs. Even models with long context windows only push the limit further out—they don’t remove it.
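To make this concrete, here is a minimal sketch of how a client might trim conversation history to fit a fixed window. It is illustrative only: real systems count tokens with a model-specific tokenizer (not whitespace words), and the function names here (`estimate_tokens`, `trim_history`) are hypothetical, not part of any real API.

```python
def estimate_tokens(text):
    # Crude stand-in for a real tokenizer: one "token" per whitespace word.
    return len(text.split())

def trim_history(messages, max_tokens):
    """Drop the oldest messages until the total fits in the window."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # the model "forgets" the earliest turn first
    return kept

history = [
    "user: here is a large code snippet ...",
    "assistant: let's refactor it step by step",
    "user: now apply the same change to the other file",
]
# With a small budget, only the most recent turns survive.
print(trim_history(history, max_tokens=12))
```

This is why early instructions can silently stop being followed in a long session: they were dropped to make room for newer messages.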
Here’s an interesting post on how to work with large projects: