When I approach 6000+ lines of code, cursor seems to start making really bad, unhelpful decisions. To a point that it’s no longer useful at all.
How are other people managing this?
I split each function into its own file and made sure each file covers only one basic concept. This makes the project structure hard for a human to track and organize, but it doesn’t seem to be a problem for the AI.
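For illustration, a one-concept-per-file layout might look something like this (hypothetical file names, assuming a Python project):

```
src/
  parse_config.py      # only parse_config(): raw config text -> dict
  validate_config.py   # only validate_config(): checks keys and types
  load_defaults.py     # only load_defaults(): built-in fallback values
  merge_settings.py    # only merge_settings(): user settings over defaults
```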
It’s also helpful to have good rule files, a documentation base, and an organized file index.
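The file index can itself be a small doc the AI reads first; a hypothetical sketch:

```
INDEX.md
- src/parse_config.py    : parses raw config text into a dict
- src/validate_config.py : checks required keys and value types
- src/load_defaults.py   : provides built-in fallback values
- src/merge_settings.py  : merges user settings over the defaults
```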
You can also check the Memory Bank post a few posts down from this one; that approach is helpful as well.
Thank you for this! Do you mind my asking how you shape the rules? Setting the rules first and then building from there sounds really helpful!
Typically, I start with a straightforward task that I already know how to do. I’d suggest first letting the AI work through some DSA problems from LeetCode, iteratively guiding the agent to update its own rule file as it goes. The aim of this process is to give the AI the ability to debug against specific requirements.
Or you can just let the AI start on the project you’re currently working on, and ask it to update the rule file every time you’re not satisfied with its procedure.
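As a concrete illustration (not my actual file), rules accumulated this way might end up looking something like this in a Cursor rules file such as `.cursorrules`; every entry here is hypothetical:

```
# .cursorrules (hypothetical entries, grown one at a time after bad runs)
- Before editing a function, read its doc file and list its callers.
- Propose a plan before touching more than one module in a single change.
- When a test fails, quote the failing assertion before suggesting a fix.
- If a requirement is ambiguous, ask a clarifying question; never guess.
```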
Assigning a completely different kind of task is also good practice, both for helping the AI generate general rules and for improving the way I instruct it. For instance, I’ve tried having an AI read a novel and extract specific information I’ve defined, then observed the result to determine whether my command was fuzzy or the AI performed incorrectly.
I think the most important thing is to reduce the ambiguity of user commands and never assume the AI will execute a task correctly; the long-context-window limitation is still a significant problem.
Furthermore, I believe multimodal execution remains a weak point for general LLMs. While one can always build tools to enhance capabilities for images, audio, or other modalities, inherent weaknesses persist. For example, I had an AI attempt to build an image classifier for simple static scenes, but it performed this task extremely poorly; it seems to be particularly weak on geometric features.
Above all, never assume that my approach is correct; I’m still exploring the capabilities of rapidly developing AI tools.
Maybe in a couple of generations it will handle all of these tasks and make everything we currently do completely obsolete.
Lots and lots of documentation. You write a function, or a feature? Task the AI to write proper documentation for it. (What’s the purpose, how to use it, what tech is used, examples, where it’s used, how it interacts with other features - you get the idea.)
Depending on your project’s structure and size, this can mean lots and lots of documentation files (a single feature, or a single overview, per file). You can also cross-link them so they refer to each other.
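For illustration, one of these per-feature doc files might follow a template like this (the feature and all names in it are made up):

```
# Feature: rate-limiter (hypothetical example)

## Purpose
Throttles outbound API calls to stay under the provider's quota.

## How to use
Call limiter.acquire() before each request; it blocks until a slot is free.

## Tech used
Token-bucket algorithm, standard-library threading only.

## Examples
See examples/rate_limiter_demo.py.

## Where it's used
api_client, sync_worker

## Interactions
Shares its config block with retry-policy (see retry_policy.md).
```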
Then just refer to this with @yourdocfile to keep the AI on track.
Works like a charm.