The main issue I face with Cursor and LLMs in general is that as my project has grown more complex, the effort it takes to craft a prompt with enough context and explanation for Claude/GPT-4 to properly implement whatever change or refactoring I want has become so extensive that I often just do it manually. For writing code from scratch it’s still fine, but refactoring has become a pain. Not really sure there’s a solution to this.
I think it’s similar with human developers on their own: the more complex the project becomes, the more complex its maintenance and debugging. We should expect the same with AI. The good news, I think, is that as AI models improve and our systems become more interoperable with them, it will get easier for AI assistants to work with our huge codebases.
The only solution I’ve found is enforcing rigorous modularity. Divide code into as many small functions as possible, and keep classes in separate files. Cursor indexes your files by function, so you can make more targeted changes if you isolate a piece of logic in its own function. And Cursor still struggles to grok any file longer than about 400-600 lines, so that’s the point at which I split them up.
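For what it’s worth, here’s a minimal sketch of what I mean (the payments example and module names are made up, just to show the shape of it):

```python
# Hypothetical layout: instead of one 800-line payments.py, split it into
# small modules with one responsibility each, so Cursor can target them:
#
#   payments/
#       __init__.py
#       validation.py   # input checks only
#       charges.py      # talks to the payment provider
#       receipts.py     # formatting and sending receipts

# validation.py -- each piece of logic isolated in its own small function
def validate_amount(amount_cents: int) -> None:
    """Reject non-positive or absurdly large charge amounts."""
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
    if amount_cents > 100_000_00:
        raise ValueError("amount exceeds the per-charge limit")


def validate_currency(currency: str) -> None:
    """Only allow the currencies we actually support."""
    if currency.upper() not in {"USD", "EUR", "GBP"}:
        raise ValueError(f"unsupported currency: {currency}")
```

With things broken up like this, you can point Cursor at a single short file or function instead of dumping the whole module into the prompt.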
The only thing I want to know is why changes are sometimes applied automatically for me, while at other times Cursor makes no attempt and I have to click ‘apply changes to entire file’. I loved the automatic apply feature, and it seems to have been curtailed in the past week or two.