The current issues with Cursor aside, as a newbie programmer I am wondering which of the AI models you find are generally best for working with programs of this size and larger. In particular, I'm wondering which ones you find are best at not creating new problems when fixing one or adding a feature.
Claude Sonnet 3.5 and 3.7 are both fantastic with Python. However, if you are talking about a 3000-line monolithic single Python file, any AI in Cursor may have issues with that. Otherwise, the Claude variants are champs at Python.
Thanks, I’ve been having many of the same issues that others are reporting over the last couple of days, but I am unsure at this point whether the problem really lies with Claude or with Cursor’s implementation. I seem to be having fewer issues this morning, but as I have not seen an update today, maybe it’s improvements on the back end. Hopefully I’m not jumping the gun!
Sonnet is great, but I would suggest getting the AI to break that code up carefully. All of the models struggle when code is 1000+ lines. Modular code is great not just for humans but for AI too.
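As a concrete illustration of what "modular" might look like here (the file names and responsibilities below are purely hypothetical, just to show the shape), a single giant file could be split into something like:

```
project/
├── app.py        # entry point: wiring and the main loop only
├── db.py         # database access (reads/writes, queries)
├── reports.py    # report generation
└── ui.py         # screens, menus, and other display code
```

Each file then fits comfortably in the model's context, and you can point the AI at just the module that needs changing.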
It’s funny, I am 70 years old and just getting back into programming. Many years ago I was a dBase programmer and my largest program was ~14,000 lines of code all written by hand, back in the day when you had to do everything including creating the user interface down to lines and other elements on the screen, without any type of real assistance other than formatting! Yes, I understand… I’ll try to break the code into chunks, thanks.
Haha, I get you; I have also handled thousands of lines of code in one file a few years back, but the difference was that it took us months. Now we can do all that in days.
Maybe someone knows: what is the optimal number of lines per file for Cursor?
I’ve noticed declining performance once you’re getting to 500+ lines; I would definitely refactor once any file is over 1000 lines.
Write a rule to have all files over 400 lines of code broken down into modularized components. Never go over 400 lines of code, IMO, unless you actually know what you’re doing and can assist the model you’re using.
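For anyone wanting to try this, a rule along these lines could go in a rules file at the project root (the exact wording and thresholds below are just an example; adjust them and the file location to however your Cursor setup loads rules):

```
# Modularity
- Keep every source file under 400 lines of code.
- When a change would push a file past 400 lines, split it into smaller
  modules, each with a single clear responsibility.
- When editing a file that is already over 400 lines, propose a refactor
  into smaller files before adding new code.
```

No guarantees the model will always obey it, but it nudges the agent toward smaller files on every edit.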
I tried using such a rule, but the model seemed to simply ignore it. Modular design is the best practice regardless.
Hi, would it be difficult for you to spread the code across different files for future convenience? It also makes the context easier to understand (for the AI), rather than staring at the contents of a file with 1389571235872 lines of code for 100 years.
From my experience comparing Python and C#, the key is the code structure itself, not just the number of lines (those can be accommodated by using an LLM with large context settings). Naming is 50% of success - the more human-readable the code is, the better. Docstrings for everything, plus comments in between for long workflows, are another 30%. Strong typing in C# definitely helps the model grasp the code structure, say, another 10-15%. The rest is your skill at asking the correct question - the LLM needs some hints in the question itself to find the correct answer.
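To make that concrete, here is a small Python sketch of the kind of self-describing code I mean: a descriptive name, a docstring, and type hints (the function and its formula are my own illustrative example, not from anyone's project). An LLM asked to modify or debug this has far more to work with than it would with `def calc(p, r, n):`.

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Return the fixed monthly payment for a fully amortized loan.

    annual_rate is a fraction, e.g. 0.06 for 6% per year.
    """
    if annual_rate == 0:
        # No interest: just spread the principal evenly.
        return principal / months
    r = annual_rate / 12  # monthly interest rate
    # Standard annuity formula: P * r / (1 - (1 + r)^-n)
    return principal * r / (1 - (1 + r) ** -months)
```

The docstring and type hints cost a minute to write, and both humans and models can then answer "what does `annual_rate` mean here?" without reading the call sites.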
I’m separating functions out now. That’s new for me… never had to do that with dBase!
Man… dBase was great. Eventually made several programs for the company I was at with Clipper Summer ’87, then eventually phased out to other things after Clipper 5.3 (I think it was 5.3).
Tens of thousands of lines of amazing handwritten code. I found a tool that helped you build text based menus and that was a game changer.
Lol. Thanks for making me go down that memory lane.
You can literally ask the agent to “please separate this code into more manageable chunks for me” and get pretty compelling results. Best of luck!
Yes, I used a great tool called Snap that would format your code, document all the DBFs and relationships, and basically make it look like I knew what I was doing! Also beta tested and used a Clipper competitor, dBFast. Amazing what we could do on an 80286 with 1MB of RAM…