What is the optimal number of lines of code per file?

I am new to vibe coding, and I am running into major limitations with the tool on larger files. In my .cursorrules, what should the optimal rule set be so the Cursor AI tools can work with the code quickly and efficiently? When I prompted it to clean up an implementation, the tool said it can only work with 350 lines of code at a time. What are the best practices for building, let's say, "react" applications so I don't get myself way outside the tool's capabilities?
Any advice would be great.

I have auto mode working on projects with millions of lines of code. Codebase size doesn't matter; you just need practice "guiding" the LLMs.


I've always heard to keep each project file under about 800 lines. That is interesting. Do you have any recommended reading, .cursorrules examples, or other helpful materials for working better with Cursor? Thank you!

Copy and paste the sections you need to work on from big files into the chat.

I max out my core logic files at 800 lines, and I have lots of callable utilities that are 200–300 lines.

I installed MSYS2 to get access to a bunch of Unix utilities that make the agent more effective on a Windows platform.

Legacy codebases are huge… Don't build yours that way if you don't have to; create reusable code modules if you can.
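
For example, here is a minimal TypeScript sketch of what a "reusable module" can look like in a React app like the one the OP describes: a data-fetching hook pulled out of a big screen component into its own small file. All names (Contract, useContracts, the API path) are hypothetical.

```tsx
// useContracts.ts — hypothetical hook extracted from an oversized screen component
import { useEffect, useState } from "react";

export interface Contract {
  id: string;
  title: string;
  approved: boolean;
}

export function useContracts(apiUrl: string) {
  const [contracts, setContracts] = useState<Contract[]>([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    // Fetch once per URL; error handling omitted for brevity
    fetch(apiUrl)
      .then((res) => res.json())
      .then((data: Contract[]) => setContracts(data))
      .finally(() => setLoading(false));
  }, [apiUrl]);

  return { contracts, loading };
}
```

The screen component that uses this hook stays small, and the agent can edit the hook or the screen independently instead of loading one giant file into context.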

Honestly, Cursor itself is going to be the best teacher, because a lot of what will work better depends on your exact projects. But the core principle is that LLMs hallucinate and code doesn't, so use Cursor to build a bunch of small Python tools that let Cursor do things better. Also, documentation is king. Everything should be documented, to the degree that you might end up with more documentation than code.
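
That post suggests Python tools; since the OP is on a React/TypeScript stack, here is the same idea as a small Node/TypeScript sketch instead: a script that flags source files over a line budget, so you know what to split before asking the agent to refactor. The script path and the 500-line default are just illustrative.

```ts
// scripts/check-file-length.ts — hypothetical helper that flags files over a line budget
// Run with: npx tsx scripts/check-file-length.ts src 500
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// Recursively yield every JS/TS source file under a directory
function* walk(dir: string): Generator<string> {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) yield* walk(full);
    else if (/\.(ts|tsx|js|jsx)$/.test(entry)) yield full;
  }
}

const [root = "src", limitArg = "500"] = process.argv.slice(2);
const limit = Number(limitArg);

for (const file of walk(root)) {
  const lines = readFileSync(file, "utf8").split("\n").length;
  if (lines > limit) {
    console.log(`${file}: ${lines} lines (over ${limit})`);
  }
}
```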

Craft your documentation as a hierarchy of Cursor Rules .mdc files with thorough descriptions and Intelligent State selected, and any model will find its way around your codebase pretty fast. I'm at 25–30 percent documentation versus code.
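
For the OP's original .cursorrules question, here is a rough sketch of what one rule in such a hierarchy might look like (the frontmatter fields follow Cursor's project-rules format, but the globs, paths, and wording are just illustrative and should be adapted to your project):

```
---
description: Frontend conventions for the React app
globs: src/**/*.ts,src/**/*.tsx
alwaysApply: false
---

- Keep components under ~300 lines; extract hooks and utilities into their own files.
- Data fetching lives in src/hooks/, shared helpers in src/lib/.
- When a bug takes more than one attempt to fix, document the issue and the fix in docs/decisions.md.
```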

Cursor models have a great semantic search tool. Thanks to it, if a model knows how to use it, it can independently and very quickly pull context from the entire repository.

As for file length, it depends on the project. It's generally considered good practice to keep scripts under 500 lines. But I've got a PyQt GUI app that doesn't need splitting, and its logic takes up around 2200–2600 lines. I also recently messed around with a project where one of the PyTest scripts ended up being around 800 lines.

And when you combine Cursor Semantic Search with Agent Docstrings (use with caution), the long-file problem should get noticeably easier to handle.

I noticed this just this week: I was building a contract management React app and it was working beautifully, flawlessly. Then around midweek the model suddenly started being super inefficient. It could be because I'm in auto mode and it's constantly picking a less optimal or lower-quality model. To give you an example, one of my screens has an AG Grid with check marks in the first column, and all I'm trying to tell it to do is make sure I can click those check marks, and it keeps failing to resolve that. It seems like a very simple ask: check or uncheck by clicking on the checkbox. There are days where I spend literally a whole day trying to tell it a very simple thing.

For the most part I try to document things, especially when Cursor runs into major issues, so I tell it to document the issue and the fix. I also try to maintain Cursor rules. But the longer I use the model, the more the quality degrades, sometimes abruptly. There are times when I'm surprised how well it understands what I'm trying to tell it, but then simple tasks take me a day to fix. If you have any tried-and-true settings, solutions, or additional software, that would be great. Any kind of help would be great, and I can keep adding it to my knowledge base.
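
For the specific AG Grid checkbox column, it's sometimes faster to wire it up by hand than to keep prompting. Here is a minimal TypeScript sketch assuming AG Grid Community with the React wrapper; checkboxSelection / headerCheckboxSelection are the classic column options (newer AG Grid versions move this into the rowSelection grid option, and the CSS import paths vary by version, so check yours). The row shape and component names are made up.

```tsx
// ContractsGrid.tsx — hypothetical grid with a clickable checkbox column
import { AgGridReact } from "ag-grid-react";
import type { ColDef } from "ag-grid-community";
import "ag-grid-community/styles/ag-grid.css";
import "ag-grid-community/styles/ag-theme-quartz.css";

interface ContractRow {
  title: string;
  status: string;
}

const columnDefs: ColDef<ContractRow>[] = [
  // First column: selection checkboxes, with a header checkbox to select all
  { checkboxSelection: true, headerCheckboxSelection: true, width: 50 },
  { field: "title" },
  { field: "status" },
];

export function ContractsGrid({ rows }: { rows: ContractRow[] }) {
  return (
    <div className="ag-theme-quartz" style={{ height: 400 }}>
      <AgGridReact
        rowData={rows}
        columnDefs={columnDefs}
        // "multiple" lets several rows be checked; suppressRowClickSelection
        // means only the checkbox itself toggles selection, not a row click
        rowSelection="multiple"
        suppressRowClickSelection={true}
      />
    </div>
  );
}
```

If the model keeps missing on something like this, pasting the actual column definitions into chat (as suggested above) usually gets a better result than describing the screen.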