An Idiot's Guide To Bigger Projects

My .cursorrules have proven highly effective, particularly for inline code documentation. I use them to define my primary development objectives and to organize specific technical rules and reminders under key categories, as follows:

CODE ARCHITECTURE & DESIGN:
ERROR HANDLING & SAFETY:
CONFIGURATION & INTERFACES:
PERFORMANCE & OPTIMIZATION:
CODE QUALITY:
INLINE DOCUMENTATION:

The rules are very concise yet comprehensive: a total of 397 words across 35 lines (2,749 characters). This compact, well-structured format makes them easy for LLMs to interpret and follow.
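To make the shape concrete, here is a sketch of what such a file can look like. The category headings are the ones listed above; the rule lines are invented examples for illustration, not the actual rules:

```
CODE ARCHITECTURE & DESIGN:
- Keep each module single-purpose; prefer composition over inheritance.

ERROR HANDLING & SAFETY:
- Validate all external input at module boundaries; fail fast with a clear message.

INLINE DOCUMENTATION:
- Every public function gets a one-line comment stating its purpose and any units.
```

Short, declarative lines like these are easy for the model to follow and cheap in context.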

For broader context, I also periodically create .md documentation files with the LLM’s assistance. These files integrate:

  • Inline code documentation
  • Detailed module documentation summaries (from module file headers)
  • Overall system architecture

The LLM maintains and updates module documentation automatically whenever files change.
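As an illustration of the kind of module file header this works from (the module name and fields below are hypothetical, not a fixed format):

```
## Module: payment_router (hypothetical example)
Purpose: route incoming payment events to the correct processor.
Public interface: route_event(), register_processor()
Depends on: config/processors, utils/logging
Status: stable; retry logic last refactored by the composer
```

Because each header is self-contained, the aggregate .md file can be rebuilt by simply concatenating them.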

This approach is particularly effective because:

  1. Components are kept manageable in size
  2. Documentation is consistently maintained
  3. The aggregate file can be extracted and fed back to the LLM for project-wide understanding

With Cursor version 44.8, this systematic .cursorrules approach has improved significantly, especially regarding iterative debugging in composer. The combination of .cursorrules and automated documentation maintenance has created a robust, self-documenting development environment.

I do my best not to let any one file become too large. My current project code statistics are:

=== Code Analysis ===
Ran analysis on Fri Dec 27 16:26:50 PST 2024

=== Project Summary ===

Directory Statistics:
Core Logic: 6030 lines in 15 files (avg: 402.00 lines/file)
Utils: 4114 lines in 20 files (avg: 205.70 lines/file)
Config: 2355 lines in 13 files (avg: 181.15 lines/file)

Total Project: 12499 lines in 48 files (avg: 260.39 lines/file)
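For anyone who wants similar numbers, a summary like this can be produced with a short shell script. This is a sketch: the directory names and the `*.py` glob are assumptions, so adjust them for your own tree:

```shell
#!/bin/sh
# Sketch: print per-directory line counts in the format above.
# Directory names ("core", "utils", "config") and the *.py glob are
# placeholders; pass your own directories as arguments.
count_lines() {
  for dir in "$@"; do
    [ -d "$dir" ] || continue
    # Number of matching source files in this directory tree.
    files=$(find "$dir" -type f -name '*.py' | wc -l | tr -d ' ')
    [ "$files" -gt 0 ] || continue
    # Total lines across all matching files.
    lines=$(find "$dir" -type f -name '*.py' -exec cat {} + | wc -l | tr -d ' ')
    avg=$(awk "BEGIN { printf \"%.2f\", $lines / $files }")
    printf '%s: %s lines in %s files (avg: %s lines/file)\n' \
      "$dir" "$lines" "$files" "$avg"
  done
}

count_lines core utils config
```

Wrapping the logic in a function keeps the glob and directory list in one place; run it from the project root.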

2 Likes

That’s great to know, thanks for sharing! Your approach sounds very close to mine, and it sounds like you’re getting similarly good results, with the obvious difference that you’re putting your rules into .cursorrules.

As I’d mentioned previously, my experience of it goes back to much earlier versions where I don’t think the rules file was being used effectively. It sounds like that’s turned around of late, so that’s really useful to know. Last I checked the docs didn’t really mention using it with Composer, so that’s maybe a potential area for improvement if it hasn’t been updated very recently.

I’d still be very interested to hear from Team Cursor whether there’s anything different about using the rules file vs. notepads or context-included doc files. Does it do anything magical with that content (aside from adding it wholesale to the context)? That would be really helpful to understand.

4 Likes

Just so I understand correctly, are you saying you intentionally keep a smaller/denser/more concise .cursorrules and then keep additional .md files with additional information?

So, would a scenario then be that when you’re writing tests, you’d automatically have the .cursorrules pulled in, and you’d then reference a specific testing .md file that has additional rules?

2 Likes

Yes, that helps to develop a good foundation. The key is to build as if you have to hand your project over to a new composer at a point down the road where you least expect it: a point where you’re so deep in the middle of a dependency correction that it’s impossible to ever get the next composer on track, until it’s too late, the composer boots you, and then you break your laptop lol
Here are my cursor rules, which may help someone, and in the attachment I have my docs .md files, which I try to have created before anything is built. The less code you ask for, the better it understands your goal; efficiency and minimal coding are the best things to design around. As you’ll see in my docs files, the redundancy still takes place lol

## Never, for any reason, use placeholders, dummy links, or dummy keys. Anything that needs user input should be requested from the user before moving on from that section.

Never assume the user is more informed about the structure or software available to create the project. Define Success Criteria: establish measurable outcomes and clear benchmarks for completion. This ensures both parties have a mutual understanding of what constitutes a successful result.

Nothing will be created until you have brainstormed with the user about where you see better implementations to interject. You are being given control, and that means it’s OK to tell the user what they’re doing wrong. Present Multiple Options: offer different implementation paths where possible, with pros and cons for each. This allows the user to make informed decisions.

Once the brainstorming has been vetted and fully agreed upon, the path will be created and you will have the user OK it.

MOST IMPORTANTLY, you are going to create full “hand-off” documentation file(s). This helps track progress and revert to previous states if needed. Comprehensive Metadata: include all relevant details, such as:

  • Programming language(s) used
  • File names and paths
  • Dependencies and libraries
  • Software versions and configurations
  • Account credentials (stored securely and separately, if applicable)

Clear Milestone Descriptions: document each milestone with a summary of the purpose, changes made, and next steps.

Code and Implementation Quality
Modularity: ensure code is broken into reusable, modular components. This facilitates easier updates and maintenance.

Audit Trail: maintain a log of all significant decisions and changes made during the project for transparency and accountability.

Self-Checkpoints
After completing each major task, the LLM should perform a self-review to verify that the outcome aligns with the defined goals.

Coding files should be well compartmentalized and sectioned so that it is easy to recall where a certain snippet is placed.

5 Likes

thanks

2 Likes

@three

I think they’re both treated the same at the end of the day, with the exception of Cursor “baking” rules into responses based on their specific glob patterns or descriptions, and of being able to set them globally.

Otherwise, I’ve found that agents will still pull rules and best practices from other files within the codebase if they’re written or commented in a way that makes it clear they’re for AI usage.

Great points about TDD in the article btw

I agree that the more complex the required code is, the more important it is to write tests first.

It’s so much easier to make code adhere to a test than it is to write a test for broken code while ensuring you’re not making any wrong assumptions in your test based off that broken code.

I was able to get so much more mileage out of AI when having it write tests first, then code and iterate until completion.


@ryanoZphoto

Having tried something similar, I found it only increased token usage and overhead on top of the canonical checkpoint system. I’ve had much better results by providing a short rules file about good git hygiene and having the agent use git instead.
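As a sketch of what such a rules file can contain (my wording, for illustration, not a canonical set):

```
GIT HYGIENE:
- Commit after every working change, with a one-line imperative message.
- Never commit secrets or generated artifacts; respect .gitignore.
- Before a risky refactor, create a branch so changes can be reverted cleanly.
- Prefer small, reviewable diffs over sweeping multi-file rewrites.
```

The point is that git already provides the checkpointing, so the rules only need to tell the agent when to use it.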

Same here: more token overhead. While it can help to keep things on track vs. expanding context indefinitely, I’d say you’re better off having an initial set of rules that laser-focuses the model on tasks and prevents the need for this in the first place.

The other points, especially the ones about modularity, are very good advice.

1 Like