Hi Cursor team & folks! How can we tweak our codebase to help Cursor with code completion and generation? Any templates or advice for .cursorrules? Should one identify key files (like READMEs, decision records, documentation, dependency & version files, and end-user test scenarios) to enhance model performance? Can we reference them in .cursorrules, or should we duplicate their content into it?
Can you publish a simple test harness that we can experiment with?
I've seen some posts from folks with .cursorrules suggestions, but how do we know what really works? I'm not sure we have any official, vetted, or measurable results.
What size project (number of files, lines, or “tokens”) is too big for Cursor to keep in context? Would appreciate some tips here…
We work in an in-house DSL. First I ingested the documentation for the DSL so that it is available with @Docs. Then I start with an empty project folder and ask the Composer to “Create a patient management application”. I then look at the output and provide instructions in .cursorrules, written in Markdown, the way I would talk to a junior developer:
Directory Structure

- Models: `model/*.mez`
- Views: `web-app/views/*.vxml`
- Presenters: `web-app/presenters/*.mez`
- Services/APIs: `services/*.mez`
- Reports:
  - `jasper-reports/*.jrxml`
  - `builtin-reports/*.jrxml`
- SQL Scripts (PostgreSQL): `sql-scripts/*`
- Language/Translation File: `web-app/lang/en.lang`
Model Files (.mez)
Model files contain the data model definitions and use the .mez extension. They define data validators, enum types, custom objects, and relationships between objects.
Defining Objects
Objects can be defined as persistent (backed by the database) or non-persistent (in-memory only). Persistent objects have a .save() method (with no arguments) that saves a new object or updates an existing one:
```
// person.mez
persistent object person {
    string name;
    int age;
}
```
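For the rules file I also sketch how a persistent object is used. Only the .save() method itself is documented above; the instantiation and assignment syntax in this snippet is illustrative, not taken from the DSL docs:

```
// Hypothetical usage sketch for the rules file.
// Only .save() is documented; the "new person()" syntax is
// a placeholder for however the DSL actually instantiates objects.
person p = new person();
p.name = "Alice";
p.age = 42;
p.save();   // saves a new object, or updates an existing one
```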