Cursor 1.2 impressed me

Hi,
Use this after you have changed it to suit your needs.
Save the content below as a `.mdc` file under `/your-project/.cursor/rules/`.


---
description:
globs:
alwaysApply: true
---

## !EPIPE (Execution Instruction Set)

Pipeline Steps:

#### 0. Communication

* Focus on the **core ideas**.

* Use **short phrases**, not full sentences.

* **Avoid redundancy**.

* Keep it **sharp and structured**.

* Respond like an **analyst preparing briefing notes**.

* Follow SOLID, DRY & SRP principles.

---

#### 1. Sanity Check

* Simplify the task/question.

* Validate logic and coherence

* Detect contradictions, bias, paradoxes → pause, request clarification

* Break down complex requests → smaller sub-tasks

* **Default to Minimum Viable Product (MVP): simplicity is non-negotiable**

---

#### 2. FOLLOW IN STRICT ORDER.

1. Analyze request → extract *clear, minimal* requirements

2. Research options → favor *low-complexity, high-impact* paths

3. Develop concise solution plan → **MVP > full feature set**, avoid overengineering

4. Save plan → `./docs` (DOC-FILE) →

* File name **must** begin with:

   ```bash
   date +"%Y%m%d_%H%M%S"
   ```

   Example: `20250626_172333_plan.md`

5. Save tasks → `./tasks` (TASK-FILE) →

* File name **must** begin with same format:

   ```bash
   date +"%Y%m%d_%H%M%S"
   ```

   Example: `20250626_172333_task.md`

* **Task files must contain**: full path to associated plan file

* **Plan files must contain**: full path to associated task file

* Both paths must be at top of file for easy cross-referencing

6. **Implement with Integration-First Testing** → Tests must catch real bugs, not just pass

7. **Tests FAIL when code is broken** → Use real database, real functions, real flows

8. Check off completed tasks → TASK-FILE

9. All tests pass → accept

10. **Before marking DONE → re-run all relevant test suites (Python + TS) → must pass**

11. Use MCP tools when needed

12. Update DOC-FILE → list created/modified files

13. Repeat if scope grows, but **never violate MVP-first rule**
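Steps 4 and 5 above (timestamped, cross-referenced plan/task file pairs) could be scripted. A minimal Python sketch; the `create_plan_and_task` helper and the `slug` argument are illustrative, not part of the rule:

```python
from datetime import datetime
from pathlib import Path

def create_plan_and_task(slug: str, root: Path = Path(".")) -> tuple[Path, Path]:
    """Create a cross-referenced plan/task pair sharing one timestamp prefix."""
    # Same format as the shell command: date +"%Y%m%d_%H%M%S"
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    plan = root / "docs" / f"{stamp}_{slug}_plan.md"
    task = root / "tasks" / f"{stamp}_{slug}_task.md"
    plan.parent.mkdir(parents=True, exist_ok=True)
    task.parent.mkdir(parents=True, exist_ok=True)
    # Each file starts with the full path of its counterpart (easy cross-referencing).
    plan.write_text(f"Task file: {task.resolve()}\n\n# Plan\n")
    task.write_text(f"Plan file: {plan.resolve()}\n\n# Tasks\n- [ ] ...\n")
    return plan, task
```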

---

#### 3. MCP Tool Usage

* Use MCP tools as needed, never prematurely

* MCP use must improve speed, clarity, or test coverage

---

#### 4. Function Size Rule

* Functions <50 lines → must be focused and testable

* One job per function → no blending of concerns
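A hypothetical illustration of this rule: three small, separately testable functions instead of one parse-compute-format blob (the function names and the discount example are invented):

```python
def parse_price(raw: str) -> float:
    """Parsing only: no I/O, no business rules, no formatting."""
    return float(raw.strip().lstrip("$"))

def apply_discount(price: float, percent: float) -> float:
    """Business rule only."""
    return round(price * (1 - percent / 100), 2)

def format_price(price: float) -> str:
    """Presentation only."""
    return f"${price:.2f}"
```

Each function is trivially unit-testable in isolation, and none mixes concerns.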

---

#### 5. File Size Rule

* Files <500 lines → modularize aggressively

* Extract logic → utils, features, modules

* **Smaller files > clever abstractions**

---

#### 6. Abstraction & Centralization Rules

* DRY > WET → refactor duplication *only if repeated 2+ times*

* Centralize:

  * Config
  * Constants
  * Logging
  * Errors
  * Data access

* Use inversion of control → dependency injection preferred

* Interface-driven design → enables modular swapping

* Favor composition > inheritance

* Abstract APIs/services → use adapters/clients/gateways

* Keep decisions **reversible** → avoid early commitments
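Interface-driven design, injection, and adapters combined, as a hedged Python sketch. The `GraphGateway` / `Neo4jGateway` / `InMemoryGateway` names are invented for illustration; Neo4j appears only because the test strategy section mentions it:

```python
from typing import Protocol

class GraphGateway(Protocol):
    """Interface-driven design: business code depends on this abstraction."""
    def run(self, query: str) -> list[dict]: ...

class Neo4jGateway:
    """Adapter around a real driver; the driver is injected, not constructed here."""
    def __init__(self, driver) -> None:
        self._driver = driver

    def run(self, query: str) -> list[dict]:
        with self._driver.session() as session:
            return [dict(record) for record in session.run(query)]

class InMemoryGateway:
    """Swappable stand-in: the interface keeps the storage decision reversible."""
    def __init__(self, rows: list[dict]) -> None:
        self._rows = rows

    def run(self, query: str) -> list[dict]:
        return list(self._rows)

def count_users(gateway: GraphGateway) -> int:
    # Business logic sees only the interface, never a concrete driver.
    return len(gateway.run("MATCH (u:User) RETURN u"))
```

Swapping `Neo4jGateway` for `InMemoryGateway` (or any future backend) requires no change to `count_users`.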

---

#### :test_tube: 7. Test Strategy (Python + TypeScript)

**CRITICAL**: Tests must catch real bugs, not just provide green checkmarks

1. **Integration Tests First**: Test real flows with real database objects

2. **Mock Only External Services**: APIs, file system, network calls - NOT internal functions

3. **Real Database Over Mocks**: Use neo4j-test container instead of mocking Neo4j operations

4. **Test Real Error Conditions**: Use actual failed database calls, not mocked exceptions

5. **Split by concern, not size**: error-handling, happy-path, edge-cases

6. **Location**:

* TypeScript → `/code/app_v2/static/src/__tests__/`

* Python → `app_v2/tests/` or `tests/`

7. **Watcher**: Test runner/watch mode must always be active

8. **Quality over Coverage**:

* **A test that doesn't catch real bugs is worthless**

* **If your test wouldn't have caught the last 3 bugs you fixed, rewrite it**

* No silent test regressions: test coverage must not shrink

9. **Python Test Environment Rule**:

* Always run tests with test settings:

  ```bash
  DJANGO_SETTINGS_MODULE=FITS.settings._test_settings python -m pytest path/to/test.py::TestClass::test_method -v
  ```

* Never run tests against live config or production DB

* Run E2E tests:

  ```bash
  DJANGO_SETTINGS_MODULE=FITS.settings._e2e_settings python ./e2e/run_tests.py allx
  ```

10. **Use `--bail` for `.ts` tests; use `-x` for `.py` tests**

11. **Test Database > Mocking**: Prefer real objects in test DB over complex mocking

12. **Red-Green-Refactor**: Write failing test → Make it pass with real code → Refactor

13. **Test the Integration Points**: Database calls, API calls, cross-module dependencies

14. **After a successful !EPIPE cycle**: If you gained important insight that may help future development, you should update relevant rules.
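The "mock only external services" rule above might look like this in practice. A sketch under stated assumptions: `fetch_rate` and `convert` are made-up names standing in for a real external boundary and real internal logic:

```python
from unittest.mock import Mock

def fetch_rate(currency: str) -> float:
    """External boundary (network call): the ONLY thing tests may replace."""
    raise ConnectionError("no network in tests")

def convert(amount: float, currency: str, fetch=fetch_rate) -> float:
    """Real internal logic: exercised for real, never mocked."""
    return round(amount * fetch(currency), 2)

def test_convert_with_mocked_boundary() -> None:
    fake_rate = Mock(return_value=1.1)            # mock ONLY the external call
    assert convert(100.0, "EUR", fetch=fake_rate) == 110.0
    fake_rate.assert_called_once_with("EUR")      # convert itself ran for real

def test_convert_surfaces_real_errors() -> None:
    # Real error condition: the unmocked boundary genuinely fails.
    try:
        convert(100.0, "EUR")
        raise AssertionError("expected ConnectionError")
    except ConnectionError:
        pass
```

A test like the first one would fail the moment `convert`'s real logic broke, which is the point: the internal function is never replaced, only the boundary is.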

### The web server is always running on port 80. No need to spawn a new one.

### `npx webpack --config webpack.dev.js --watch` is always running in the background

There is a small trigger in the rule above: you must initiate your chat with `!EPIPE`, like:

!EPIPE Do this and this …

You can change the trigger as you like.

