Rules. They are fundamentally critical to getting Cursor Agent and an LLM to build good quality code.
Further, rules shouldn't just be a few lines, a little blurb here or there. Rules need to be comprehensive, nuanced, and detailed, because otherwise you end up with LOOPHOLES all over the place. I started with simple rules, but very quickly my rules became like the example below, after a few cycles of having the Agent itself check my rules for loopholes and close them.
This rule has been iterated on three times, I think. It was originally about 25 lines. It's been modified by the agent itself (mostly Sonnet, and now Grok Code) each time I investigate for loopholes (which is usually right after the LLM seems to find one and use it!)
With rules like this, I have just about totally corralled the agent with respect to the topic of each rule. In this case, unit testing. It covers not just that you should test, but exactly how: exactly how to run CLI commands to perform testing, examples of good usage vs. bad, and MANDATORY requirements (these become critical when you really want the agent to DO EXACTLY what you require it to do!)
RULES. They are not a passing thought. They aren't simple or lightweight. They are the infrastructure that GOVERNS the agent and LLM! Rules will generally come out of you running into the agent doing things you do not like, and over time, as you work on keeping the agent from doing more and more of those things, your rules WILL, and SHOULD, grow beyond a simple 3-line blurb. With rule files like these, I no longer have issues with unit testing, integration testing, or design for things like the data service layer, business logic layer, API service layer, API controller layer, support service types, etc. I am still working on code style and such, as those are often so extensive.
KEY FACT: The agent and LLM don't really know anything! Not about what YOU WANT. They know the fundamentals. They do not know any specifics. UNLESS YOU TELL THEM! Rules tell the agent and LLM WHAT YOU WANT. WHAT YOU EXPECT. WHAT YOU REQUIRE. WHAT YOUR EXPECTATIONS ARE.
Tell the agent! RULES!
KEY FACT: The agent and LLM do NOT have any actual intelligence! An LLM is a highly advanced knowledge base with human-like interactive capabilities. They possess no intelligence! BYOI: Bring Your Own Intelligence! Together the agent and LLM can behave semi-intelligently, but they cannot really behave exactly correctly without YOUR INSTRUCTION. BYOI, and deliver the intelligence the LLM lacks.
RULES. They rule!
NOTE: The .mdc file below the next line is a SINGLE rule file!
=== unit-testing.mdc ===
---
description: Comprehensive unit testing guidelines covering test creation, execution, fixing, and best practices
globs: *.test.ts,*.test.js,*.spec.ts,*.spec.js,*.integration.spec.ts,*.integration.spec.js
alwaysApply: false
---
# Comprehensive Unit Testing Guidelines

## Activation Triggers (Apply Intelligently)

The assistant should automatically apply this rule when working with tests. Triggers (case-insensitive):
- Keywords: `test`, `spec`, `unit test`, `integration test`, `mock`, `stub`, `jest`
- Test files: `*.spec.ts`, `*.test.ts`, `*.integration.spec.ts`, `*.timing.spec.ts`
- Phrases: `run tests`, `fix tests`, `create tests`, `write tests`, `test coverage`
- Actions: testing, mocking, stubbing, test failure, test broken
## Targeted Test Execution (MANDATORY)

CRITICAL: Use direct Jest commands for targeted testing. NEVER use `npm test` for specific-file testing.

### Critical Parameter Warning

NEVER USE THESE INCORRECT PARAMETERS:

```bash
# WRONG - Will run full test suite (1300+ tests)
npx jest --testPathPatterns="pattern"   # ❌ Plural form is INVALID
npx jest --testNamePatterns="pattern"   # ❌ Plural form is INVALID
npx jest --testRegexs="pattern"         # ❌ Plural form is INVALID
```

ALWAYS USE THESE CORRECT PARAMETERS:

```bash
# CORRECT - Will run targeted tests
npx jest --testPathPattern="pattern"    # ✅ Singular form is CORRECT
npx jest --testNamePattern="pattern"    # ✅ Singular form is CORRECT
npx jest --testRegex="pattern"          # ✅ Singular form is CORRECT
```

VALIDATION CHECK: If your Jest command runs more than ~50 tests when you expect to run 1-5 tests, you are using INCORRECT parameters.
### Parameter Validation Methods

Before executing any Jest command, VERIFY:

1. Check parameter spelling:

   ```bash
   # Quick validation - list tests that would run (doesn't execute tests)
   npx jest --listTests --testPathPattern="asset-data.service"
   ```

2. Expected test count verification:
   - Single file: should show 1 file
   - Module directory: should show 2-10 files typically
   - If you see 50+ files, you have the wrong parameter

3. Use shell aliases to prevent mistakes:

   ```bash
   # Load safe aliases (prevents typos)
   source .jest-aliases.sh

   # Use safe aliases instead of typing parameters
   jest-pattern "asset-data"   # Safe: uses --testPathPattern
   jest-unit                   # Safe: correct unit test pattern
   ```
### Why Direct Jest Commands

The `npm test` script often has pre-configured filters that conflict with additional patterns:

```json
"test": "jest --testPathIgnorePatterns='.*\\.timing\\.spec\\.ts$' --testPathIgnorePatterns='.*\\.integration\\.spec\\.ts$'"
```

Direct Jest commands provide precise control and prevent running thousands of tests unnecessarily.
### Targeted Execution Commands

**Single File Testing:**

```bash
# Test specific file
npx jest src/domains/entity/entity-data.service.spec.ts

# Test with verbose output
npx jest src/domains/entity/entity-data.service.spec.ts --verbose

# Test with coverage for specific file
npx jest src/domains/entity/entity-data.service.spec.ts --coverage --collectCoverageFrom="src/domains/entity/entity-data.service.ts"
```

**Multiple File Testing:**

```bash
# Test multiple specific files
npx jest src/domains/entity/entity-data.service.spec.ts src/domains/entity/entity-orchestration.service.spec.ts

# Test entire directory
npx jest src/domains/entity/

# Test by pattern
npx jest --testPathPattern="assets.*\.spec\.ts$"
```

**Test Suite Targeting:**

```bash
# Test specific describe block
npx jest --testNamePattern="EntityDataService"

# Test specific functionality across files
npx jest --testNamePattern="soft deletion"

# Combine file and test name patterns
npx jest src/domains/assets/ --testNamePattern="createOrUpdate"
```

**Test Type Filtering:**

```bash
# Unit tests only (exclude integration/timing)
npx jest --testPathIgnorePatterns=".*\.integration\.spec\.ts$" --testPathIgnorePatterns=".*\.timing\.spec\.ts$"

# Integration tests only
npx jest --testPathPattern=".*\.integration\.spec\.ts$"

# Timing tests only
npx jest --testPathPattern=".*\.timing\.spec\.ts$"
```
## When to Use Full Test Suite

ONLY use `npm test` in these scenarios:

- Final validation before completing work
- Explicitly requested by user
- CI/CD pipeline execution
- Regression testing after major changes

NEVER use `npm test` for:

- Iterative development testing
- Single file validation
- Debugging specific test failures
- Performance-sensitive testing workflows
## Test Creation Best Practices

### Test Structure and Organization

**File Naming Conventions:**

```text
# Unit tests
component.spec.ts
service.spec.ts

# Integration tests
component-api.integration.spec.ts
service.integration.spec.ts

# Timing/Performance tests
component.timing.spec.ts
```

**Test Structure Pattern:**

```typescript
describe("ComponentName", () => {
  let component: ComponentName;
  let mockDependency: jest.Mocked<DependencyType>;

  beforeEach(() => {
    // Setup for each test
  });

  afterEach(() => {
    // Cleanup after each test
    jest.clearAllMocks();
  });

  describe("methodName", () => {
    it("should handle normal case", () => {
      // Arrange
      // Act
      // Assert
    });

    it("should handle edge case", () => {
      // Test edge cases
    });

    it("should handle error case", () => {
      // Test error scenarios
    });
  });
});
```
### Test Naming Conventions

**Descriptive Test Names:**

```typescript
// ❌ Poor naming
it("should work", () => {});
it("test create", () => {});

// ✅ Good naming
it("should create user with valid email and password", () => {});
it("should throw ValidationError when email is invalid", () => {});
it("should return empty array when no results found", () => {});
```

**Test Categories:**

- **Happy Path**: Normal successful operations
- **Edge Cases**: Boundary conditions, empty inputs, limits
- **Error Cases**: Invalid inputs, system failures, exceptions
- **Integration**: Cross-component interactions
### Mocking and Stubbing Guidelines

**Mock External Dependencies:**

```typescript
// ❌ Don't test external systems
it("should call real database", async () => {
  const result = await realDatabaseService.findUser(id);
  expect(result).toBeDefined();
});

// ✅ Mock external dependencies
it("should handle database user lookup", async () => {
  const mockUser = { id: "123", name: "Test User" };
  mockDatabaseService.findUser.mockResolvedValue(mockUser);

  const result = await userService.getUser("123");

  expect(mockDatabaseService.findUser).toHaveBeenCalledWith("123");
  expect(result).toEqual(mockUser);
});
```

**Sparse Object Pattern (Approved):**

```typescript
// ✅ Use sparse objects for testing - this is explicitly allowed
const mockUser = {
  id: "123",
  email: "[email protected]",
  // Only include properties needed for test
} as unknown as User;
```

**Mock Creation Patterns:**

```typescript
// Service mocking
const mockUserService = {
  findById: jest.fn(),
  create: jest.fn(),
  update: jest.fn(),
  delete: jest.fn(),
} as jest.Mocked<Partial<UserService>>;

// Class mocking with jest (sparse, so cast through unknown)
const mockPrismaService = {
  user: {
    findUnique: jest.fn(),
    create: jest.fn(),
    update: jest.fn(),
    delete: jest.fn(),
  },
  $transaction: jest.fn(),
} as unknown as jest.Mocked<PrismaService>;
```
## Test Fixing Best Practices

### Root Cause Analysis (MANDATORY)

Before making ANY changes:

- **Read the full error message** - understand what's actually failing
- **Identify the source** - is it test code, production code, or environment?
- **Understand the intent** - what was the test supposed to verify?
- **Check for side effects** - are other tests affected?

**Investigation Steps:**

```bash
# Run failing test in isolation with verbose output
npx jest path/to/failing.spec.ts --verbose

# Run with debugging information
npx jest path/to/failing.spec.ts --no-cache --detectOpenHandles

# Check if it's environment-related
npx jest path/to/failing.spec.ts --runInBand
```
### Common Test Failure Patterns

**Async/Promise Issues:**

```typescript
// ❌ Missing await
it("should handle async operation", () => {
  service.asyncMethod(); // Missing await - the promise is fired and forgotten
  expect(result).toBeDefined(); // Will fail - result was never assigned
});

// ✅ Proper async handling
it("should handle async operation", async () => {
  const result = await service.asyncMethod();
  expect(result).toBeDefined();
});
```
**Mock State Pollution:**

```typescript
// ❌ Mocks not cleared between tests
describe("UserService", () => {
  it("should create user", () => {
    mockDatabase.create.mockResolvedValue(user);
    // Test runs, mock has call count = 1
  });

  it("should find user", () => {
    // This test might fail if it expects mock to be clean
    expect(mockDatabase.create).not.toHaveBeenCalled(); // Fails!
  });
});

// ✅ Proper mock cleanup
describe("UserService", () => {
  afterEach(() => {
    jest.clearAllMocks(); // Clean state between tests
  });

  // Tests now run independently
});
```

**Type Issues:**

```typescript
// ❌ Don't use | null
interface TestUser {
  id: string;
  name: string | null; // Avoid this
}

// ✅ Use optional properties or | undefined
interface TestUser {
  id: string;
  name?: string; // Preferred
  email: string | undefined; // If absolutely necessary
}
```
### Test Determinism Requirements

Tests MUST be deterministic:

- Same input → Same output (always)
- No dependency on external state
- No reliance on system time, random values, or network
- No shared mutable state between tests

**Making Tests Deterministic:**

```typescript
// ❌ Non-deterministic
it("should create user with current timestamp", () => {
  const user = createUser(); // Uses Date.now() internally
  expect(user.createdAt).toBe(Date.now()); // Will fail due to timing
});

// ✅ Deterministic
it("should create user with specified timestamp", () => {
  const fixedDate = new Date("2023-01-01T00:00:00Z");
  jest.useFakeTimers().setSystemTime(fixedDate);

  const user = createUser();
  expect(user.createdAt).toBe(fixedDate.getTime());

  jest.useRealTimers();
});
```
## Test Validation Process

### Individual Test File Validation (MANDATORY)

After making changes to any test file:

1. Run the specific test file:

   ```bash
   npx jest src/path/to/modified.spec.ts
   ```

2. Verify the build succeeds for the test:

   ```bash
   # Check TypeScript compilation
   npx tsc --noEmit src/path/to/modified.spec.ts
   ```

3. Run with coverage to verify completeness:

   ```bash
   npx jest src/path/to/modified.spec.ts --coverage --collectCoverageFrom="src/path/to/production-file.ts"
   ```

### Multi-File Validation

When changes affect multiple files:

1. Run affected test files together:

   ```bash
   npx jest src/domains/users/ --coverage
   ```

2. Check for test interaction issues:

   ```bash
   npx jest src/domains/users/ --runInBand --detectOpenHandles
   ```

### Full Suite Validation (Final Step Only)

Only after ALL individual tests pass:

```bash
npm test
```
## Performance and Efficiency

### Fast Test Execution

**Optimize test setup:**

```typescript
// ❌ Expensive setup in each test
beforeEach(() => {
  database = new TestDatabase(); // Heavy operation
  database.migrate();
});

// ✅ Reuse expensive setup
beforeAll(() => {
  database = new TestDatabase();
  database.migrate();
});

beforeEach(() => {
  database.clearData(); // Light operation
});
```

**Avoid unnecessary async operations:**

```typescript
// ❌ Unnecessary async
it("should validate input", async () => {
  const result = validateEmail("[email protected]"); // Sync function
  expect(result).toBe(true);
});

// ✅ Keep it synchronous
it("should validate input", () => {
  const result = validateEmail("[email protected]");
  expect(result).toBe(true);
});
```

### Resource Management

**Clean up resources:**

```typescript
describe("FileService", () => {
  let tempFiles: string[] = [];

  afterEach(() => {
    // Clean up created files
    tempFiles.forEach((file) => fs.unlinkSync(file));
    tempFiles = [];
  });
});
```
## Error Prevention and Recovery

### Prohibited Actions

NEVER do these:

- Comment out or delete failing tests without understanding why they fail
- Skip test validation steps
- Use the `any` type to bypass TypeScript errors in tests
- Create tests that depend on external systems (databases, APIs, file system)
- Write tests that modify global state without cleanup

### Recovery Procedures

**If tests fail after changes:**

- **Isolate the failure** - run individual test files
- **Check recent changes** - what code was modified?
- **Verify mocks** - are mocks properly configured?
- **Check async handling** - are promises properly awaited?
- **Reset environment** - clear caches, restart if needed

**If build fails for test files:**

- **Check imports** - are all imports correct?
- **Verify types** - do test objects match expected types?
- **Check mock types** - are mocks properly typed?
## Integration with Development Workflow

### During Feature Development

- Write tests first (TDD) or alongside implementation
- Run targeted tests frequently during development
- Use watch mode for active development:

  ```bash
  npx jest src/path/to/work-area/ --watch
  ```

### Before Committing

1. Run all affected tests:

   ```bash
   npx jest src/affected-areas/
   ```

2. Verify no side effects:

   ```bash
   npm test # Full suite only before commit
   ```

### Code Review Preparation

- Ensure all tests have clear, descriptive names
- Verify good coverage of happy path, edge cases, and error cases
- Check that mocks are realistic and properly configured
- Confirm tests run quickly and deterministically
## Best Practices Summary

### Test Quality Checklist

- Tests are deterministic and repeatable
- External dependencies are mocked/stubbed
- Test names clearly describe what is being tested
- Tests cover happy path, edge cases, and error scenarios
- Tests run quickly (< 100ms each for unit tests)
- No test pollution between test cases
- Proper async/await usage where needed
- TypeScript types are properly maintained

### Execution Checklist

- Use `npx jest` for targeted testing
- Individual test files validated before integration
- Full test suite run only at completion
- Build verification for modified test files
- Coverage verification for critical paths

### Maintenance Checklist

- Root cause analysis before fixing failing tests
- Obsolete tests removed (only when truly obsolete)
- Test setup and teardown properly configured
- Mock state cleaned between tests
- Resource cleanup after tests complete
## Integration with Other Rules

This rule works in conjunction with:
- API v2 implementation testing requirements
- Story implementation testing phases
- Commit message standards for test changes
- Code quality and architecture guidelines
- Domain data services testing patterns
Remember: Good tests are fast, reliable, and maintainable. They should give you confidence in your code without slowing down your development process.