Hey! Great question. Here are some proven strategies from the community:
The main approach is to split the project into phases:
- Research: explore the codebase
- Plan: design the architecture
- Execute: implement the plan
Save each phase in a separate Markdown file so you can reuse the context later.
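One minimal way to keep those phase files around is a small docs folder in the repo. This is just a sketch — the directory and file names below are illustrative assumptions, not a Cursor convention:

```shell
# Illustrative layout only: one Markdown file per phase, so a later chat
# can re-read exactly the context it needs instead of the whole history.
mkdir -p docs/phases
printf '# Research notes\n' > docs/phases/01-research.md
printf '# Architecture plan\n' > docs/phases/02-plan.md
printf '# Execution log\n' > docs/phases/03-execution.md
ls docs/phases
```

You can then point a new chat at a single phase file (e.g. the plan) rather than pasting the full earlier conversation.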
Built-in Cursor tools:
- /summarize: force a summary of long chats (Summarization | Cursor Docs)
- Checkpoints: automatic snapshots of changes for rollback
- Rules (.cursor/rules): reusable instructions for project standards (Rules | Cursor Docs)
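A rule is just a small file under .cursor/rules. A minimal sketch (the description, glob, and body here are made-up placeholders — check the Rules docs for the exact frontmatter fields your Cursor version supports):

```markdown
---
description: Project TypeScript conventions (placeholder example)
globs: "*.ts"
alwaysApply: false
---
- Prefer named exports over default exports.
- All public functions need JSDoc comments.
```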
Model choice:
Main recommendation: reserve Opus 4.5 for critical tasks like documentation and deep refactors. For day-to-day work, use GPT-5.2 XHigh or GPT-5.1 Codex Max XHigh.
MCP tools:
- To reduce token costs via caching, see the forum post "High AI API Costs? Try this MCP Project Standards Tool!" (v5.0.0 claims up to 90% cost savings via context caching)
- MCP docs: Model Context Protocol (MCP) | Cursor Docs
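MCP servers are registered in a JSON config (project-level .cursor/mcp.json). A minimal sketch — the server name and package below are placeholders, not the actual tool from the post above:

```json
{
  "mcpServers": {
    "project-standards": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"]
    }
  }
}
```

See the MCP docs linked above for the full config schema and transport options.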