Hey everyone,
I’ve been trying to systematically reduce our team’s Cursor costs and build internal guidelines for efficient usage — but I’m stuck with one issue I just can’t optimize away.
What’s happening:
Even after doing everything right:
- Added both `.cursorignore` and `.cursorindexingignore` to exclude `node_modules`, `dist`, `public`, analytics, UI folders, etc. (sample below).
- Kept only relevant files indexed (about 785 files total).
- Scoped edits using precise `@file` references.
- Used Plan Mode and Rules files to control context.
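For reference, here's roughly what our ignore files look like. The paths below are illustrative (adapt them to your repo), and as far as I know both files follow `.gitignore` syntax:

```gitignore
# .cursorignore / .cursorindexingignore (illustrative paths from our setup)
node_modules/
dist/
public/

# analytics and UI asset folders the Agent never needs to read
src/analytics/
src/ui/assets/

# generated artifacts add tokens without adding signal
package-lock.json
*.min.js
*.map
```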
Still, even for a very small change (1–2 functions in 1 file), I’m seeing 141,000+ cache reads — and my $20 subscription burns out in barely a week.
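To show where that conclusion comes from, here's my back-of-envelope math. Big caveats: I'm assuming the 141k figure is cache-read tokens, that Cursor bills them at something like Anthropic's Sonnet-class cache-read rate (~$0.30 per million tokens), and that my daily request count is a rough guess:

```typescript
// Rough cost model for cache reads alone.
// ASSUMPTIONS: ~$0.30 per million cache-read tokens (Sonnet-class rate;
// Cursor's actual billing may differ), ~50 agent requests on an active day.
const CACHE_READ_USD_PER_MTOK = 0.3;
const tokensPerRequest = 141_000;
const requestsPerDay = 50;
const workingDaysPerMonth = 22;

const usdPerRequest = (tokensPerRequest / 1_000_000) * CACHE_READ_USD_PER_MTOK;
const usdPerMonth = usdPerRequest * requestsPerDay * workingDaysPerMonth;

console.log(usdPerRequest.toFixed(3)); // ≈ $0.042 per request
console.log(usdPerMonth.toFixed(2));   // ≈ $46.53 per month, cache reads only
```

Even if those assumptions are off by a factor of two, cache reads alone would blow past a $20 monthly allowance before counting ordinary input/output tokens, which matches the one-week burn rate we're seeing.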
My concerns / questions:
- Why are cache reads so high even when indexing and context are limited?
- Is Cursor re-processing large chunks of the repo every time a file is touched?
- Would disabling indexing altogether reduce these reads, or would that break the Agent's performance?
- Any way to cap or monitor which files are causing these cache hits?
- Are there internal flags/settings to tweak how frequently embeddings refresh?
Context:
I’m preparing team guidelines on using Cursor efficiently — especially around cost management and responsible AI use.
So I’d love some clear direction or best practices that actually help control cache reads without losing core functionality.
If the team can share any technical explanation or workflow insight, it’ll really help us set standards internally and get the most out of our paid plan.
Thanks in advance!
— Shruti
Software Engineer (MERN Stack)