CRITICAL BUG: Git diff context is sent repeatedly with every message, wasting 10-15k tokens per interaction

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

The agent mode automatically includes git_diff_from_branch_to_main in the context of every message in a conversation. The diff is neither cached nor deduplicated: it is sent in full with each user message.

Problem:
• If you have a branch with ~200-300 lines of changes, this adds 10-15k tokens to every message
• In a 10-message conversation, that’s 100-150k tokens wasted on duplicate information
• The AI already saw this diff in message #1; there is no reason to resend it unchanged in messages #2-10
• This significantly increases costs and reduces effective context window
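The arithmetic behind those bullets, as a rough sketch (the per-diff token figure is the midpoint of the 10-15k estimate above):

```python
# Back-of-envelope cost of resending an unchanged diff every turn.
# TOKENS_PER_DIFF is the midpoint of the reported 10-15k range.
TOKENS_PER_DIFF = 12_500
MESSAGES = 10

# Only the first message actually needs the diff; the other nine
# copies are pure duplication.
wasted = TOKENS_PER_DIFF * (MESSAGES - 1)
print(f"~{wasted:,} duplicate tokens over {MESSAGES} messages")
# prints "~112,500 duplicate tokens over 10 messages"
```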

Workaround:
Currently the only workaround is to work on a branch that’s in sync with main, which defeats the purpose of the feature.

Steps to Reproduce

  1. Create a feature branch with 200+ lines changed from main
  2. Open Agent mode and send any message
  3. Look at the context — you’ll see <git_diff_from_branch_to_main> with the full diff
  4. Send another message — the exact same diff is included again
  5. Repeat — every message includes the full diff

Expected Behavior

• Cache the diff and only resend if it actually changed
• Or provide a setting to disable automatic git diff inclusion: cursor.agent.includeGitDiff: false
• Or at minimum, send a hash/reference instead of full content
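The hash/reference idea in the last bullet could work roughly like this. This is a hypothetical sketch of the requested behavior; diff_attachment and the tag attributes are illustrative names, not Cursor's actual implementation:

```python
import hashlib

def diff_attachment(diff_text: str, seen_hashes: set[str]) -> str:
    """Return the full diff the first time it is seen; on later turns,
    return only a short hash reference if the diff is unchanged.
    Hypothetical sketch, not Cursor's implementation."""
    digest = hashlib.sha256(diff_text.encode()).hexdigest()[:12]
    if digest in seen_hashes:
        # Diff unchanged since a previous turn: send a tiny reference.
        return f'<git_diff_from_branch_to_main ref="{digest}" unchanged/>'
    seen_hashes.add(digest)
    # First time (or the diff changed): send the full content.
    return (f'<git_diff_from_branch_to_main hash="{digest}">\n'
            f'{diff_text}\n</git_diff_from_branch_to_main>')

seen: set[str] = set()
first = diff_attachment("diff --git a/f b/f", seen)   # full diff sent
second = diff_attachment("diff --git a/f b/f", seen)  # short reference only
```

A changed diff produces a new hash, so it is resent in full automatically; only genuinely unchanged diffs collapse to a reference.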

Operating System

macOS

Version Information

Version: 2.4.22
VSCode Version: 1.105.1
Commit: 618c607a249dd7fd2ffc662c6531143833bebd40
Date: 2026-01-26T22:51:47.692Z (3 days ago)
Build Type: Stable
Release Track: Default
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Darwin arm64 24.6.0

Does this stop you from using Cursor?

Yes - Cursor is unusable

Hey @ausmhdoash,

git_diff_from_branch_to_main should only be included when @Branch is explicitly added to a message.

Once included, the diff becomes part of that turn’s conversation history, so the model will see it in the prompt context for future turns. However, for most models, these tokens should be written to the prompt cache, making them relatively inexpensive to read again.
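For a sense of scale, assuming cached reads are billed at roughly a tenth of the normal input-token rate (a common provider discount, not a Cursor-specific figure), the repeated diff is much cheaper than the raw numbers suggest, though not free:

```python
# Billed cost of the repeated diff with and without prompt caching,
# in input-token price units. All figures are assumptions: 12,500
# tokens per diff (midpoint of the reported 10-15k range) and a 0.1x
# cached-read rate typical of major providers.
DIFF_TOKENS = 12_500
TURNS = 10
CACHED_READ_RATE = 0.1

uncached = DIFF_TOKENS * TURNS                               # 125,000 units
cached = DIFF_TOKENS * (1 + CACHED_READ_RATE * (TURNS - 1))  # 23,750 units

# Caching cuts the bill ~5x here, but the diff still occupies the
# same context-window space on every turn regardless of billing.
```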

If @Branch is somehow persisting or re-attaching automatically, that would be unexpected behavior we'd want to investigate! To dig in further, I would need a Request ID with Privacy Mode disabled.