Anthropic just announced 1M context GA at standard pricing for Opus 4.6 & Sonnet 4.6, when will Cursor reflect this?

Anthropic announced today (March 13, 2026) that the full 1M context window is now generally available for Claude Opus 4.6 and Sonnet 4.6 at standard API pricing with no long-context premium:

  • Opus 4.6: $5/$5/$25 per million tokens – same rate across the entire 1M window

  • Sonnet 4.6: $3/$3/$15 per million tokens – same rate across the entire 1M window

  • No multiplier: a 900K-token request is billed at the same per-token rate as a 9K one

  • No beta header required

  • 1M context is now included in Claude Code for Max, Team, and Enterprise users on Opus 4.6 by default
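To put rough numbers on the change: here's a quick back-of-the-envelope sketch using the $3/M Sonnet input rate from the bullets above. The old behavior (the whole request billed at ~2x once input exceeds 200k) is just my reading of Cursor's pricing-page wording, not a confirmed formula.

```python
SONNET_INPUT_PER_M = 3.00  # USD per million input tokens, per the bullets above

def old_cost(tokens: int) -> float:
    """Old Cursor behavior (my reading): ~2x rate once input exceeds 200k."""
    multiplier = 2 if tokens > 200_000 else 1
    return tokens / 1_000_000 * SONNET_INPUT_PER_M * multiplier

def new_cost(tokens: int) -> float:
    """New behavior: one flat rate across the entire 1M window."""
    return tokens / 1_000_000 * SONNET_INPUT_PER_M

for tokens in (9_000, 200_000, 900_000):
    print(f"{tokens:>8,} tokens: old ${old_cost(tokens):.2f} -> new ${new_cost(tokens):.2f}")
```

So a 900k-token input goes from roughly $5.40 to $2.70 per request under these assumptions, while small requests are unchanged.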

Blog post: https://claude.com/blog/1m-context-ga

However, the Cursor Models & Pricing page still shows “The cost is about 2x when the input exceeds 200k tokens” for both Claude 4.6 Opus and Claude 4.6 Sonnet.

A few questions:

  1. Will Cursor update its pricing to reflect Anthropic’s new standard 1M pricing (no 2x multiplier above 200k)?

  2. Will the 1M context window become the default for Opus 4.6 and Sonnet 4.6 without requiring MAX Mode?

  3. For users on Cursor’s Max/Team/Enterprise plans, does the Claude Code 1M context default apply automatically since Anthropic says it’s now included?

This is a significant cost reduction for anyone working with large codebases or long agent sessions. Would love to hear from the Cursor team on the timeline for adopting this change.

Thanks for flagging! We’ve already updated our pricing to reflect Anthropic’s new standard 1M rates, so there’s no longer a 2x multiplier above 200k.

The full 1M context window is available through Max mode, consistent with how we handle long context for other models.

Awesome! Thank you nice people at Cursor

This is great news! Now that the 2x multiplier is gone, any chance you could bump the default context limit (before needing Max mode) to something like 400–500k? Other models like GPT-5.4 already have higher default limits, and Claude models tend to hold up really well with longer contexts.

Why is MAX mode required for 1M context here? Is it just to gatekeep request-based plans from having 1M context included? Fair enough if so, just want clarification.

Hey all!

At the moment, extended context is staying behind Max Mode.

Request-based plans (Team/Enterprise) can still access 1M context; they just need to enable a different model key under Cursor Settings > Models (claude-4.6-opus-high-thinking-1m).

I know that an undocumented model key isn’t a great answer. We are working on a new model selector that should make this much easier!