Opus 4.5 Max doesn't appear to do anything different

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

There’s no difference using Opus 4.5 Max or just regular Opus 4.5.

Steps to Reproduce

Use Opus 4.5 Max.

Expected Behavior

Increased context window.

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.1.50
VSCode Version: 1.105.1
Commit: 56f0a83df8e9eb48585fcc4858a9440db4cc7770
Date: 2025-12-06T23:39:52.834Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Darwin arm64 25.1.0

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey, thanks for the report. Max Mode is mainly intended for models with a context window larger than 200k tokens - like Gemini 3 Pro, GPT 5.1, and Grok 4.

For Claude Opus 4.5, the standard context window is already 200k tokens, so Max Mode might not provide a noticeable increase.

A few clarification questions:

  • What behavior did you expect from Max Mode? For example, the ability to include more files in context?
  • Did you test the same requests with and without Max to compare?
  • Do you see differences in speed or cost?

You can read more about Max Mode here: Models | Cursor Docs

All it seemed to do was make it run a lot slower.

Also, have you noticed there’s been a significant slowdown with all the Anthropic models within Cursor over the past day or so?

It’s a Cursor-only issue, as they’re still lightning fast within Claude Code and Antigravity.

Is this a known issue you guys are working on?

Thanks for the update. Regarding the Anthropic slowdown in Cursor, this isn’t a confirmed widespread incident yet.

We need data to check routing and network:

  • In Cursor Settings → Network → Run Diagnostics, then share a screenshot of the result
  • Repeat the same prompt in Cursor with and without Max
  • Share 2–3 Request IDs from slow Anthropic calls: chat menu in the top-right → Copy Request ID

Please share these details and I’ll pass them to the engineers.

I’ve got a repeatable issue with Anthropic models in Cursor during the “planning next moves” phase.

  • It will sit on “planning next moves” for ~5 minutes at a time.

  • The actual code execution / implementations are fast; the slowdown is only between steps when it goes back to “planning next moves”.

  • For some prompts this happens 3–4 times in a single run, so one prompt can take 15–20 minutes end to end.

This only started in the last 24–48 hours.

Scope of the problem:

  • Only in Cursor. The same Anthropic models behave normally in Antigravity and Claude Code.

  • Only Anthropic models. Other models in Cursor (e.g. GPT-5.1) are “as expected” and very fast.

  • It’s not a networking issue on my side:

    • DNS ~3–4 ms

    • Connect/TLS to api2.cursor.sh ~80 ms each, total ~250 ms

    • Pings ~80–260 ms

    • Health checks to your API all return 200 and complete quickly.

So this looks like something specific to how Cursor is calling Anthropic for the planning phase, not my connection or machine. Please check whatever internal calls you’re making during “planning next moves” for Anthropic models over the last 24–48 hours.
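For reference, phase timings like the ones above can be gathered with curl's `--write-out` variables, which split a request into DNS, connect, TLS, and total time. This is a minimal sketch, not the exact method used in the thread; `api2.cursor.sh` is the endpoint named above, and the path is arbitrary since only connection timing matters here.

```shell
#!/bin/sh
# Measure per-phase latency to an endpoint (host taken from the thread above;
# any HTTPS host can be substituted). -o /dev/null discards the body so only
# the timing breakdown is printed.
curl -s -o /dev/null \
  -w 'dns:     %{time_namelookup}s\nconnect: %{time_connect}s\ntls:     %{time_appconnect}s\ntotal:   %{time_total}s\n' \
  https://api2.cursor.sh/
```

If DNS, connect, and TLS all come back in the tens of milliseconds while the in-editor request still stalls for minutes, that points at the server side or the editor's own request handling rather than the local network, which is the argument being made here.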

Thanks for the details.

This is a known issue we’re tracking here:

The team is already working on a fix.

I fixed it by rolling back to an earlier version of Cursor (Version: 2.0.11).

So it must have been an update you put out last weekend.

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.