Opus 4.7 "blocked under Anthropic's Usage Policy" error

Hello,

This morning I tried to use the new Opus 4.7 model, but I get the error message "We are unable to complete this request because it was blocked under Anthropic's Usage Policy." with every action I take. I had been working on this project for a few weeks using Opus 4.6 with no issues. This is an internal software dev project and I can't think of any reason it would be against policy.

I am also unable to switch back to using Opus 4.6, so I'm at a roadblock.

Appreciate the help.


got the same problem

random "We are unable to complete this request because it was blocked under Anthropic's Usage Policy."

This is what safetyism looks like. The lockdown on Anthropic’s models will only get more strict as the models become more performant. Anthropic will certainly try to mitigate false positives, but honestly this is a very hard problem to solve. Your best option is to use models from a provider which takes a different view on the balance between safety and individual liberties.

dunno what safety issue this could be, I'm working on a Preact web calculator. It's clearly a bug.

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

“We are unable to complete this request because it was blocked under Anthropic’s Usage Policy.”

Every interaction with Opus 4.7 in Cursor IDE receives this response.

Steps to Reproduce

Use Opus 4.7

Expected Behavior

oh ffs

Operating System

MacOS

Version Information

Version: 3.0.16
VSCode Version: 1.105.1
Commit: 475871d112608994deb2e3065dfb7c6b0baa0c50
Date: 2026-04-09T05:33:51.767Z
Layout: glass
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Darwin arm64 25.3.0

For AI issues: which model did you use?

Opus 4.7

Does this stop you from using Cursor

Yes - Cursor is unusable


I have the same issue:
Request ID: e82c696d-caf1-46ad-8d75-89335cc747f5
{"error":"ERROR_OPENAI","details":{"title":"Request blocked by Anthropic","detail":"We are unable to complete this request because it was blocked under Anthropic's Usage Policy.","isRetryable":false,"additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":true}
Request blocked by Anthropic We are unable to complete this request because it was blocked under Anthropic’s Usage Policy.
NLi: Request blocked by Anthropic We are unable to complete this request because it was blocked under Anthropic’s Usage Policy.
at Bz_ (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:28907:24552)
at Nz_ (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:28907:23543)
at Wz_ (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:28908:6487)
at h6u.run (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:28908:11285)
at async vDn.runAgentLoop (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:41216:11960)
at async zkd.streamFromAgentBackend (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:41286:12151)
at async zkd.getAgentStreamResponse (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:41286:18486)
at async B3e.submitChatMaybeAbortCurrent (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:29014:16809)
at async Ma (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:40269:4230)


@deanrie is there some way to report bugs directly to Anthropic? They seem to have walled themselves into a big "we're too important to listen to you" secret garden. The final prompt that generated this idiotic error message before I gave up and switched to GPT was "hello!". The project I was working on is a work scheduler for an industrial assembly line.


Hey, here’s some context on this error.

The message “blocked under Anthropic’s Usage Policy” comes directly from Anthropic’s own content moderation. Cursor is just passing it through, and we can’t influence the filter decision on our side. In Opus 4.7, the cyber safeguards are stricter than in 4.6, so the same project or prompt that used to work can get flagged on the newer model. Sometimes this happens even for harmless messages since the filter looks at the full chat context and the codebase.

What you can do:

  • Switch to another model, like Opus 4.6 or GPT-5
  • If you’re sure this is a false positive for a legitimate security or research project, Anthropic has a Cyber Verification Program for cases like this. Details and the request path are here: Real-time cyber safeguards on Claude | Claude Help Center. If Cursor shows a link to a form in the error, that’s the right channel.
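For anyone scripting against the API and wanting to handle this gracefully, here is a minimal sketch of detecting the non-retryable moderation block. It assumes the error payload has the shape pasted earlier in this thread (`details.detail`, `details.isRetryable`); the field names are taken from that paste and may differ in other contexts.

```python
import json

# Example payload mirroring the shape pasted earlier in this thread
# (field names are illustrative, copied from that paste).
raw = json.dumps({
    "error": "ERROR_OPENAI",
    "details": {
        "title": "Request blocked by Anthropic",
        "detail": "We are unable to complete this request because it was "
                  "blocked under Anthropic's Usage Policy.",
        "isRetryable": False,
    },
    "isExpected": True,
})

def is_policy_block(payload: str) -> bool:
    """Return True if the payload is a non-retryable Usage Policy block."""
    data = json.loads(payload)
    details = data.get("details", {})
    return (
        "Usage Policy" in details.get("detail", "")
        and not details.get("isRetryable", True)
    )

print(is_policy_block(raw))  # → True
```

Since `isRetryable` is `false`, retrying the same request is pointless; the sensible fallback is rewording the prompt or switching models, as suggested above.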

@caseys, about “I can’t switch back to Opus 4.6”, can you share what exactly happens? Do you see an error, is the model missing from the list, or does the change not stick? A screenshot would help us debug it.

I think the problem is on Cursor's side.

The block appears when the agent wants to ask a question or open a file like the plan, so it seems to be an interaction between Cursor and Opus rather than Opus alone.

Hitting the same issue :frowning:

Sharing my experience. I finally made an account on the forum just for this.

Opus 4.7 was right to block my request; I had just communicated it badly. In the project I am working on, my prompt could be interpreted as trying to add negative bias to a system.
As I get more relaxed during my sessions (when everything is working), I sometimes communicate what I want increasingly poorly and explain less of the why.

I was able to make it understand why I wanted to do that by giving an example.
In my specific case, I had to provide an example explaining that I actually wanted to do the opposite. I even described a real-life scenario where the 'no-bias' approach caused real damage.

This was enough to align with the policy, apparently.

It may look like a bad thing at first, but I noticed improved output quality once the model understood the why.
It may feel like an annoying thing to deal with. However, if the outcome improves the model in future training, the flagging of 'non-compliant' vs 'compliant' may actually work in our benefit: there is clearly a layer 'judging' the requests, so there is a 'good result' for this judgment that is bound to influence the output positively (since 'bad judgment' = "request blocked mid-thinking", one can assume 'good judgment' = "continue thinking, this is compliant").

TL;DR
Do not give up and explain why you are asking what you are asking, while being mindful to explain your good intentions.

Thanks Dean. User error on the 4.6 topic; I got that figured out. Regarding the safeguards on 4.7, understood that it's Anthropic's policy and Cursor is just passing it through. They've obviously got some calibration to do. Re: the user who says we need to explain our good intentions: the prompts I'm talking about are so incredibly innocuous that I don't even know what I could say to provide further clarity on policy adherence.