Web lookups now require authorization per site?

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Previously, when the LLM wanted to view a web page, you had to authorize the web fetch, and then it would go and do its thing. I added web fetching to the allowlist and that worked great. In a recent release the allowlist appears to be on a site-by-site basis, which is pretty useless. Can we get a fix, please?

Thanks!

Steps to Reproduce

Have the LLM fetch a doc and see the “add url to allowlist” prompt.

Expected Behavior

Allow all web access or none, as a single global setting rather than per-site.

Operating System

macOS

Version Information

Version: 2.4.23
VSCode Version: 1.105.1
Commit: 379934e04d2b3290cf7aefa14560f942e4212920
Date: 2026-01-29T21:24:23.350Z
Build Type: Stable
Release Track: Early Access
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Darwin arm64 24.6.0

For AI issues: which model did you use?

opus-4.5

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the report.

The per-URL allowlist is intentional. It gives you precise control over which sites the agent can access automatically. From your description, it sounds like you want full auto-run for browser tools.

Try switching to Auto-run in Cursor Settings > Agents > Auto-Run. In this mode, all browser actions will run right away without asking for confirmation.

That said, please be careful with auto-run on untrusted sites, since the agent can navigate anywhere and send data without confirmation.

If you want a middle ground, like template or wildcard support for the allowlist, that would be a feature request. Let me know if auto-run works for you or if you need something else.
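
For what it’s worth, wildcard support could be as simple as suffix matching on the hostname. Here’s a rough hypothetical sketch of what that might look like; the pattern format and the `isAllowed` function are invented for illustration and are not an existing Cursor feature:

```typescript
// Hypothetical wildcard allowlist matcher -- not an existing Cursor feature.
// "*.example.com" matches example.com and any of its subdomains;
// a bare "example.com" matches only that exact host.
function isAllowed(url: string, allowlist: string[]): boolean {
  const host = new URL(url).hostname;
  return allowlist.some((pattern) => {
    if (pattern.startsWith("*.")) {
      const apex = pattern.slice(2); // "example.com"
      return host === apex || host.endsWith(pattern.slice(1)); // ".example.com"
    }
    return host === pattern;
  });
}

const allowlist = ["docs.rs", "*.github.com"];
console.log(isAllowed("https://docs.rs/serde", allowlist));              // true
console.log(isAllowed("https://gist.github.com/x", allowlist));          // true
console.log(isAllowed("https://sometinyrandomdomain.com/x", allowlist)); // false
```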

Ah, super interesting, good call on the data-exfiltration problem. I wonder if there is a way to classify a site as “trustworthy”, such as .gov sites or high-use company domains (major manufacturers, etc.). Perhaps it could assign a trust score to each domain and let us set the threshold ourselves, e.g. if it’s trying to reach Google versus sometinyrandomdomain.com.
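
To make that concrete, here’s a hypothetical sketch of the kind of scoring I mean; the domain list, heuristics, and threshold are all made up, and nothing like this exists in Cursor today:

```typescript
// Hypothetical domain trust scoring -- illustrative only, not a Cursor feature.
const KNOWN_GOOD = new Set(["google.com", "github.com", "mozilla.org"]);

function trustScore(url: string): number {
  const host = new URL(url).hostname;
  const apex = host.split(".").slice(-2).join("."); // crude apex extraction (ignores co.uk etc.)
  if (KNOWN_GOOD.has(apex)) return 1.0;             // widely used company domain
  if (host.endsWith(".gov") || host.endsWith(".edu")) return 0.9;
  return 0.2;                                       // unknown domain: low trust
}

// The user picks the threshold; anything below it prompts for confirmation.
const AUTO_FETCH_THRESHOLD = 0.8;

function shouldAutoFetch(url: string): boolean {
  return trustScore(url) >= AUTO_FETCH_THRESHOLD;
}

console.log(shouldAutoFetch("https://docs.google.com/a"));          // true: auto-fetch
console.log(shouldAutoFetch("https://sometinyrandomdomain.com/x")); // false: ask first
```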

Or perhaps also protect the URL itself from having data inserted into it: following a link verbatim versus having the LLM augment a followed link (e.g. by appending data as query parameters).
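
Again, purely a hypothetical sketch to illustrate the distinction; `isVerbatimLink` and everything around it is invented:

```typescript
// Hypothetical check: was this link followed verbatim, or did the model
// construct/augment it (a common vector for smuggling data into query params)?
function isVerbatimLink(candidateUrl: string, fetchedPages: string[]): boolean {
  // Verbatim: the exact URL string occurs in a page the agent already fetched.
  return fetchedPages.some((page) => page.includes(candidateUrl));
}

const pages = ['<a href="https://example.gov/report">report</a>'];

console.log(isVerbatimLink("https://example.gov/report", pages));             // true: plain follow
// Same link with model-appended data: not found verbatim, so prompt the user.
console.log(isVerbatimLink("https://example.gov/report?data=SECRET", pages)); // false
```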