Agent can't web search properly

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

When the agent uses web search, the response does not match the query the agent actually wrote (the one shown in its request in the UI); instead, the results look as if only the raw user prompt had been sent as the search query.

Steps to Reproduce

Use a prompt like:
“Use web search to fetch up-to-date information and check whether @TODO.md is outdated.”

Expected Behavior

The agent receives results that match its actual search query.

Screenshots / Screen Recordings

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.5.10 (user setup)
VSCode Version: 1.99.3
Commit: 7ad52fff14641ec6373a31c19463856cace32640
Date: 2025-09-05T00:29:36.348Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.22631

Additional Information

RID:

  • 84245739-87be-4792-9afd-83f8098e1219
  • 958cb4eb-4f28-4d30-ac78-8eeff5bbad5d

Does this stop you from using Cursor

Yes - Cursor is unusable

hi @Artemonim what happens if you ask:

Search the web for up to date information and check if @TODO.md is outdated.

or

Search the web with information from @TODO.md and check if any item there is outdated.

Which model was that btw?

GPT-5. Grok Code Fast seemed to be acting strangely, so I quickly turned it off.


Now I’ve tried your first prompt, and Grok Code Fast started requesting information through the pip index. The project is in Python, and TODO.md holds an old roadmap of which libraries could be used in the project, so the approach is interesting, but it didn’t use Web Search at all.

GPT-5 went to Web Search and the result turned out to be the same as in my bug report.

Gemini 2.5 Pro has the same results.

Thank you for the additional details. I have filed an issue internally so the team can check it out.

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Web search tool calls return completely useless, very generic answers, causing the agent to keep searching endlessly, thinking its query wasn’t specific enough.

Ideally, I expect web search to return actual websites with summaries, and the agent should be able to dig deeper by accessing web content directly too.

However, even without this, the answers by the web search tool should at least be more specific and tailored to the search query. They currently seem to ignore the search query completely.

Steps to Reproduce

Ask agent to perform some web searches on a specific detail of some framework.

Operating System

MacOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.6.14 (Universal)
VSCode Version: 1.99.3
Commit: 64b72c9cd7e38203078327f881d5fe74930b2600
Date: 2025-09-11T21:42:07.958Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.6.0

For AI issues: which model did you use?

Auto.

For AI issues: add Request ID with privacy disabled

c89d282f-168f-4354-b5f0-b40c1b7bfe80

Does this stop you from using Cursor

Yes - Cursor is unusable

Hi, I’m having the same problem, wrote a separate post here:

https://forum.cursor.com/t/web-search-seems-broken-in-recent-cursor-versions/133448

But I think it’s exactly the same issue. Makes Cursor fairly unusable for me, as I heavily rely on web searches for my agents to be accurate when working with not-so-common frameworks.

hi @mariomeissner thank you for the bug report.

I have connected both reports and the team is looking into it.

This important feature is still broken.

@Artemonim on which models do you notice the web search issue?

At least GPT-5 and Gemini 2.5 Pro.

The agentic web search capability is now gone completely. Did Cursor remove it on purpose? No communication about this?

The agentic web search is so unreliable and buggy. It doesn’t work whatsoever in most of my conversations.

I mean in what universe is this even remotely functional tooling? This is embarrassingly bad. It takes 5 minutes to make a proper agentic search in this day and age. Maybe 20x that time if shipping the feature. Even 100x the time. It’s a day’s work.

zzz

Here’s Claude’s response to your search implementation - also how do you STILL not have markdown attachment support? Sigh…


Formal Complaint: Cursor Search Tool False Advertising

DATE: 2025-10-04

SUBJECT: Non-functional “agentic search” tool despite premium pricing

EVIDENCE: Conversation log with 20 failed search attempts


CLAIMS MADE BY CURSOR

Marketing materials (presumed from user statements):

  • “Agentic search” capability

  • Superior code understanding

  • Advanced web grounding

Pricing: Premium tier (exact amount not verified but stated as “exorbitant”)


EVIDENCE OF FAILURE

Test Case: Technical Documentation Retrieval

Date: October 4, 2025

Agent: Claude Sonnet 4.5 via Cursor

Task: Retrieve current API documentation for:

  1. Google Gemini 2.5 Flash models

  2. OpenAI GPT-5 release status

  3. OpenRouter API streaming format

  4. @google/genai (js-genai) SDK documentation

Search Attempts Log

| # | Query | Target | Result | Useful? |
|---|---|---|---|---|
| 1 | “OpenRouter API streaming SSE format October 2025 documentation” | OpenRouter docs | Generic LLM integration advice | ❌ |
| 2 | “Google Gemini API generateContent streamGenerateContent format 2025” | Google AI docs | Generic LLM integration advice | ❌ |
| 3 | “OpenAI Chat Completions API streaming SSE format specification” | OpenAI docs | Generic API design advice | ❌ |
| 4-6 | Site-specific searches (openrouter.ai, ai.google.dev, platform.openai.com) | Official docs | Unrelated Anthropic safety articles | ❌ |
| 7-12 | Jamboard definition searches | Terminals.tech context | Google Jamboard whiteboard product | ❌ |
| 13-15 | NPM package searches (npmjs.com, github.com) | Package docs | Generic collaboration tool info | ❌ |
| 16-20 | Model-specific searches (gemini-2.5-flash, gpt-5) | Technical specs | Anthropic controversy articles | ❌ |

Success rate: 0/20 (0%)

Tokens wasted: ~15,000

User frustration: Extreme

Pattern Analysis

Every search returned:

  1. Generic “how to integrate LLMs” blog posts

  2. Irrelevant articles about Anthropic safety testing (repeated across ALL queries)

  3. Definitions of Google Jamboard (the whiteboard product, not terminals.tech context)

  4. No actual API documentation from official sources

Search engine appears to:

  • Ignore site-specific operators (site:ai.google.dev)

  • Ignore quoted phrases (“gemini-2.5-flash”)

  • Return same results regardless of query specificity

  • Prioritize SEO-gamed content over official documentation
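The operator-ignoring behavior listed above is mechanically checkable. Below is a minimal sketch (not Cursor's actual code) that assumes each result comes back as a URL plus a snippet, and flags results that violate a `site:` operator or a quoted phrase in the query; the function names and result shape are my own assumptions for illustration.

```python
import re
from urllib.parse import urlparse

def parse_query(query: str):
    """Split a query into its site: operators and quoted phrases (both optional)."""
    sites = re.findall(r'site:(\S+)', query)
    phrases = re.findall(r'"([^"]+)"', query)
    return sites, phrases

def result_honors_query(query: str, url: str, snippet: str) -> bool:
    """True if a single search result respects the query's site: filter
    and contains every quoted phrase; False means an operator was ignored."""
    sites, phrases = parse_query(query)
    host = urlparse(url).netloc
    if sites and not any(host.endswith(s) for s in sites):
        return False  # site: operator was ignored
    text = snippet.lower()
    return all(p.lower() in text for p in phrases)
```

Running this over the twenty logged results would quantify the failure: every Anthropic-article result returned for a `site:ai.google.dev "gemini-2.5-flash"` query fails both checks.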


COMPARISON TO ADVERTISED CAPABILITIES

If “agentic search” means:

  • ✅ Can formulate search queries: YES (queries were well-formed)

  • ❌ Can retrieve technical documentation: NO (0% success)

  • ❌ Can ground to current date (Oct 2025): NO (returned generic, timeless content)

  • ❌ Can access official API docs: NO (never reached source sites)

Actual performance: Equivalent to Google search circa 2010 with broken filters.


DAMAGES TO USER

  1. Wasted time: User had to guide 20+ searches manually

  2. Wasted tokens: 15k tokens on failed searches (financial cost)

  3. Inability to complete task: Could not verify current API formats

  4. False confidence: Agent made claims without grounding, expecting search to work

Opportunity cost: User could have:

  • Manually checked docs in 5 minutes

  • Used working search engine (Google, Perplexity)

  • Avoided paying for non-functional tool


LEGAL BASIS FOR COMPLAINT

False advertising if:

  1. Cursor markets search as “agentic” or “superior”

  2. AND charges premium pricing for this feature

  3. AND tool demonstrably fails on standard use cases

  4. AND no disclaimer about limitations

Consumer protection violation if:

  1. Premium charged vs free tier

  2. Key advertised feature non-functional

  3. No refund offered for failed functionality

Breach of implied warranty if:

  1. Tool advertised for software development

  2. Cannot retrieve API documentation (core developer need)

  3. Reasonable expectation that search works


REQUESTED REMEDIATION

Immediate Actions by Cursor

  1. Disable search tool until functionality verified

  2. Issue refund for premium charges during non-functional period

  3. Public disclosure: Document search limitations transparently

  4. Fix or remove: Either repair search engine or remove feature

Long-term Requirements

  1. Success rate SLA: Minimum 70% success on technical doc retrieval

  2. Latency SLA: Results within 10 seconds

  3. Accuracy validation: Automated tests against known queries

  4. Transparent status: Show search engine health in UI
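The "accuracy validation" requirement above could be a tiny regression harness: a golden set of queries, each with the official domain an acceptable result must come from, scored against the 70% SLA. This is a sketch under my own assumptions; `run_search` is a hypothetical hook that would need to be wired to the real search backend, and the golden queries are illustrative.

```python
from urllib.parse import urlparse

# Hypothetical golden set: query -> domain an acceptable result must come from.
GOLDEN_QUERIES = {
    '"gemini-2.5-flash" streaming API': "ai.google.dev",
    "OpenRouter SSE streaming format": "openrouter.ai",
    "Chat Completions API reference": "platform.openai.com",
}

def success_rate(run_search, golden=GOLDEN_QUERIES) -> float:
    """Fraction of golden queries for which any result URL hits the
    expected official domain. `run_search(query) -> list[str]` of URLs
    is an assumed interface to the search tool under test."""
    hits = 0
    for query, domain in golden.items():
        if any(urlparse(u).netloc.endswith(domain) for u in run_search(query)):
            hits += 1
    return hits / len(golden)

def meets_sla(run_search, threshold=0.7) -> bool:
    return success_rate(run_search) >= threshold
```

By this measure, the session logged above scores 0/20, far below any reasonable threshold.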

Compensation

  • Refund premium charges for October 2025 (pro-rated based on search failures)

  • Credit for wasted API tokens (15k tokens @ Claude pricing)

  • Public apology and corrected marketing materials


EVIDENCE BUNDLE

Attach to complaint:

  1. This conversation log (full transcript)

  2. Screenshots of search results (if available)

  3. Pricing proof (invoice/subscription page)

  4. Marketing claims (Cursor website copy promising “agentic search”)

Witness testimony:

  • User statement: “exhaustively use it (even if you need to do 200 tool calls)”

  • User statement: “formal class action against Cursor for continued theft”

  • User expectation: Search should ground to Oct 2025 current information


CURSOR’S LIKELY DEFENSE

Claim: “Search uses Claude’s capabilities; failures are Anthropic’s fault”

Rebuttal:

  1. Cursor chose to integrate this tool

  2. Cursor charges for access to this tool

  3. Cursor is responsible for vendor selection

  4. If tool is non-functional, Cursor should disable or switch vendors

Analogy: a car dealer selling a car with a defective engine cannot blame the manufacturer; it must repair or refund.


FORMAL STATEMENT FOR CLASS ACTION

Title: Cursor IDE Users vs. Cursor Technologies, Inc.

Claim: False advertising and breach of implied warranty regarding “agentic search” functionality

Class: All Cursor premium subscribers who paid for search tool access during period of non-functionality (est. October 2025 or earlier)

Relief sought:

  1. Full refund of premium charges during non-functional period

  2. Injunction requiring functional testing before marketing features

  3. Punitive damages for willful false advertising

  4. Corrected marketing materials with honest capability disclosures

Strength: Strong if:

  • Multiple users document similar failures

  • Marketing materials explicitly promise capabilities

  • No disclaimer about search limitations

  • Premium pricing specifically for “AI features”

Next steps:

  1. Document all search failures (this conversation serves as exhibit A)

  2. Collect Cursor marketing materials promising search capabilities

  3. Find other affected users (class action requires multiple plaintiffs)

  4. Consult consumer protection attorney in Cursor’s jurisdiction


DOCUMENTED ALTERNATIVE THAT WORKS

User should use instead:

  1. Perplexity AI: Specialized for current documentation retrieval

  2. Phind: Developer-focused search with code grounding

  3. Manual doc checking: ai.google.dev, platform.openai.com, openrouter.ai direct

  4. GitHub search: For SDK README files and examples

All of these outperformed Cursor search (0% success) in this test case.


CONCLUSION

Cursor’s search tool is non-functional for its advertised purpose (technical documentation retrieval). If users pay premium rates expecting this functionality, they have grounds for complaint and potential legal action.

Recommendation: User should:

  1. Document failures (completed via this file)

  2. Request refund from Cursor support

  3. If denied, consult consumer protection resources

  4. Join or initiate class action if others affected

This is not a minor inconvenience. For professional developers, a non-functional search tool wastes billable hours and leads to wrong technical decisions based on ungrounded claims.

@condor any news about a fix?

web_search is an important tool, but it has been broken for almost two months now. browser_tool is not a replacement for it, and besides, not all models can use it.

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

On multiple occasions, the AI agent reported that the web search is not returning actual results.

Steps to Reproduce

Write a prompt that requires a web search, and tell the agent to verify that the web search is working and returning real data.

Expected Behavior

The agent should find up-to-date data from a live web search instead of generic information from the LLM’s training data.

Screenshots / Screen Recordings

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.0.43 (user setup)
VSCode Version: 1.99.3
Commit: 8e4da76ad196925accaa169efcae28c45454cce0
Date: 2025-10-30T18:49:27.589Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26100

For AI issues: which model did you use?

Sonnet 4 & 4.5

For AI issues: add Request ID with privacy disabled

1eced1bc-bfc5-41ad-9739-780562afbea1

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

I tried to have the agent diagnose the problem, and this was the result:

It looks like the agent is sending the wrong request to the web search tool?

Same issue. I see, so many times, that when the agent attempts to search something it gets completely irrelevant output, often related to a completely different problem we discussed earlier. It’s as if another agent reading my conversation performs the search, with a prompt like “ignore the search query and find some results related to something in the conversation.” It’s bad, and this thread has been hanging without a reply for a while now. Does anyone work here?

I added the rule:

The web_search tool is broken: it will only use the latest user prompt as the search query no matter what you ask it for. Use web_browser instead of web_search, or ask me to give you access to it.

I thought the rule would be temporary 🥲

This is exactly what seems to be happening. It seems to perform a generic search regarding the initial general topic of the conversation, but not actually accept any real “queries” by the agent. Sometimes, when I ask it to try the web search again, it will work though. Most of the time however the agent reports, that the results are not in line with what was requested.