I can no longer use “@web” to tell the agent to search the web. I have to type out “search the web for…”, and even then the agent is sometimes lazy and doesn’t search.
However, the largest problem is that when the agent does search the web, the response given to the agent is useless. See below. There’s some kind of intermediate agent that summarizes the search results. This makes it impossible to get the latest documentation or to search for new libraries, and it is a huge downgrade.
Please fix this immediately. If it’s not fixed I will be forced to switch to Claude Code or another coding tool.
I don’t understand how so few people are reporting this because this breaks EVERYTHING.
Any time Cursor does a web search, it returns completely unrelated data. The agent itself will often continue with “well, the web search was useless, so I’ll try to proceed without it.”
E.g., here I’m doing some research on prediction markets, where I have a “daily routine” of actions to perform. The agent makes the web search request it needs, and what comes back is… a freaking weather forecast.
Same issue. Web search has in fact been completely broken ever since I started specifically asking the agent to verify whether it receives actual data from the search. It seems to me that the feature doesn’t actually exist in Cursor, as if it is faking it. Could it be that it is just a sham?? I have seen many people report the same issue.
This is from just now, but it has been the same for the entire month, ever since I started having the agent check. I have also already reported it, and I have seen many people with the same issue, but zero replies from Cursor. I don’t think they care, tbh.
a9b915e1-df4e-428a-b0e0-c9e89360feea
I asked the agent to find Reddit threads discussing the differences between the various deep research API providers (Perplexity, Gemini with Google Search grounding, etc.). I would expect to get back results like any of these, which the agent could then summarize. Instead, the agent made 15 search attempts, all of which failed.
dfd881a8-aa05-4ebc-ac91-a673cb784b62
I asked the agent to find all of the Cursor office locations. I would expect it to tell me San Francisco, California, and Manhattan, New York. At first it found many different companies named Cursor (understandable). The agent then tried to narrow down the search by adding the Anysphere keyword and searched specifically for “Anysphere cursor ai headquarters office location”; however, the search results were pretty much identical to those of the previous query, even though the search query text changed.
f1fc896b-1130-4e89-b757-e9d799ad695d
I asked it to search for the weather in two different locations. It made two separate searches, “San Fransisco California weather forecast” and “San Antonio Texas weather forecast”, yet the AI summary for each search contained the forecast for BOTH locations. I would expect each search to return results only for the location/keywords it actually searched for.
I can’t share request IDs because of my company’s policies, but @BigPartyMolasses’s examples are a daily occurrence for me, and have been since the 2.0 release. Based on the LLM outputs, it seems the user’s original prompt gets injected into the web search tool, which confuses that tool significantly.
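To make the suspected failure mode concrete, here is a minimal sketch. Everything in it is hypothetical (function and variable names are illustrative, not Cursor’s actual internals); it only shows the difference between sending the agent’s focused query to the search backend versus leaking the whole user prompt alongside it.

```python
# Hypothetical illustration of the suspected bug. None of these names
# correspond to Cursor's real internals.

def build_search_request(tool_query: str, user_prompt: str, buggy: bool = False) -> str:
    """Return the text that would actually be sent to the search backend."""
    if buggy:
        # Suspected behavior: the original user prompt leaks into the
        # request, drowning out the agent's focused query.
        return f"{user_prompt}\n{tool_query}"
    # Expected behavior: only the agent's query is sent.
    return tool_query


# Example matching the weather anecdote above: the user's prompt mentions a
# "daily routine", and the backend latches onto unrelated keywords in it.
user_prompt = "Help me research prediction markets; my daily routine is ..."
tool_query = "prediction market daily trading routine"

clean_request = build_search_request(tool_query, user_prompt)
leaky_request = build_search_request(tool_query, user_prompt, buggy=True)
```

If the bug works like this, a search whose query text changes (as in the Anysphere example) would still return near-identical results, because the dominant part of the request, the injected prompt, never changes.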
Hello, this is a core feature necessary for Cursor to operate. What is taking so long with fixing this? It’s clearly a critical incident. Why hasn’t it been escalated? Per this thread, the service has been broken since the 5th of November, and you are still charging customers?!
Andrew is already in this thread and has been notified. Note that we require bug reports to be filed in Bug Reports, but in this case I believe we already have one, so this is a duplicate.
I don’t think this is a “fit it into the next sprint” kind of situation. This is a middle-of-the-night emergency-patch kind of situation. The fact that it has been going on this long is wild.