We have a service that exposes several MCP tools and is integrated with Cursor. Occasionally, when a tool call takes longer than usual (over 30 seconds), Cursor fails to handle the response properly. Although the underlying tool successfully returns a response, Cursor does not reflect this — the tool call never receives a checkmark (green tick).
This leads to a couple of issues:
- It creates the impression that the tool call is stuck, even though it has completed.
- Since Cursor appears unresponsive, users often cancel the request and retry, only to hit the same issue again.
See the attached screenshot (unimportant bits redacted), which shows the tool call succeeding while Cursor fails to handle the response.
We haven’t found anything in Cursor’s MCP logs that explains this behavior, and nothing shows up in Developer Tools after the tool call is made either. What is the expected handling for long-running tool calls? Is there a timeout configured in Cursor for tool executions? And how can users tell whether a tool call is truly stuck or it’s a Cursor/client issue?
Any help or insights would be appreciated. Thanks!
For example, say we have long-running tests and we want to create an MCP server that lets Cursor’s agent run them (a very valid and useful use case, and entirely possible in Claude Code). Right now that’s either impossible in Cursor, or I have to do something hacky: run the tests in the background and ask Cursor to sleep for some duration and check again, something like the sketch below. That solution is bad because the agent ends up skipping the test, making other changes or doing other things, and moving on.
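For reference, the hacky version looks roughly like this. It’s only a sketch, assuming the official MCP Python SDK’s FastMCP interface and pytest as the test runner; the `start_tests` / `check_tests` tool names are made up for illustration:

```python
# Sketch of the "run in background, poll later" workaround.
# Assumes the MCP Python SDK (FastMCP) and pytest; tool names are hypothetical.
import subprocess
import uuid

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("test-runner")
runs: dict[str, subprocess.Popen] = {}

@mcp.tool()
def start_tests(path: str = "tests/") -> str:
    """Start the test suite in the background and return a run ID immediately."""
    run_id = str(uuid.uuid4())
    runs[run_id] = subprocess.Popen(
        ["pytest", path],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    return f"started run {run_id}"

@mcp.tool()
def check_tests(run_id: str) -> str:
    """Poll a previously started run; returns 'running' or the final output."""
    proc = runs.get(run_id)
    if proc is None:
        return "unknown run id"
    if proc.poll() is None:
        return "running"
    output, _ = proc.communicate()
    return f"exit code {proc.returncode}\n{output}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

Each call returns quickly, so nothing trips the client timeout, but it only works if the agent can be trusted to keep calling `check_tests` instead of wandering off, which is exactly the unreliable part.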
Thanks @Jonathan_Todd. Our team is looking at increasing the MCP timeouts, but there will always be a limit on how long we can postpone a response to the AI, since it blocks an AI call.
I recommend running tests in parallel where possible; many test runners support parallel execution. Caching, or running only the tests for dirty files (files that were changed), is also recommended.
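For example, something along these lines (a rough sketch, assuming pytest with the pytest-xdist plugin installed; the `tests/` layout and the git-based “dirty files” selection are just illustrative):

```python
# Sketch: run tests in parallel and, optionally, only for files changed since HEAD.
# Assumes pytest plus pytest-xdist ("-n auto"); the changed-file filter is illustrative.
import subprocess

def run_fast(changed_only: bool = True) -> int:
    args = ["pytest", "-n", "auto"]  # pytest-xdist spreads tests across CPU cores
    if changed_only:
        diff = subprocess.run(
            ["git", "diff", "--name-only", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        changed_tests = [f for f in diff if f.startswith("tests/") and f.endswith(".py")]
        if changed_tests:  # with no file args, pytest would fall back to running everything
            args += changed_tests
    return subprocess.run(args).returncode
```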
That’s not sufficient when the test is an in-depth integration test whose full run might take 5 or 10 minutes.
Blocks an AI call? You can’t close the connection and re-send the call when the MCP action is done?
Let’s say you can’t for some reason, and I’ve built MCP actions that run long-running tasks in the background (which I have)…
The alternative is for Cursor to expose an API that my MCP action can use to force the agent to stop and wait until it is prompted again, or, even more UX-friendly, until the API is called to resume it (if the user hasn’t resumed it already). I use Cursor rules to simulate this today, but it isn’t reliable, and this is a case where you want the agent to reliably stop and wait for the MCP action to complete before assessing the outcome.
I know you guys aren’t willing to allow programmatic resumption of the agent, to keep your API costs from rising, but that’s one of the things Claude Code beats you on by a lot. There could be a middle ground where an API-based resume only works once until the user interacts again.
@andrewh Not sure it was a good idea to disable the timeout altogether. Anyhow, which release will include the “disabled timeout” enhancement? I’d like to keep an eye out.