Multiple Issues in Cursor 0.50.0: Response Delays & Large File Editing Problems

Description

After updating to Cursor 0.50.0 (released May 9, 2025), I’ve encountered several issues that significantly impact usability:

  1. Consistent Claude Model Response Delay: Claude models take between 1:18 and 2:00 (mm:ss) to start responding, as shown in my timing tests below. This appears to be systematic: Gemini and GPT models respond much faster (0–15 seconds), and before the 0.49.x update all models responded in under 25 seconds.

  2. Gemini 2.5 Pro Quality Regression: The latest Gemini model update performs worse than the previous version. While still labeled “gemini-2.5-pro-exp-03-25” in the UI, it is routed to the newest version (05-06) under the hood, as confirmed by Cursor developers. It often suggests manual CMD operations instead of executing tasks directly, and it struggles to follow instructions accurately.

  3. Issues with the New Search & Replace Tool for Large Files: While I appreciate the new large-file feature mentioned in the changelog, it has implementation problems:

    • Inserts code edits as single lines instead of properly formatted multiple lines
    • Creates syntactically invalid code that breaks functionality
    • Generates multiple unnecessary temporary files during editing

How to Reproduce

Claude Delay Issue:

  1. Be in “Slow Requests” mode (applied automatically after the 500 premium requests per month are used up)
  2. Use Claude 3.7 Sonnet, Claude 3.7 Sonnet Thinking, Claude 3.5 Sonnet, or Claude 3.5 Sonnet 2024-10-22
  3. Ask any question or request
  4. Time the response; it consistently takes between 1:18 and 2:00 (mm:ss) to begin
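For step 4, I timed the responses with a physical stopwatch; a minimal code equivalent (a hypothetical helper sketch, not part of Cursor) would look like this:

```python
import time

def stopwatch():
    """Return a zero-argument callable reporting seconds elapsed since creation."""
    start = time.monotonic()
    return lambda: time.monotonic() - start

# Start the stopwatch the moment the "Slow request..." banner appears,
# then read it when the model's first token arrives.
elapsed = stopwatch()
time.sleep(0.1)  # stand-in for waiting on the model's first token
print(f"Time to first token: {elapsed():.1f}s")
```

`time.monotonic` is used rather than `time.time` so the measurement is unaffected by system clock adjustments.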

Gemini Quality Issue:

  1. Use “gemini-2.5-pro-exp-03-25” (which routes to the newest version) in either normal or Slow Requests mode
  2. Request code operations or file edits
  3. Observe how frequently it suggests manual CMD operations instead of making direct changes

Large File Editing Issue:

  1. Open a JavaScript file with >5,000 lines
  2. Use any Claude model to modify the file (as noted in the changelog, only Anthropic models currently use the new search & replace tool)
  3. Observe how edits break formatting and create single-line code blocks

Model Response Time Comparison

I conducted multiple timing tests with each model in “Slow Requests” mode, starting a stopwatch immediately after the “Slow request, switch to Auto for a much faster response, or get fast access here…” message appeared. Results from slowest to fastest:

  • Claude 3.7 Sonnet Thinking = 2:00 (2 minutes)
  • Claude 3.7 Sonnet = 2:00 (2 minutes)
  • Claude 3.5 Sonnet = 1:20 (1 minute 20 seconds)
  • Claude 3.5 Sonnet 2024-10-22 = 1:18 (1 minute 18 seconds)
  • Gemini 2.5 Pro Exp 03-25 = 0:15 (15 seconds)
  • GPT 4o = 0:05 (5 seconds)
  • Gemini 2.5 Flash Preview 04-17 = 0:00 (instant)
  • GPT 4.1 = 0:00 (instant)
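For reference, the mm:ss figures above can be converted to seconds and ranked programmatically (a minimal sketch; the values are simply the timings reported above):

```python
# The first-token delays reported above, keyed by model name.
timings = {
    "Claude 3.7 Sonnet Thinking": "2:00",
    "Claude 3.7 Sonnet": "2:00",
    "Claude 3.5 Sonnet": "1:20",
    "Claude 3.5 Sonnet 2024-10-22": "1:18",
    "Gemini 2.5 Pro Exp 03-25": "0:15",
    "GPT 4o": "0:05",
    "Gemini 2.5 Flash Preview 04-17": "0:00",
    "GPT 4.1": "0:00",
}

def to_seconds(mmss: str) -> int:
    """Convert an mm:ss string to total seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# Print the models slowest-to-fastest with delays in seconds.
for model, t in sorted(timings.items(), key=lambda kv: -to_seconds(kv[1])):
    print(f"{model:32s} {to_seconds(t):>4d}s")
```

Even the fastest Claude delay (78 s) is over five times the slowest non-Claude delay (15 s).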

These times demonstrate a significant disparity between Claude models and other LLMs. The fact that all Claude models show such consistent, extended delays strongly suggests this is not a technical limitation but intentional throttling.

System Information

Version: 0.50.0 (user setup)
VSCode Version: 1.96.2
Commit: bbfa51c1211255cbbde8b558e014a593f44051f0
Date: 2025-05-09T20:59:08.043Z
Electron: 34.3.4
Chromium: 132.0.6834.210
Node.js: 20.18.3
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26100

Impact

These issues significantly reduce productivity. The Claude delay stretches what should be quick interactions to one or two minutes or more, while the Gemini quality regression limits its usefulness despite its faster response times. The formatting problems of the large-file editing tool (which, according to the changelog, is currently only available with Anthropic models) require extra time to fix manually, making the Claude models simultaneously the slowest and the only ones able to edit large files.

The severity of these issues has led some users to cancel their subscriptions and move to alternative AI coding assistants, as evidenced in numerous forum posts and Reddit threads.

Similar Reports

Unfortunately, due to forum limitations for new users (only 2 links allowed per post), I cannot share all the evidence I’ve collected. However, these issues are widespread across the community with dozens of reports on:

  1. The official Cursor forum (threads discussing slow response times, Gemini quality issues, and file editing problems)
  2. Reddit’s r/cursor community (multiple threads from the past week about Claude delays, Gemini regression, and edit failures)

I can share specific thread links in the comments if requested, or provide them via email to the Cursor team.

Feature Relevance

These issues directly relate to features highlighted in the 0.50.0 changelog, particularly the “Fast edits for long files with Agent” feature which states: “We’re rolling this out to Anthropic models first and will expand to other models soon.”

I’m hoping for an official response to these issues, which are affecting so many users. Many user reports have gone unacknowledged by the Cursor team, which adds to the frustration. A transparent explanation of what is happening with the Claude response delays and a timeline for fixing the file editing problems would go a long way toward restoring confidence in the product.

I’m happy to provide screenshots, screen recordings, or any additional details to help troubleshoot these issues. Looking forward to seeing these addressed in an upcoming update.


Here’s the supporting evidence for the issues mentioned in my post:

Cursor Forum Evidence:

Reddit Evidence:

Response Time Issues:

Gemini Quality Issues:

File Editing Issues:

General Issues:

Twitter Evidence:

This represents just a fraction of the reports I’ve found, but clearly demonstrates these are widespread issues affecting many users.


@danperks you need to read this post. I used to praise Cursor and push back against the articles disparaging it on the forum. But now I feel sad, because I no longer get the quality of responses I did in the old days. Two years ago, when I used Claude 3.5, each of my slow requests took barely more than 30 seconds.