Version: 0.45.11
VSCode Version: 1.96.2
Commit: f5f18731406b73244e0558ee7716d77c8096d150
Date: 2025-02-07T09:43:58.555Z
Electron: 32.2.6
ElectronBuildId: undefined
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
Introduction
First, a bit of background: I’m Korean (from the good Korea, not the one run by that… individual up north). I’m using a translator for this, so the meaning might not be perfectly clear, but I had my AI buddy double-check it, so hopefully it gets across.
The Situation
Since updating to 0.45.11, I’ve noticed a gradual slowdown starting around 7 PM KST on Saturday. There’s a running joke in Korea that Cursor slows down when “the folks in India get busy.” So, naturally, I Googled Indian Standard Time, exclaimed “Aha!”, and was perfectly willing to accept the growing pile of “slow requests.” The slowdown wasn’t even that bad at first: just a 20-40 second wait. (And I love you, Indian people. Your knowledge is a grace.)
But then something changed.
“Connection failed”: a forum search for this term turns up results from around three months ago, a clear sign that something is going wrong.
When I first checked the forum, it was quiet. So, like any reasonable person, I figured it was an issue on my end and waited to see what happened. As the situation worsened, I decided to test it myself. (Wow, waiting 4-5 minutes for a request is a new experience for me!) I kept monitoring the forum, and it seems like the issues that are popping up are pretty unique.
Here’s what I’ve gathered from the forum:
- Slow response times.
- Claude Sonnet is really slow.
- Heavy server load / overload issues.
- Mac users are complaining a lot.
Yep, I’m an Apple slave. Phone, computer, even my YouTube-watching device is an iPad. So, my problems align closely with those of other Mac users.
Users on other OSes are reporting issues as well, but different ones. They seem tangentially related, yet with key differences.
Here’s my theory. Now, I’m just a beginner, so I could be completely wrong. But I’m hoping this might shed some light on what’s happening with Cursor. Think of it as a classic case of Korean nosiness, where a clueless idiot might accidentally stumble upon the answer.
Possible Cause of Overload
The problem seems to be tied to using “Claude Sonnet 3.5.”
Cursor appears to be using a queuing system for “slow requests.” You wait your turn, and when it’s your turn, the request is sent. I’ve seen the same behavior with ChatGPT models.
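To make my guess concrete, here is a toy model of what I imagine the “slow request” pool looks like. This is purely my speculation, not Cursor’s actual implementation; the class name and methods are hypothetical:

```python
from collections import deque

# Hypothetical model of the "slow request" pool: requests wait in FIFO
# order, and the oldest waiting request is dispatched to the model first.
class SlowRequestQueue:
    def __init__(self):
        self.pending = deque()

    def submit(self, request_id):
        # A user's request joins the back of the line.
        self.pending.append(request_id)

    def dispatch(self):
        # When capacity frees up, the front of the line is sent out.
        return self.pending.popleft() if self.pending else None

q = SlowRequestQueue()
for rid in ("req-1", "req-2", "req-3"):
    q.submit(rid)

print(q.dispatch())  # req-1: first in, first out
```

Under this model, a request that never completes would still occupy a server slot, which is the seed of the overload theory below.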
Now, here’s the issue: When using Sonnet, after waiting in the queue and sending the request, no response comes back. You can wait forever. My guess is that the context is sent to Claude, but the output never makes it back.
It consumes the input tokens but never generates output tokens. The thing is, this issue is intermittent; something seems broken in the logic. (I think this is what’s causing the confusion: people keep sending requests, some get responses, some don’t, the failures accumulate, the request count keeps climbing, and the result is an overload.) The problem goes away when I use other models. ChatGPT’s 4o or o3-mini work fine. The issue is specific to Claude.
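The “accepted but never answered” pattern can be tested for: read the token stream with a per-token timeout, and treat “no token within N seconds and no end-of-stream signal” as a stall. This is a minimal self-contained sketch of that diagnostic, with simulated streams standing in for the real API:

```python
import queue
import threading
import time

def consume_stream(chunk_iter, stall_timeout=5.0):
    """Drain a token stream, declaring a stall if no token (or end-of-
    stream marker) arrives within stall_timeout seconds.
    Returns (tokens, stalled)."""
    q = queue.Queue()
    DONE = object()

    def pump():
        # Runs in a background thread so a hung stream can't block us.
        for chunk in chunk_iter:
            q.put(chunk)
        q.put(DONE)

    threading.Thread(target=pump, daemon=True).start()
    tokens = []
    while True:
        try:
            item = q.get(timeout=stall_timeout)
        except queue.Empty:
            return tokens, True   # stream went silent: stalled
        if item is DONE:
            return tokens, False  # stream finished normally
        tokens.append(item)

def ok_stream():
    # Healthy case: tokens arrive, then the stream ends.
    yield from ["Hello", " ", "world"]

def hung_stream():
    # Broken case: the request is accepted but nothing ever comes back.
    time.sleep(60)
    yield "never"

print(consume_stream(ok_stream()))         # (['Hello', ' ', 'world'], False)
print(consume_stream(hung_stream(), 0.5))  # ([], True)
```

If the reports here are right, a Sonnet request in the bad state would look like the second case: the queue slot is consumed, but the consumer waits forever.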
I’ve seen forum admins suggest things like, “Are you using a VPN?” Nope. I’m practically a shut-in who spends 60+ hours indoors before venturing outside. My IP doesn’t change, and I’m not sophisticated enough to use a VPN to do anything nefarious with LLMs.
In short, the input goes through, but the output never reaches me. As shown in the picture, the […] loading indicator spins forever, signaling “I’m about to respond!” while no actual tokens are ever streamed. The input goes in; the output stalls.
I think this is the core of the confusion. So far I seem to be the only one describing the […] indicator behavior, which may be why it is missing from other reports: it reads as “uncommon,” and uncommon details are easy to overlook when a typical user describes a problem.
Project Rules
Another issue that came up with the latest update (and part of why I started using Cursor): Cursor rules, especially project rules, are acting strangely. There’s chatter about Sonnet being involved. This is also confusing developers. Why does ChatGPT respond while ignoring project rules? Mystery.
The Situation is Worsening
This deadlock seems to be affecting all users. You wait for “slow request,” cook a whole pack of ramen, then when it’s finally your turn, the […] loading indicator mocks you. Now you wait another 5 minutes. After finishing the ramen and doing the dishes, “Connection Failed” greets you. A complete waste of time. I’m so frustrated, I’m about to grab a gun and go after Kim Jong-un.
This update was clearly amazing. But the opaque request pipeline and signal chain make it difficult for users to troubleshoot. If downgrading were possible, the core problem would have been identified faster.
Cursor, please stay awesome. Claude is the best. I don’t want to use ChatGPT, that arrogant piece of trash. When I ask ChatGPT about the problem through an agent model, it keeps insisting it’s right, yet it can’t fix the issue itself. Then, when Claude is randomly applied, it works like a charm. A perfect contradiction. Now I’m going to take the fixed project, paste it into the ChatGPT composer chat, and say, “See, you were wrong; Claude was right.” Yes, that’s misuse. But I need to win this argument. I’ve wasted over three days, so I hope you understand.
Stay warm and have a good day.