Composer Functionality Seems to Be Bogged Down by a Confluence of Issues Right Now


Version: 0.45.11
Commit: 1.96.2
Date: f5f18731406b73244e0558ee7716d77c8096d150
Electron: 2025-02-07T09:43:58.555Z
ElectronBuildId: 32.2.6
Chromium: undefined
Node.js: 128.0.6613.186
V8: 20.18.1
OS: 12.8.374.38-electron.0

Introduction

First, a bit of background: I’m Korean (from the good Korea, not the one run by that… individual up north). I’m using a translator for this, so the meaning might not be perfectly clear, but I had my AI buddy double-check it, so hopefully it gets across.

The Situation

Since updating to version 0.45.11, I’ve noticed a gradual slowdown starting around 7 PM KST on Saturday. There’s a running joke in Korea that Cursor slows down when the “folks in India get busy.” So, naturally, I Googled India’s time zone, exclaimed, “Aha!”, and was perfectly willing to accept the increasing “slow requests.” The slowdown wasn’t even that bad at first, just a 20-40 second wait. (And I love you, Indian people. Your knowledge is a grace.)

But then something changed.

‘Connection failed’ – a search for this term turns up results from around 3 months ago – a clear sign that something is going wrong.

When I first checked the forum, it was quiet. So, like any reasonable person, I figured it was an issue on my end and waited to see what happened. As the situation worsened, I decided to test it myself. (Wow, waiting 4-5 minutes for a request is a new experience for me!) I kept monitoring the forum, and it seems like the issues that are popping up are pretty unique.

Here’s what I’ve gathered from the forum:

  1. Slow response times.
  2. Claude Sonnet is really slow.
  3. Heavy server load / overload issues.
  4. Mac users are complaining a lot.

Yep, I’m an Apple slave. Phone, computer, even my YouTube-watching device is an iPad. So, my problems align closely with those of other Mac users.

However, users on other OSes are reporting issues too, but different ones: they seem tangentially related, with key differences.

Here’s my theory. Now, I’m just a beginner, so I could be completely wrong. But I’m hoping this might shed some light on what’s happening with Cursor. Think of it as a classic case of Korean nosiness, where a clueless idiot might accidentally stumble upon the answer.

Possible Cause of Overload

The problem seems to be tied to using “Claude Sonnet 3.5.”

Cursor appears to be using a queuing system for “slow requests.” You wait your turn, and when it’s your turn, the request is sent. I’ve seen the same behavior with ChatGPT models.

Now, here’s the issue: When using Sonnet, after waiting in the queue and sending the request, no response comes back. You can wait forever. My guess is that the context is sent to Claude, but the output never makes it back.

It consumes the input tokens, but never generates output tokens. The thing is, this issue is intermittent. Something seems to be broken in the logic. (This is what I think is causing the confusion. People keep sending requests, some get responses, some don’t, it all accumulates, and the number of requests keeps increasing, causing an overload.) This problem goes away when I use other models. ChatGPT’s 4o or o3 mini work fine. The issue is specific to Claude.
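To picture this failure mode, here is a small self-contained Python sketch (not Cursor’s actual code, just an analogy): a server that reads the request but never streams a reply, and a client that uses a read timeout instead of waiting forever on the spinner.

```python
import http.client
import socket
import threading

def make_stalled_server():
    """Hypothetical stand-in for a backend that accepts input but never replies."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.recv(4096)              # the "input tokens" are consumed...
        threading.Event().wait(10)   # ...but no output is ever streamed

    threading.Thread(target=serve, daemon=True).start()
    return port

def request_with_timeout(port, timeout=1.0):
    """Send a request and give up if no response bytes arrive in time."""
    conn = http.client.HTTPConnection("127.0.0.1", port, timeout=timeout)
    try:
        conn.request("POST", "/complete", body="prompt")
        conn.getresponse()           # blocks until the first bytes arrive
        return "got response"
    except socket.timeout:
        return "stalled: input sent, no output received"
    finally:
        conn.close()
```

With a timeout like this, the client at least learns the request stalled instead of spinning on “[…]” forever, which matches the symptom described above.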

I’ve seen forum admins suggest things like, “Are you using a VPN?” Nope. I’m practically a shut-in who spends 60+ hours indoors before venturing outside. My IP doesn’t change, and I’m not sophisticated enough to use a VPN to do anything nefarious with LLMs.

Screenshot 2025-02-11 7:51:11 AM

In short, the input goes through, but the output doesn’t reach me. As shown in the picture, the […] loading indicator, which signals “I’m about to respond!”, spins infinitely without any actual tokens being streamed. So the input goes in, but the output stalls.

I think this is the core of the confusion. I seem to be the only one mentioning the […] indicator specifically, which suggests the symptom is ‘uncommon,’ and uncommon symptoms are easily left out when a typical user describes a problem.

Project Rules

Another issue that came up with the latest update (and part of why I started using Cursor): Cursor rules, especially project rules, are acting strangely. There’s chatter about Sonnet being involved. This is also confusing developers. Why does ChatGPT respond while ignoring project rules? Mystery.

The Situation is Worsening

This deadlock seems to be affecting all users. You wait for “slow request,” cook a whole pack of ramen, then when it’s finally your turn, the […] loading indicator mocks you. Now you wait another 5 minutes. After finishing the ramen and doing the dishes, “Connection Failed” greets you. A complete waste of time. I’m so frustrated, I’m about to grab a gun and go after Kim Jong-un.

This update was clearly amazing. But the unreasonable process and signal chain make it difficult for users to troubleshoot. If a downgrade were possible, the core problem could be identified faster.

Cursor, please stay awesome. Claude is the best. I don’t want to use ChatGPT, that arrogant piece of trash. When I use an agent model to ask ChatGPT about the problem, it keeps insisting it’s right. It can’t even fix the issue itself. Then, when Claude is randomly applied, it works like a charm, creating a perfect contradiction. Now, I’m going to take the changed project and post it in the ChatGPT composer chat, saying, “See, you were wrong, Claude was right.” Yes, it’s misuse. But I need to argue. I’ve wasted over three days, so I hope you understand.

Stay warm and have a good day.


@PMusic wow, that’s a bug report that’s fun to read.

Welcome to the forum and thank you for being so detailed and not jumping to conclusions. Your report is very easy to understand, congrats on successful communication through AI. Very sorry to hear about your frustration, we have all been in a similar situation.

Yes you are right that Claude is sometimes overloaded.
Today I got errors twice even on fast requests (the first 500/month).
From my point of view there are a few causes:

  • Many people use Claude because it is very good. This sometimes causes slower responses.
  • Yes, it may sometimes also time out, though I think the Cursor team is improving the error messages shown so we can understand what the reason is.
  • It is true that heavy regional usage may slow down AI requests more for some people than for others, and not necessarily due to Cursor’s servers.

Usually when there is an issue like this, you would see an error message with a Request ID that you could post here for the Cursor team to review.

Check out https://docs.cursor.com/troubleshooting/request-reporting

Also have a look at your Cursor editor details. They show very strange values that don’t match the actual details; maybe because of the AI translation?

Version: 0.45.11
VSCode Version: 1.96.2
Commit: f5f18731406b73244e0558ee7716d77c8096d150
Date: 2025-02-07T09:43:58.555Z
Electron: 32.2.6
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.3.0

Please let us know if the issue could also be from a long Chat/Composer session. When the context gets too long, the AI has a hard time producing answers. The same happens when large files (too many lines) are attached, as this increases the context size and takes longer to process.

Some features like Composer are more powerful than the Chat, but each sub-step of Composer takes a lot of processing, as different parts are handled by different LLMs.

Thanks again for the great bug report. Hope you can provide more details and someone has a good advice for you.


Not sure if this is related, but Composer isn’t able to do anything correctly. It keeps saying it made changes to a file, but didn’t do any of it. Then it says it will actually do it this time, and again it does nothing. It’s really frustrating. It was working perfectly just a few hours ago. This is not what I’m paying for.


This thing is also happening to me but only with the o3-mini and gemini-pro models, which are relatively newer so I’m guessing they haven’t been integrated well enough.

Are you facing this issue with other models too?


While waiting for your response, I reinstalled Cursor. If providing troubleshooting information helps contribute to the development of this great tool, then I’m glad to do it!

I’ve also been thinking deeply about context length. When I think back, the Role (global) option was initially very short (e.g., “You answer in Korean”). Then, I learned about Role Prompts in .cursorrules online, and that’s when it started to get longer. That might be the reason. I hope testing helps improve things.

I’ll continue testing while controlling large files and overly long chats as much as possible. However, I’m concerned because I remember the issue occurring even before the reinstall, even in new Composer chats that didn’t contain anything. I hope things work out.

I hope you have a great day. It’s nice to be able to have a conversation with people from far away in this global world!

That’s interesting. I was getting responses fine with the o3-mini model. Could it be that the model is affected differently for each user? Or maybe there’s a conflict with the AI rules settings? I find this problem quite peculiar. I hope this issue gets resolved for you as well.

(Oh, that reply wasn’t meant for me. Sorry.) How embarrassing…

I’ve had a similar experience, especially with ChatGPT. Whenever that happens, submitting a prompt (chat) along with my request like, ‘You have the right to correct. Act as a collaborator for Cursor,’ sometimes makes it work. Though, I haven’t had this happen with Sonnet.

Yeah, the responses are fine, but the issue is that it sometimes says it has applied the changes to files when it hasn’t, and it has to be prompted again to apply them, which consumes even more tokens and eats up all the fast requests very soon :frowning:


Yes, there is a difference between models. Also, I’m not sure which language you use to write prompts; not every model handles non-English languages the same. While I think in many languages, I usually use English for prompting to avoid issues with different writing systems.

Overly long Cursor rules can cause issues. It sometimes confuses the AI when it gets too many rules to follow. That’s why Cursor added the new `.cursor/rules` `.mdc` files. Those are loaded based on the description you enter in the Cursor settings and a file filter, so only a limited number of rules applies.
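For illustration, a scoped project rule file might look roughly like this (a sketch assuming Cursor’s `.mdc` frontmatter fields `description` and `globs`; the rule text and glob pattern are invented examples):

```
---
description: Conventions for the payments module
globs: src/payments/**/*.ts
---

- Use the shared ApiError class for error responses.
- Never log raw card numbers.
```

Because the glob limits where the rule applies, only a small set of rules rides along with any given request, which keeps the context lean.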

Some models, like o3-mini, are not yet well integrated; the Cursor team needs a bit more time to adjust their process to work well with those models.

I think people here are eager to help others and to share experience to get better at using Cursor and coding with AI. Feel free to share what you feel comfortable with.

It’s certainly good to try a simple approach again: perhaps check which models work well with simple prompts and build up to more detailed prompts and rules from there.

Some users had issues with other extensions loaded into Cursor, but without further information it is hard to tell what could cause it. You can always email Cursor support, and if you have a Request ID they may be able to see if something is hanging on their side. Sometimes it happens.

I’m in the same region and had no issues using Claude on the same device and Cursor version during the last few days.

Did you by any chance run out of your 500 fast requests? Once you cross 500 fast requests, you are put in the slow queue with Claude, and the more requests you make in the slow queue, the more it can slow you down. Try a few other models and compare.

Wish you a nice afternoon.


Mine started to work better all of a sudden. It made some big changes, noticed the linting errors and fixed them, asked me to start a script from the terminal, then read the output, made more changes, fixed some other linting errors, and kept repeating this cycle for 5-10 minutes until it was completely done. By the end it only cost me 1 fast credit for the whole time it iterated and tested everything until it was finished.

I’m used to going back and forth but I think having it write a test script to check the output enabled it to verify its own work and continue on its own. The script was a batch file to start the server, send some data, get a response and check if it matched the required response. I’ll definitely be doing this more often as it saved me an hour.
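For what it’s worth, the same self-verification idea can be sketched in Python (the endpoint and payload here are invented for the example; in practice you would point the check at your own dev server instead of the toy one started here):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Stand-in "server under test" so the sketch is self-contained."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        data = json.loads(body)
        reply = json.dumps({"echo": data["msg"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep the check's output quiet

def start_server():
    """Start the toy server on a free port; returns the port number."""
    server = HTTPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

def check_server(port):
    """Send known input and verify the response matches what we expect."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/",
        data=json.dumps({"msg": "ping"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read()) == {"echo": "ping"}
```

A pass/fail check like this gives the agent something concrete to read back, which is what lets it keep iterating on its own.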
