Slow Pool Information

I understand what you mean, and I am not completely opposed to trials. What I mean is that some users never intend to pay; they will just try every means to keep using the trial indefinitely. That is unfair to the paying users.

1 Like

so why are you still letting tons of new people sign up and take their money when you know you can’t even handle the users you’ve already got?

3 Likes

:joy:

This is a paradox:


  • More new users, more paying customers.
  • More new users, more cost.

  • More paying customers, more speed.
  • More users, more slowdown.
1 Like

If what you are saying is true, then GPT 4o wouldn’t be taking 3-5 minutes per chat message either.

Cursor was too good to be true anyway. $20 a month for infinite access to all the models? Anthropic and OpenAI must have caught on, and are deliberately deprioritizing requests from Cursor before they announce their own coding competitor to it.


For the first month of my subscription, I paid $20. This month, an additional $20 was deducted, which might be due to my oversight in not checking the backend settings. Currently, all requests are slow, with each request taking several minutes to process. I am unsure when this issue will be resolved. Had I known it would be like this from the start, I certainly would not have subscribed. :joy:

1 Like

Cursor has become very slow, and it’s unacceptable for it to be this slow despite having a subscription.

4 Likes

Let me know when you fix this issue. It's not only Anthropic; even when we pick ChatGPT it's still so slow. So yeah, I'm not renewing my sub today; I'll be back when you fix the issue. But I think it's time to remove the free-tier trials, if that's going to fix the issue anyway.

Trials barely use requests since they fixed the abusers and limited trials to 150. It's YOLO mode and the agent that drain Sonnet resources.

On the one hand, they want to attract more new users through trials.

On the other hand, the influx of new users will slow down the entire service.

:sweat_smile:


???

One thing I noticed: even though they say the problem is related to Anthropic, OAI is also pretty slow in slow mode. Another thing I realized is that the requests in slow mode all come with a wait time that looks pretty similar to me (45-55s), regardless of the time of day. It seems like it's not about demand anymore, but rather a predetermined wait-time range?
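If anyone wants to check this for themselves, here is a minimal sketch of the idea (the wait times below are hypothetical stand-ins for whatever you actually log): record the wait before each slow request at different times of day and look at the spread. A demand-driven queue should vary widely; a predetermined range stays in a narrow band.

```python
import statistics

# Hypothetical wait times (seconds) logged before each slow-pool request,
# sampled at different times of day. Substitute your own measurements.
waits = [47, 52, 45, 55, 49, 51, 48, 54]

mean = statistics.mean(waits)
spread = max(waits) - min(waits)

# A narrow spread around a stable mean suggests a fixed delay window,
# not a queue whose length tracks demand.
print(f"mean wait: {mean:.1f}s, spread: {spread}s")
```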

3 Likes

I’ve been using Haiku, trying to avoid the slow queue as much as I can (halfway through my 500 monthly requests). I think I probably use 1k every month. I can’t wait 5 minutes per response after using up my 500…
If OpenAI requests also have this “no real queue” (not a demand issue) wait, then I will likely not be renewing. :confused:

At this point, another pricing tier needs to be introduced for Claude usage, since demand is so high. I’d rather pay more for exclusive access and a priority queue if it means not being in the same pool with everyone hammering the Sonnet 3.5 API. Cursor is becoming unusable at times because of capacity issues.

Are you aware you can increase your fast requests in your settings for an extra $20 per month?

1 Like

Hey, we recently introduced usage-based pricing for these models. It costs 4 cents per request, which is equivalent to $20 for 500 requests. But you don’t have to buy 500 requests that you might not use up by the end of the renewal period, so you don’t lose anything in this case.
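For anyone checking the numbers in that post, the usage-based rate works out to exactly the package price (a quick sketch using only the figures quoted above):

```python
# Usage-based pricing from the post above: 4 cents per request,
# versus a prepaid package of 500 requests for $20.
price_per_request = 0.04  # dollars
package_requests = 500

# Paying per request for a full package's worth of usage
package_cost = price_per_request * package_requests

# So the break-even point is exactly 500 requests: below that,
# usage-based pricing is cheaper because you only pay for what you use.
print(f"${package_cost:.2f} for {package_requests} requests")
```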

4 Likes

Another plan would be nice :slight_smile:

I’ve already used my $40 and about to hit my next $40 usage based as well and my plan does not reset until the 8th of each month.

Which means I am using roughly 2000 requests every 10 days as someone who is working on 4 to 5 different projects per month.

Would be cool to have like a 10,000 requests plan for like $300 per month or whatever math makes sense to users who use a lot and get a slight discount for paying up front.
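Running the numbers from this post (the $300/10,000 plan is the poster's hypothetical, and the $0.04 rate comes from the usage-based pricing mentioned earlier in the thread):

```python
# Rough projection from the figures in the post above.
requests_per_10_days = 2000
days_per_month = 30
payg_rate = 0.04  # dollars per usage-based request

monthly_requests = int(requests_per_10_days * (days_per_month / 10))
payg_cost = monthly_requests * payg_rate

# The hypothetical 10,000-request plan at $300 would price each request at:
bulk_rate = 300 / 10_000  # $0.03, i.e. a 25% discount on the $0.04 rate

print(f"{monthly_requests} requests/month ≈ ${payg_cost:.0f} pay-as-you-go; "
      f"bulk rate ${bulk_rate:.2f}/request")
```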

4 Likes

Increased a bit? Going from 10 seconds to 3 minutes is 18×.
Is 18 times slower “a bit”?
And if the issue is Anthropic, then why does ChatGPT also take 3 minutes to respond?

First month (December), I paid $20 and I used it without any extra costs, without waiting 3 to 5 minutes to get a response, even after using my 500 free requests. It was working like how you advertised it!

This month (November), you charged me $20 for Pro, and after that, you charged me another $20, which I thought was the Pro fee that I hadn’t paid yet! And again, you tried to charge me another $20, which I denied!

So basically, last month I paid $20 and everything was fine. This month I paid double ($40), and I have to wait 3 minutes for each response, which makes it completely unusable!

I tried all the other models and they all have their own issues. The only one that is somewhat workable is the exp flash model, which has the problem of regenerating the whole file just to edit 2 lines, and even then it fails 8 times out of 10. The other models are completely useless unless all you want is to write a to-do and deploy, and they also lack agent support, which makes it worse!

I emailed customer support and asked about this, and they acted like it was like this from the first month I used it! They told me things like:

In December, you likely didn’t use all your fast requests as quickly, which is why you didn’t see additional charges. In January, you used your initial 500 fast requests and purchased an additional package ($20 on Jan 15).

This is completely wrong. I used my 500 requests and I did not get charged any extra cost, and I didn’t have to wait 3 minutes for a response! Also, I didn’t purchase an additional package; you charged me, and I thought that was the unpaid fee for the November Pro subscription! I told them this, and they replied with:

Thanks for providing those details. I’m connecting you with a teammate who can better investigate these unusual slow request times. They’ll look into why you’re experiencing such long delays and get back to you as soon as possible.

And I heard nothing else from them!

So basically, customer support is acting like they don’t know anything about what’s going on! Then I come here and I find this, which makes me think: Why don’t you have an official announcement on the website about this? Why not clearly state that you’ve implemented a pay-per-500-requests model and that the service becomes unusable after the initial 500 requests unless you pay again?

That would be much more honest than this sneaky way of doing business.

11 Likes

Just came to weigh in on my own degraded experience. The recent issues with the slow pool are awful, and you really will have to address this sooner rather than later. You don’t have the best reputation out there as it is, and this recent issue has most devs ready to flock to the next best thing. It seems I and many others are holding on a bit longer to see if you fix it, but don’t be surprised at mass cancellations if you don’t.

3 Likes

I feel the same. I’ve been a paid customer since Cursor.so came out, and I’m in the first few months of my second annual subscription, having kept good faith in the product.

The recent wait time in the slow pool is unbearable, not to mention the seemingly degraded quality of the responses. After learning that one needs several prompts to get a small task done, and having to wait 3 minutes for each prompt, I quickly learned helplessness.

Cursor really needs to fix this, or I’m done. I’m trying out VS Code Copilot Pro now.

I don’t know much about the slow-pool queuing method adopted from other platforms (Midjourney?). But Cursor needs to know that the main use of Cursor is to streamline a coding workflow. It is not like AI-based image generation, where users don’t mind the wait and can check back later. A three-minute wait in the Cursor slow pool breaks the coding flow: one will have a new idea to try during those three minutes, and by the time the results are back, they could be less useful.

Anyway, waiting for image generation is like having everything in the oven and waiting for it to cook: no further steps depend on it. Waiting for code generation is like waiting between chops.

There could be better solutions WITHOUT a significant traffic increase. For example, Cursor could count those three minutes as a freeze time (like the cooldown of a game skill), starting from the last time a response is received. Once the freeze time is over, the response should be instant. The freeze time would encourage users to prompt deliberately.
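The freeze-time idea above could be sketched roughly like this (a hypothetical model of the proposal, not anything Cursor actually implements; the class name and 180-second window are illustrative):

```python
class CooldownGate:
    """Sketch of the proposed freeze-time scheme: instead of queueing a
    request for ~3 minutes, refuse it during a cooldown window measured
    from the last completed response, then serve it instantly."""

    def __init__(self, cooldown_seconds: float):
        self.cooldown = cooldown_seconds
        self.last_response_at = float("-inf")  # no response yet

    def seconds_until_ready(self, now: float) -> float:
        """How long until the freeze window expires (0 if already open)."""
        return max(0.0, self.last_response_at + self.cooldown - now)

    def try_request(self, now: float) -> bool:
        """Run the request if the window is open; reject it otherwise.
        Simplification: the response is treated as instantaneous."""
        if self.seconds_until_ready(now) > 0:
            return False
        self.last_response_at = now
        return True

gate = CooldownGate(cooldown_seconds=180)
print(gate.try_request(now=0))    # first request runs immediately
print(gate.try_request(now=60))   # still frozen (120s of cooldown left)
print(gate.try_request(now=180))  # cooldown elapsed, runs instantly
```

The point of the design is that the cost of a slow request is paid *between* prompts, when the user is reading the last response anyway, rather than as dead waiting time after they hit send.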

4 Likes

I just opened this account to talk about this problem. I’ve been using the app for two months, just the chat tab. It’s a very useful app compared to its competitors. There was no problem until this week, despite my spending all my fast requests quickly. Then suddenly, all requests started waiting in long queues. OK, there could be too much demand on the app, but the problem has not been solved for a week, and there is no proper explanation. They push users to buy more credits. I also checked the time spent in the queue: it is 3-3.5 minutes and not changing. They said there was a capacity problem with Claude, but what about the GPTs? They’re slow too. My theory: they made a huge mistake in the resource consumption of the models. It costs too much, and now they are trying to save the day.

3 Likes