Slow requests are too slow - use case example

Let's say I ask the AI to perform a task.
It takes 3 to 4 minutes to respond.
Then I notice it performed the task incorrectly or forgot some detail.
I ask it to fix the issue.
It takes 3 to 4 minutes to respond.
I notice another issue and ask it to fix that.
It takes 3 to 4 minutes to respond.
I notice a color is wrong in the interface and ask for a fix.
It takes 3 to 4 minutes to respond.

Assuming it takes 1 minute to type a request to the AI, each round trip is about 5 minutes: 1 minute working and 4 minutes waiting.
In 20 minutes, I spent 4 minutes working and 16 minutes just waiting for responses.
Multiply that by 3 to fill an hour: in 1 hour, I work 12 minutes and spend 48 minutes waiting for responses.

Now apply this to a full 8-hour workday.
You are spending 6.4 hours waiting for responses and only 1.6 hours working.
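
If it helps, here is the same arithmetic as a small Python sketch. The 1-minute typing and 4-minute waiting figures are just the assumptions from above, not measurements:

```python
# Rough arithmetic behind the figures above. The typing and waiting
# times are the assumptions from this post, not measured values.
TYPING_MIN = 1.0
WAITING_MIN = 4.0

cycle_min = TYPING_MIN + WAITING_MIN        # 5 minutes per round trip
waiting_fraction = WAITING_MIN / cycle_min  # 0.8 -> 80% of time is waiting

workday_hours = 8
waiting_hours = workday_hours * waiting_fraction
working_hours = workday_hours - waiting_hours

print(f"Waiting: {waiting_hours:.1f} h")    # Waiting: 6.4 h
print(f"Working: {working_hours:.1f} h")    # Working: 1.6 h
```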

At this point, it might be faster to skip Cursor and hand-code everything.


Yes, that happens if you use architect mode with OpenAI or Anthropic models like GPT-4 or Claude. I believe you have used up all your 500 fast requests plus over 1,000 slow requests, because at that stage Cursor became unusable for me too. Try the two new models (DeepSeek V3 or R1).


Due to our limitations with Anthropic (read here), Claude models are expected to have high queue times, especially if you are a heavy user.

As @THE-RAVEN has recommended, if you do not want to purchase any more fast requests, your best bet is to try another model, as both OpenAI and DeepSeek models should have much shorter queues than Anthropic models.


Sounds good. Can you let me know which model is comparable to Claude 3.5? I've tried GPT-4o, and it is unable to comprehend 90% of what Claude can: it cannot code properly compared to Claude and has trouble understanding basic requests.


DeepSeek V3 is very fast and almost on the same level as Claude (I'm using it for both backend and Swift work), and R1 is, in my opinion, better than Claude 3.5 Sonnet.