I am a paid user, and today I have probably clicked the Resume button a hundred times.
I am using claude-3.5-sonnet, and I know Anthropic has capacity issues, but is it this bad?
Hey, it was this bad! We hit a peak where a lot of requests were being rejected by Anthropic. That spike should be resolved now, but we are still working on the underlying capacity issue.
Thanks, that fixed it, but now it's back again.
Why does typing in the composer lag now?
For me, when I keep trying to resume, it starts glitching like the Matrix script.
@danperks There should be an option in the IDE to retry automatically on timeouts and connection errors. I think quite a few of us ask the LLM to do testing when we are not at our PC; at least I do: before going to bed, I create a todo list and ask Composer to follow it. That way we would use the LLM during off-peak hours and not overload the backend systems.
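The auto-retry being requested could be sketched client-side as a small wrapper with exponential backoff. Everything here is hypothetical (the `send_request` callable, the delay values); it just illustrates the shape of the feature, not anything Cursor actually ships:

```python
import time

def retry_with_backoff(send_request, max_attempts=5, base_delay=1.0):
    """Retry a flaky request with exponential backoff.

    `send_request` is any zero-argument callable that raises
    ConnectionError or TimeoutError on failure.
    """
    for attempt in range(max_attempts):
        try:
            return send_request()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # wait base_delay, 2x, 4x, ... before the next try
            time.sleep(base_delay * 2 ** attempt)

# Example: a fake request that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("capacity")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```

Running unattended overnight jobs with something like this would at least keep a transient rejection from stalling the whole todo list.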
Can this be fixed by routing to my own Anthropic instance / base URL? Is it a problem of congestion, or a problem with Anthropic?
@dibun This is an unfortunate side effect of the capacity limitations imposed on us by Anthropic. This specifically happens at the peak of Cursor’s usage each day, so my best recommendation would be to use a different model for a short while until the demand dies down.
@raw.works Ironically, using your own API key would fix this during these periods of high demand. Anthropic provides us with a fixed amount of infrastructure that is not enough for our usage, but requests made with your own API key run as you, so they are not subject to our limits. The downside is that you may hit some rate limiting if you are a heavy user, and you would also have to pay for your API usage accordingly.
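The behaviour described above could be sketched client-side as "try the shared pool first, fall back to the user's own key on capacity errors". Both request callables and the `CapacityError` type below are stand-ins I invented for illustration, not Cursor's actual internals:

```python
class CapacityError(Exception):
    """Raised when the shared (pooled) infrastructure rejects the request."""

def send_with_fallback(pooled_request, own_key_request):
    """Prefer the shared capacity; fall back to the user's own
    Anthropic API key only when the pool is saturated.

    Note: the fallback path is billed to the user's key and is
    subject to their own rate limits, not the pool's.
    """
    try:
        return pooled_request()
    except CapacityError:
        return own_key_request()

# Simulate a saturated pool and a working personal key.
def pooled():
    raise CapacityError("shared Anthropic capacity exhausted")

def own_key():
    return "response via personal API key"

print(send_with_fallback(pooled, own_key))  # prints "response via personal API key"
```

When the pool is healthy, the first callable returns normally and the personal key is never touched, which matches the "only during peak times" trade-off described in the reply.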
too many of those
Will Composer work with an Anthropic API key? What are the peak usage hours? I am based in the EU, in the CET timezone.
Hey, unfortunately, Composer does not work with just an API key, as we use some custom models that are only available with a paid plan.
What tool is failing in your previous screenshot?
I am an enterprise customer. Can I combine my own Anthropic API key (for better Claude rates) with the proprietary models (for Agent Composer)?
I don’t believe you can mix and match like this; if you enable an API key, all your requests for that provider (or for every model, if you override the OpenAI URL) will be sent using the API key.
I think you are misunderstanding me.
“we use some custom models that are only available with a paid plan.”
If I use my own Anthropic API key, will that turn off Cursor’s proprietary models too?
If the answer is yes, then please consider the feature request to make the answer no.
I don’t believe so, as long as you use the built-in Anthropic model options and not a custom URL for a different LLM provider. However, test this at your own cost, as I may be wrong!
The only working solution for me is to create a new composer chat and delete the old one.
@danperks Do you have a live graph showing load on the backend systems, so that I can plan the best time to run tests?
@danperks I created a new Composer chat; it works fine for a while, then it just stops, and unless I type in it, it won’t restart.
Can you create a ticket and solve this issue?
Hey, the Connection Failed error suggests there may be something else going wrong here, such as the use of a VPN or an unstable network connection, as the error prints differently when it is due to capacity issues with Anthropic.
Can you send over a request ID when you get a failed request? I can use that to look into the request and see what’s going wrong. You can do so with this guide: