Started getting the extremely frustrating "generating forever" bug again in 0.44.9

For the last 24 hours I've been having essentially the same problem as before: completely unusable, unstable generations freezing semi-randomly on 0.44.9. Really frustrating; I've lost count of the hundreds of generations that have just stopped or never started.

See the original bug I posted; it's not completely fixed:

Still happening in 0.44.11. The issue I hit most is that once it freezes on "generating" forever, I can't generate again unless I restart the app. It's almost as if it's trying to contact Anthropic, gets cut off, and never times out, yet somehow blocks the generation thread altogether. The composer thread is borked from that point on, which breaks my ability to iterate on messages. I need to be able to retry messages; instead this forces me to start all over again, only to be blocked by the same issue.
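To illustrate the failure mode I suspect: if the request to the provider is fired without any abort timeout, a dead connection leaves the await pending forever and everything queued behind it stalls. Here is a minimal sketch of the kind of guard I'd expect, assuming a fetch-based client; the endpoint and function name are hypothetical, not Cursor's actual code:

```typescript
// Hypothetical sketch of the hang I suspect: an awaited request with no
// timeout never resolves, so the generation flow behind it stalls forever.
// The endpoint is made up; the AbortController pattern is the point.

async function generateWithTimeout(prompt: string, timeoutMs = 60_000): Promise<string> {
  const controller = new AbortController();
  // Abort if the upstream provider never answers; without this signal,
  // a dead connection can leave the await pending indefinitely.
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch("https://api.example.com/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
      signal: controller.signal,
    });
    return await res.text();
  } finally {
    clearTimeout(timer); // always clear the timer so it can't fire late
  }
}
```

If whatever Cursor awaits internally lacks an abort signal like this, it would match exactly what I see: no error ever surfaces, the UI just spins on "generating" until I restart.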

When we enter our Anthropic API key in the model settings (along with OpenAI; I have all three), Cursor should allow us to use our own API keys as a default backup when Cursor's backend is bogged down. I know this means losing agentic context and such, but having that model setting as a callout in the settings within the workspace profile would be nifty. Something like the sketch below is what I have in mind.
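A rough sketch of the fallback I'm describing, in case it helps. Everything Cursor-side here is hypothetical (cursorBackendGenerate is a made-up stand-in); only the @anthropic-ai/sdk calls are the real public API:

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Hypothetical stand-in for Cursor's built-in generation path.
declare function cursorBackendGenerate(
  prompt: string,
  opts: { timeoutMs: number },
): Promise<string>;

// If the built-in path times out or errors, retry the same prompt directly
// against the user's own Anthropic key from settings. The agentic context
// is lost on this path, as noted above.
async function generateWithFallback(prompt: string): Promise<string> {
  try {
    return await cursorBackendGenerate(prompt, { timeoutMs: 60_000 });
  } catch {
    const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
    const msg = await client.messages.create({
      model: "claude-3-5-sonnet-latest",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    });
    const block = msg.content[0];
    return block.type === "text" ? block.text : "";
  }
}
```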

Hey, we have had a few reports of this bug and we're trying to track it down at the moment. Keep an eye out for a message from me, as we may need some help with this!

Hi all, we might have a fix for the "generating…" bug on our backend. Can anyone confirm whether they are still facing this?

You may have to start a new composer to ensure we're working with a clean slate; if you still see this after that, do let us know!

@danperks,

I’ve been busy and couldn’t reply immediately, but this bug has been causing significant issues for me all week—it’s still unresolved and has cost me a lot of time and credits over the weekend. I suspect the gravity of the problem isn’t fully appreciated because it involves permanently lost work.

I consistently encounter this freeze with composer agent mode AND normal mode, and I finally managed to capture a session of it failing completely. I left the recording running because this issue occurs about every 20 minutes. Once a thread freezes, it’s lost for good and all progress is gone.

I’m a developer, so I’ve recorded a detailed video that may be the best repro you’ll get. In the video, you’ll see:

  1. Initial Frozen Thread – I start off with an already-frozen thread to show how it fully hangs. In that case, I had to restart to get past it.
  2. Regular Workflow – I continue working in Composer Mode (no Agent Mode here) for about 20 minutes. During that time, everything functions but occasionally struggles – typical whack-a-mole issues while coding with LLM tools… but Composer isn't doing great here either, with numerous apply mistakes.
  3. New Freeze – At the end, I get another freeze that kills the thread entirely, causing me to lose my work again.

It’s very frustrating and undermines productivity and creativity. The user experience becomes especially poor if you’re left waiting indefinitely for a response that never comes. Although it was reported over a month ago and was said to be fixed, the issue seems to have reemerged almost immediately. For reference, it doesn’t appear specific to Anthropic—4o and o1-mini also exhibit this behavior. From my perspective, it might be tied to the “shadow indexing” aspect of the client.

Please escalate or prioritize this bug—it’s a critical issue. It might be related to my particular development toolset, but I’m not doing anything out of the ordinary: just using Jest test tools and ESLint. I’ve also opened DevTools at the end of the recording, which might shed more light on what’s happening.

Thank you for looking into this. I'm happy to share any additional details or testing insights you need to help diagnose and solve the problem… I've wasted quite a bit of time on it, though.

Hey, not saying there isn't an issue here, but it seems your inputs to the composer are very long, which causes it to take a long time to reply. As a safe rule of thumb, if you send off a prompt and it takes over 60 seconds to reply, there's probably an issue; but 5-10+ seconds is not uncommon when the model is given a massive message, or a lot of context to sift through.

When working with large messages like this, I'd recommend routinely starting a new composer session to keep the message history from getting too long, as history length contributes heavily to how quickly the LLM replies.

The original bug reported in this post was for prompts that never responded and just hung forever (hours, if the user let them). We believe this is fixed, so if there is an issue, it’s not the one described in this post.


@danperks not sure if you watched all the way to the end, but the starting prompt was hanging forever and the last post hung forever. These aren't 5-10 second delays (I'm familiar with those); these are forever bugs, and I've tried leaving it for hours. It still happens on the 3 machines I use frequently. I use long posts like this in the other AI IDE and it never freezes. These seem like timeouts calling tools. I'm not sure if bugs are being conflated, but these are the same problems I've been experiencing for over a month.

@danperks another key observation: the long posts that have locked/frozen forever start immediately if you restart Cursor, with no delay. Consistently. That's the startup time, very quick, and you can see it clearly twice in the video. This is 100% not the time taken for the model to respond; I'm extremely familiar with Sonnet response times at long context lengths. I've also had freezes on the first post, even when it's short, so context length is a red herring here. I've personally spoken to 5 other devs who have experienced this more than 10 times.

Hey, admittedly I wasn't able to watch the video all the way through; thanks for pointing this out.

Would it be possible for you to get us a request ID when you find one of these chats hanging?
It’s pretty simple, but the guide to do this is here:

The caveat is that you will need to have Privacy Mode disabled before you attempt the request that hangs forever, as we have very limited visibility into requests made with Privacy Mode turned on (hence the point of it!).

With this, I can see where the request is getting to and why it is hanging!