For the last 24 hrs I've been having essentially the same problem as before: completely unusable, unstable generations freezing semi-randomly, on 0.44.9. Really frustrating. I've lost count of the hundreds of gens that have just stopped or never started.
See the original bug I posted; it's not fixed completely:
Still happening in 0.44.11. The issue I hit most is that once it freezes on generating forever, I can't generate again unless I restart the app. It's almost as if it's trying to contact Anthropic, gets cut off, and never times out, but somehow blocks the generation thread altogether... the composer thread is borked from that point on, which breaks my ability to iterate on messages. I need to be able to retry messages, and this forces me to start all over again, only to be blocked by the same exact issue.
We need the ability, when we enter our Anthropic API key in settings for models, along with OpenAI (I have all three), for Cursor to let me use my own API keys as a default backup when it is bogged down. I know this means losing agentic context and such, but having the MODEL SETTING as a callout in the settings within the workspace profile would be nifty.
Hey, we have had a few reports of this bug and we're trying to track it down at the moment. Keep an eye out for a message from me as we may need some help with this!
I've been busy and couldn't reply immediately, but this bug has been causing significant issues for me all week. It's still unresolved and has cost me a lot of time and credits over the weekend. I suspect the gravity of the problem isn't fully appreciated because it involves permanently lost work.
I consistently encounter this freeze with composer agent mode AND normal mode, and I finally managed to capture a session of it failing completely. I left the recording running because this issue occurs about every 20 minutes. Once a thread freezes, it's lost for good and all progress is gone.
I'm a developer, so I've recorded a detailed video that may be the best repro you'll get. In the video, you'll see:
Initial Frozen Thread: I start off with an already-frozen thread to show how it fully hangs. In that case, I had to restart to get past it.
Regular Workflow: I continue working in Composer Mode (no Agent Mode here) for about 20 minutes. During that time, everything functions but occasionally struggles -- typical whack-a-mole issues while coding with LLM tools... but composer isn't doing great here either, with numerous apply mistakes.
New Freeze: At the end, I get another freeze that kills the thread entirely, causing me to lose my work again.
It's very frustrating and undermines productivity and creativity. The user experience becomes especially poor if you're left waiting indefinitely for a response that never comes. Although it was reported over a month ago and was said to be fixed, the issue seems to have re-emerged almost immediately. For reference, it doesn't appear specific to Anthropic: 4o and o1-mini also exhibit this behavior. From my perspective, it might be tied to the "shadow indexing" aspect of the client.
Please escalate or prioritize this bug; it's a critical issue. It might be related to my particular development toolset, but I'm not doing anything out of the ordinary: just using Jest test tools and ESLint. I've also opened DevTools at the end of the recording, which might shed more light on what's happening.
Thank you for looking into this. I'm happy to share any additional details or testing insights you need to help diagnose and solve the problem... though it has wasted quite a bit of my time.
Hey, not saying there isn't an issue here, but it seems your inputs to the composer are very long, which is causing it to take a long time to reply. As a safe rule of thumb, if you send off a prompt and it takes over 60 seconds to reply, then there's probably an issue here, but 5-10+ seconds is not uncommon when you provide a massive message, or a lot of context to sift through.
When working with large messages like this, I'd recommend routinely starting a new composer session to ensure the message history doesn't get too long, as this contributes heavily to the speed at which the LLM replies.
The original bug reported in this post was for prompts that never responded and just hung forever (hours, if the user let them). We believe this is fixed, so if there is an issue, it's not the one described in this post.
@danperks not sure if you watched all the way to the end, but the starting prompt was hanging forever and the last post hung forever -- these aren't 5-10 second delays, I'm familiar with those; these are forever bugs. I've tried leaving it for hours, and it still happens on the three machines I use it on frequently... I use long posts like this in the other AI IDE and it never freezes. These seem like timeouts calling tools. I'm not sure if bugs are being conflated, but these are the same problems I've been experiencing for over a month.
@danperks another key observation: the long posts that have locked/frozen forever start immediately if you restart Cursor, with no delay. Consistently. That's the startup time, very quick, and you can see it clearly twice in the video. This is 100% not the time taken for the model to respond; I'm extremely familiar with Sonnet response times at long context lengths. I've also had freezes on the first post, even when it's short -- context length is a red herring here. I've personally spoken to five other devs who have experienced this more than ten times.
Hey, admittedly I wasn't able to watch the video all the way through -- thanks for pointing this out.
Would it be possible for you to get us a request ID when you find one of these chats hanging?
It's pretty simple, but the guide to do this is here:
The caveat is that you will need to have Privacy Mode disabled before you attempt to make the request that hangs forever, as we have very limited visibility into requests made with Privacy Mode turned on (hence the point of it!).
With this, I can see where the request is getting to and why it is hanging!