Been suffering constant hangs with “Generating…” all week, but today has been pretty solid for the past 4 hours, no hangs yet.
I have literally been spending 80% of my day trying to get Cursor to work like it did when I first started using it two weeks ago. It's like it's not even the same program. It is CONSTANTLY stuck, whether I'm in Composer or an extension, and it is constantly making mistakes to the point where it's basically unusable, like you said. So ■■■■■■■ frustrating.
Btw, the session above was started before you deployed your fix Wednesday, but the console stuff I posted was done after you deployed.
Hey @michaeljpento!
Sorry to hear you’ve been struggling. Have you started any new sessions in the last few hours and seen them get stuck?
If so, can you follow the steps to ensure the devs are able to look specifically into your case?
- Temporarily switch off ‘Privacy’ mode if you have it on, otherwise they get almost no information about your requests at all
- Enter a prompt and observe a ‘stuck’ generation attempt
- In the editor, press Cmd+Shift+P (mac) or Ctrl+Shift+P (PC) for the command drop-down, and type Report AI Action. From there you can get the request ID for your prompt attempt
- Share the request ID that failed and which LLM model you were using
That will give them the information they need to look into any other problems you might be having.
If you’re finding that models like Claude are getting confused, that’ll be something else, unrelated. Keeping the LLMs on track as your codebase grows beyond the trivial is its own art, but there are lots of good posts about how to achieve that. You might find the Idiot’s Guide To Bigger Projects a useful place to start.
Do let us know how you get on with the stuck requests, I know the dev team is really keen to ensure this is completely resolved.
Really glad to hear this seems to be working well for folks now!
Thank you all so much for your patience while the team’s been working on this, and for your continued updates; it’s much appreciated. Extra special thanks to the folks who were able to share their request IDs, as that’s really the most critical piece for troubleshooting.
If you do see any subsequent stuck sessions, please do give us a shout!
It was working earlier but has started happening again…
Are you able to follow the steps to share your request IDs, so the team can follow up?
The Cursor team should be able to re-enable Anthropic/Sonnet API for slow calls to agents without needing you to disclose your IP etc.
Whatever is going on seems to be simply causing slow requests to default to the small model.
Composer barfs on me every once in a while but I don’t mind. I figure I’ve just fed it too much.
It’s an amazing product. Keep it up.
Same problem here now. I never had issues before, but since this morning it hasn’t generated anything.
Hi @MaxCurrent, not sure if you meant to reply to a different thread, as it sounds like you’re talking about a separate issue. If you’re having problems with model changes (rather than no response to prompts), I’d recommend searching the forum for existing topics and if necessary starting your own. That way your query won’t get lost in the conversation.
No, I’m talking about the OP’s experience.
Their post was not simply about a lack of response. That was only part of the problem:
They were having to re-educate the assistant on the project, or fix issues caused by the assistant not adhering to instructions: needing to be reminded, making assumptions, creating files that already exist, ignoring the documentation, etc.
This is consistent with what I was seeing when it was diverting to cursor-small (though at last check it’s calling Sonnet correctly for agent mode).
Hi @Skyfire,
Sorry to hear you’re having trouble today. I know the dev team are actively engaged in hunting any and all causes for this right now.
It’s a really tricky issue to reproduce as it seems to be affecting only a limited number of users, so the Cursor folks are trying to get hold of specific examples of requests that didn’t go through.
A lot of users have ‘Privacy’ mode on, which is fine (me too!), but unfortunately that means the team will see absolutely no details about the request, so it’s almost impossible to troubleshoot.
Since you’re seeing it at the moment, it would be really valuable if you could share the details of the failed request.
Thanks for your patience, I’ll continue to share whatever I hear from the Cursor team.
Ah okay then that’s great, thanks for sharing! Glad to hear you’re seeing the right responses.
On that note, I’ve only ever seen model diversion happen in the past when there’s an actual outage at Anthropic/OpenAI. So if you ever see anything odd about your responses in the future, do raise a topic with details, as the Cursor team will be keen to investigate. Thanks!
Yes, unfortunately 0.44 is so bad that I’ve found the only solution is to revert to 0.43, which I’ve been using for the past month. Sad to see it hasn’t been fixed even after all that time.
I think the Cursor team should add a notification flag that tells the user when they have reached the limit of the context window, instead of Composer assuming and starting to hallucinate. If I know I have reached the context limit, I can ask Composer for a summary of our conversation and where we currently are in development, then take that prompt to a new Composer session, where we start a fresh chat with a fresh context window.
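In the meantime, you can roughly track this yourself. Cursor doesn't expose its context accounting, so the sketch below is a minimal, hypothetical approximation: it uses the common "1 token ≈ 4 characters" rule of thumb, and the `CONTEXT_LIMIT_TOKENS` and `WARN_THRESHOLD` constants are assumptions, not documented Cursor or model values.

```python
# Hedged sketch: approximate when a chat is nearing a context-window limit.
# The limit and threshold below are assumed values for illustration only;
# the real limit depends on the model and on Cursor's own accounting.
CONTEXT_LIMIT_TOKENS = 200_000  # assumption: model-dependent, not a documented value
WARN_THRESHOLD = 0.8            # assumption: warn at 80% of the limit

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def near_context_limit(messages: list[str]) -> bool:
    """Return True when the running conversation is close to the assumed limit."""
    total = sum(estimate_tokens(m) for m in messages)
    return total >= CONTEXT_LIMIT_TOKENS * WARN_THRESHOLD

# Example: a short chat is nowhere near the limit.
print(near_context_limit(["hello world"] * 100))  # False
```

When this returns True, that would be the moment to ask for a summary and carry it into a fresh session, as described above.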
Hi all, we might have a fix for the ‘generating…’ bug on our backend, can anyone confirm if they are still facing this?
You may have to start a new composer to ensure we’re working with a clean slate, then if you still see this, do let us know!
Just now I’m stuck on “Slow request, get fast access here…” for several minutes, in my existing Composer session.
Is it possible to export an entire composer chat to a text file?
I wish it told me my number in the queue like it did in the past. I’m assuming it’s completely stuck though.
edit: it finally responded; might have just been normal delay due to high demand.
I’ll let you know if I get forever stuck on “…” again.