Error: Your Conversation is too long

This problem needs to be fixed ASAP.

Same here. I've been getting this since yesterday. Please fix it.

Hey, it’s possible that your chat is overloaded with context. Have you tried creating a new one? Or is this happening in a relatively new chat? If so, you might be attaching too many files or folders. It’s also possible that you have multiple MCP servers connected. If you’re using rules, that could also be affecting it.

Shutting down all MCP servers will reduce this frequency, but this doesn’t address the underlying issue. After the version update, the frequency of these messages has clearly increased significantly.

I just realized the “auto” model might be defaulting to gpt-5. I’m switching back to claude-4-sonnet to see if this stops happening so frequently.

I get this error constantly now after the update. I assumed it was because GPT-5 was using up all the tokens or whatever, so I deselected GPT-5 from the models, but it still happens. Now I'm trying different models to see what's causing it, but it's ridiculous. It has completely ruined the workflow I had, and I'm looking for an alternative now. I literally can't complete a single prompt without this error.

Thanks everyone for the info, we’ll look into it.

Great, now I am getting this too. Cursor has become especially unusable overnight. If it’s not a terminal hang, it’s this message.

It seems like the error is meant to refer to the user's message length, since it suggests reducing the length of your messages. But I am getting the issue midway through receiving a response from the LLM. Maybe it's as simple as the IDE counting the LLM's response characters toward the user's character input limit? If that's the case, under normal circumstances I couldn't imagine ever hitting the error, since if I had to type everything the LLM generates even for a simple prompt, my fingers would fall off first. So perhaps there's an intended character limit, but it's being miscounted at the moment.
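If that miscounting theory is right, the bug might look something like this toy check. To be clear, this is purely an illustration of the hypothesis, not Cursor's code; the function names and the 20,000-character limit are invented:

```python
# Invented per-message input limit, for illustration only.
MAX_USER_CHARS = 20_000

def over_limit_buggy(user_msgs, llm_msgs):
    # Hypothesized bug: the LLM's own responses are summed into the
    # user's input budget, so a long model response trips the
    # "conversation too long" check even though the user typed little.
    return sum(map(len, user_msgs + llm_msgs)) > MAX_USER_CHARS

def over_limit_fixed(user_msgs, llm_msgs):
    # Only the user's own input should count toward the input limit.
    return sum(map(len, user_msgs)) > MAX_USER_CHARS

user = ["please refactor this function"]  # a short, ordinary prompt
llm = ["x" * 25_000]                      # one long model response
```

With those inputs, the buggy check fires while the fixed one does not, which matches the symptom of the error appearing mid-response.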

Same here in Auto mode.

Me too! I am using it with a memory bank and the Auto model. I hadn't even finished planning in PLAN mode before the alert popped up. I had only replied three times, with no more than 100 words each time. The update is crazy.

How did this pass internal testing, @cursor?

Version: 1.4.3
VSCode Version: 1.99.3
Commit: e50823e9ded15fddfd743c7122b4724130c25df0
Date: 2025-08-08T17:34:53.060Z (1 day ago)
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.6.0

Hello smart world, I'm a newbie, but I'm reading each report trying to find out why all of a sudden my conversation is too long, when previously I ran way longer and more complex stuff and it was fine. My messages are quite simple and short, but no matter the length, it cuts me off with the same ■■■■. Is this a bug that will get solved, or is Cursor just mad at me?

@MTurner did that help?

It's the same here, on a Mac. Since updating, the agent stops quite often, leading to lots of bugs and repeated tasks.

I thought it was a problem with my computer’s storage. This issue would occur after several conversations were initiated, and it had nothing to do with the length of the context.

Having the same issue
Posted here - Cursor v1.4 - Release Discussions - #53 by James_Barker

It seems this is by design.

Auto mode does not count towards your usage on the Ultra plan and WAS very good at getting the job done. So this new limitation forces us to select a model, which does count towards usage. I believe this change is by design so Cursor doesn't have to revise its pricing. Again.

This is definitely a bug. Close and restart cursor, start brand new chat, remove ALL context attachments, give a simple one-liner prompt, and the error will occur before the agent and LLM are able to process even the first half of responding to your prompt. Doesn’t seem to matter if you ask it to just create a plan, or make code changes, or are just doing research, etc.

VERY SEVERE BUG.

It is also a progressive one. When this first started happening, it was “occasional” but it happened several times throughout the day. The next day it happened more and more often. I started using MCPs, mainly Linear to do some story work, and the problem became even more pronounced, and I was unable to complete even simple linear tasks in a single chat.

After that, I started closing and restarting Cursor, thinking it might be a memory issue. It was not. The issue persisted and continued to become more and more frequent. It eventually started occurring on EVERY SINGLE brand-new Agent chat, within the first couple of prompts in each chat. Then it started occurring within the first prompt, before any real work had been done.

At the current state, I can’t even get past the first few back and forth interactions between agent and llm, before the error appears. That is definitely not normal.

I am wondering if it is possible to downgrade back to 1.3.9 without any issues. I know that Cursor stores a lot of local data/metadata in a local database (I discovered just how much is stored there about a week ago, when I moved to a new M4 and tried to figure out a way to export all my "Saved Memories" from my old Mac and PC to the new Mac… there is a LOT of data stored locally!). I am just wondering if there might be issues with that stored data if I try to roll back to Cursor 1.3.9.

I have to figure out something. I’m dead in the water here, totally unable to use the agent, and I need to get this working today… If anyone has any tips or tricks to downgrade, I’d love to hear em. Thanks!
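Not a downgrade tip exactly, but given how much lives in that local database, a cautious first step before any rollback is backing the data up. A minimal Python sketch; the commented path is an assumption based on the default Electron data location on macOS, so verify it on your machine before relying on it:

```python
import shutil
import time
from pathlib import Path

def backup_dir(src: Path, dest_root: Path) -> Path:
    """Copy src into dest_root under a timestamped name and return the copy.

    Refuses nothing and overwrites nothing: copytree fails if the
    destination already exists, which is what we want for a backup.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_root / f"{src.name}-backup-{stamp}"
    shutil.copytree(src, dest)
    return dest

# Typical usage on macOS (the Cursor data path is an assumption):
# backup_dir(Path.home() / "Library/Application Support/Cursor",
#            Path.home() / "Desktop")
```

With a copy safely on disk, a failed rollback at worst costs you the reinstall time, not your saved memories and chat history.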

@drjjw I honestly do not think this is by design. It has all the markers of a leak-type bug. I don't think it is just a memory leak; it seems to be a persistent leak of some kind, as it persists across different loads of the app.

But this issue is so debilitating and severe that it is preventing normal use of the app entirely. That is bad for Cursor's business model, not good. I'm pretty sure it's a bug. Hopefully they will find a way to resolve it soon.

@MTurner @chuckatrain I had a ton of problems with GPT-5 when I first started using it (which, I think, was probably after I upgraded to 1.4.x). I then switched to Auto, which at first was using Claude Sonnet but then seemed to use other models. I thought it was GPT-5, based on some of the output characteristics; however, one of the Cursor reps stated in some thread that GPT-5 is not included in Auto right now…

I also found some posts (maybe on LinkedIn) saying gpt-5-fast was working well for them, so I decided to give it a try. Since switching to a single model, I have not run into the "conversation too long" error. YET. I just started fresh here, and I may still hit it within the next half hour or so. However, it was happening pretty much immediately before, and I've been able to get through several prompts in this first chat (no other chats or chat tabs yet in this session), which I haven't been able to do since yesterday.

So maybe the issue is specifically related to Auto model use. I have been trying to use Auto, since it seems to give you better overall usage out of the Ultra and Pro+ plans than using a single model. If this issue only affects Auto, then I wonder if there is some problem with Auto's model selection: maybe a new bug that does not update the context window size when the agent selects a model. If so, that would still be a fairly severe bug. If a model that cannot handle the chat's current context or conversation length gets chosen mid-conversation, that undermines using Auto to avoid overage charges when you blow through the model-specific requests.
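If that theory holds, the failure mode might look like this toy router. Everything here is invented for illustration (the model names, the limits, and the `AutoRouter` class are not Cursor internals):

```python
# Assumed context windows, in tokens, for two made-up models.
CONTEXT_LIMITS = {"large-model": 200_000, "compact-model": 60_000}

class AutoRouter:
    """Toy model router illustrating the hypothesized Auto-mode bug."""

    def route(self, conversation_tokens: int) -> str:
        # Hypothesized bug: mid-chat, Auto picks a smaller-window model
        # without checking the existing conversation length first, so a
        # chat that fit fine a moment ago suddenly "is too long".
        chosen = "compact-model"
        if conversation_tokens > CONTEXT_LIMITS[chosen]:
            raise RuntimeError("Your Conversation is too long")
        return chosen

    def route_fixed(self, conversation_tokens: int) -> str:
        # A fix would only consider models whose window can actually
        # hold the ongoing conversation.
        viable = [m for m, lim in CONTEXT_LIMITS.items()
                  if lim >= conversation_tokens]
        if not viable:
            raise RuntimeError("Your Conversation is too long")
        return max(viable, key=CONTEXT_LIMITS.get)
```

A 100k-token chat errors out under `route` but lands on the larger model under `route_fixed`, which would explain why pinning a single large model makes the error disappear.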