"Your conversation is too long. Please try creating a new conversation or shortening your messages"

Nope. I’m running 0.44.8 on Mac.

This feature was introduced a long time ago.

Yes. It’s fixed.

No it’s not, I am still getting the same error.

still happens in 0.44.9

Ah, took a nice break over Xmas and thought I’d get back into things and test again. Updated, which fixed one of my issues, but I’m still getting the “Conversation too long” error? For something that was supposed to take a couple of days, it seems like it’s either been missed or there are multiple issues? Or won’t this apply to older chats?

Edit:

After sending multiple messages, the “conversation too long” message seems to have cleared!

Version: 0.44.9
VSCode Version: 1.93.1
Commit: 316e524257c2ea23b755332b0a72c50cf23e1b00
Date: 2024-12-26T21:58:59.149Z
Electron: 30.5.1
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 24.2.0

@amanrs

In my case I no longer get the error, but in practice it’s as if it still happens, perhaps not as often: it loses context, as if we had started a new composer. I’ve also noticed that as this happens and the composer history grows, CPU and resource consumption get so high that Cursor practically grinds to a halt, and the only fix is starting a new composer.

I don’t know if it’s related, but in practice (resource consumption aside) it’s as if “conversation too long” happens without displaying the message, while letting you continue in the same composer.

That’s actually your warning.

When CPU usage gets high, ask the AI to update an agentcontext.md file that will brief the next agent on all the details of the current task, and cross-link to other relevant md files. Boot up the new composer, feed it the context file, then quiz it on the parts of the linked files that you need it to pay attention to, and within 2-3 exchanges you have a fully functional agent again.
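
Roughly, the file I have it maintain looks something like this (just a sketch; your section names and contents will differ):

```markdown
# agentcontext.md

## Current task
What we’re building right now, and what’s done vs. still TODO.

## Key decisions
- Constraints the next agent must not violate (naming, file structure, etc.)

## Linked files
- docs/architecture.md: module layout
- docs/progress.md: what’s shipped vs. in flight
```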

This has been my golden ticket the last few days.

The suggestion about switching to a new conversation while getting the LLM to capture the details of the previous one seems handy. What prompt do you give it when you want it to capture as much useful information as possible, @isarmstrong?

That being said, I think there’s maybe a bit of confusion in this thread between that error message (which I think should never happen) and simply having a long conversation in a Composer session.

I might be wrong, but my understanding is this:

  • You can have as long a Composer/Chat session as you like
  • Cursor will try to make “smart” decisions about what gets sent to the LLM (which includes what parts of your codebase and the conversation are included). This is necessary to fit your exchange into the limit of the LLM’s context, because many codebases are way larger than that.
  • The Cursor algorithms should make sure it never sends more to the LLM than it can handle.
  • At the moment, that’s not working perfectly, and under some mystery circumstances, the size of the message that gets sent is too large for the LLM (Claude, GPT-4o or whatever).
  • You get the “too long” error as a result.
  • Because the LLM is stateless and sees each new submission as a new ‘conversation’ we get the slightly unintuitive wording in the error message.

So it’s not that the Chat/Composer session is too long; it’s that the “condensed” version of prompt+context sent in that particular submission exceeded some size limit, which comes down to Cursor’s attempt to pack in the relevant data overreaching. That will be the bug they’re aiming to fix, I believe.

I mean, this is all surmise and conjecture and I could be way off base, but from my understanding of the main problem Cursor solves (packing a ton of info into a modest message to an LLM each time you submit), it seems like that may be what’s happening here.
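
To make that concrete, here’s a toy sketch of the kind of packing step I’m imagining. To be clear, this is not Cursor’s actual code; the function and its crude “tokenizer” are made up for illustration:

```python
# Toy sketch of context packing under a token budget: rank candidate
# chunks, greedily pack until the model's limit, then hard-check the
# final size before sending.

def rough_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer.
    return len(text.split())

def pack_context(prompt: str, chunks: list[str], limit: int) -> str:
    budget = limit - rough_tokens(prompt)
    picked = []
    for chunk in chunks:  # assume chunks arrive sorted by relevance
        cost = rough_tokens(chunk)
        if cost <= budget:
            picked.append(chunk)
            budget -= cost
    message = "\n".join(picked + [prompt])
    # If this final check is skipped, or the token estimate undercounts,
    # the LLM rejects the request: that would be the "too long" error.
    if rough_tokens(message) > limit:
        raise ValueError("conversation is too long for the model")
    return message
```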

TL;DR:

I think: Your Composer session isn’t too long. The particular set of data most recently sent to the LLM is (and that’s not something you can control, hence the need for a bugfix).

Is the “conversation too long” error only happening in YOLO mode, or is it happening for everyone who’s just using the normal agent mode too?

For me, I created an llm_context file that basically tells the agent it’s not to change file names or modules, and that for anything it’s thinking about building, it should check the directory for it first. And then I have a docs folder that completely outlines my progress, project structure, end goal, and a few other technical build structures and ideas.

It helps, but it’s still very painful to see the “conversation too long” message. Things also get a little diffuse when you get the tool call message; it seems the agent gets a little lost.

Not just YOLO. I only enabled YOLO for the first time today, but I was getting Too Long errors yesterday.

And again today :confused:

Since yesterday I have had the same issue. No model is usable anymore.

This is ridiculous. I’ve now gotten to the point where I’m prepared for it: typing starts to get slow shortly before the composer dies. I’ve been building a project.md file that I have the composer add to periodically to capture context. When I have to start a new composer, I have it review that file first, and then we have a long conversation so it can regain context. But I still get fundamental issues, like it creating new files and routes because it didn’t know we already had them. I’ve lost at least two to three days on this in the last two weeks. Where’s the fix?

I feel you. I’ve listened to quite a few podcasts from the creators, and they swear it doesn’t get… less smart the longer the convo goes. But they should interview people like me who are complete idiots when it comes to code. I say that because overly intelligent folks tend to know how to use this better and may not come across the same errors, but I think it for sure gets jumbled up 3/4 of the way into a long convo, and it’s definitely not as sharp as it is in the beginning.

Hi @danperks, when is this supposed to be fixed? I encountered the issue today and had to start over from scratch in a new composer.

If the composer is creating new files or routes that already exist, it’s probably not able to find the correct files for context. Are you making sure that your project.md has the file names added to it?

Something else you can do is, at the end of a session, have the AI write more condensed summaries into a project-status.md so you’re consuming fewer tokens when feeding context. Also maintain a project-context.md (or something similar) that simply has file paths with feature comments. You can then point the AI to it whenever it’s getting confused or trying to re-create existing files or documents. (You can also add a rule in .cursorrules to have it check that file before doing any operations.)
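
As a rough example (the file names and entries here are just illustrative):

```markdown
# project-context.md

- src/routes/auth.ts: login/logout endpoints (complete, do not recreate)
- src/routes/billing.ts: payment webhooks (in progress)
- docs/project-status.md: condensed session summaries live here
```

And the matching rule can be a single line:

```
# .cursorrules (excerpt)
Before creating any file or route, check project-context.md for an existing one.
```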

LLMs eventually hit a context window limit, so the important thing is to periodically have the LLM refactor the context into smaller packages, kind of like compressed save states, so that you free up more tokens for additional context.
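
The “save state” step can be as simple as a prompt like this at the end of a session (the wording is just an example):

```
Condense everything from this session into project-status.md: completed
work, open TODOs, and files touched. Keep it short, and prefer file paths
over prose so a fresh composer can load it cheaply.
```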

Yep, doing all of that, actually. I keep the project.md file up to date, and it has a live file structure section. I usually package everything up after a few hours of work, have the composer write a summary, and start a new one. The whole process of getting a new composer up to speed takes about 10 minutes.