I think it may only happen when invoking custom workspace commands (the ones defined in .cursor/commands).
For a specific example: in a previous chat I made some clarification (possibly while using my /update-plan command). Then, when I use the /update-plan command in a new chat, the LLM responds as if my new message contained information from the previous time I used /update-plan.
Both chats are in the same workspace, and this happens frequently.
I think I can reproduce it just by asking the LLM to tell me the content of the message. Here is my update-plan command; the LLM response in the screenshot includes other junk from earlier conversations.
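(For anyone trying to reproduce: the exact wording doesn't seem to matter. A trimmed-down, illustrative command file along these lines, assuming the usual layout where a markdown file in .cursor/commands becomes a slash command named after the file, is enough to surface the leak. This is not my exact /update-plan file.)

```markdown
<!-- .cursor/commands/update-plan.md (illustrative repro command, not the real file) -->
Before doing anything else, quote back the full text of the message you just
received, verbatim, inside a code block.

Then update PLAN.md with any new decisions or clarifications from this
conversation, keeping the existing structure of the file.
```

If the quoted-back text contains material from an earlier chat's /update-plan invocation, the leak has reproduced.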
Hey, I ran into the same issue. In my case, I launched a chat, duplicated it, and then sent different commands to the duplicates in parallel. The context from one of the chats leaked into the others.
Here’s me prompting the original chat (chat A) to do something in /tmp/framer-slideshow-ssr:
I encountered a similar issue, but not across two separate conversations; it happened within a single conversation. After executing a plan build, or after multiple rounds of dialogue, if I restart from the middle of the conversation, the model still seems to remember the earlier inputs or build results. (When restarting a conversation from the middle, the dialogue content after that point should be removed; this issue did not exist in previous versions.)