Hi,
I am wondering if it is a good idea to start a new chat for each separate task or to keep using the same chat. What is the best practice?
Thank you.
SP
Not necessarily.
My take: at the moment I do not see any degradation of quality; to the contrary, I see that related tasks are getting solved better.
But… I keep a separate task/todo file for the tasks I run, plus a document describing what each task is about.
This is part of a general, always-included rule that I activate by mentioning !EPIPE in chat; it creates both the description document and the task-list file (a minimal sketch of the resulting files follows the rule block):
### !EPIPE (Execution Instruction Set)
---
#### 1. Sanity Check
* Validate logic, coherence
* Detect contradictions, bias, paradoxes → pause, request clarification
* Break down complex requests → smaller sub-tasks
* **Default to Minimum Viable Product (MVP): simplicity is non-negotiable**
---
#### 2. Pipeline Steps. FOLLOW IN STRICT ORDER.
1. Analyze request → extract *clear, minimal* requirements
2. Research options → favor *low-complexity, high-impact* paths
3. Develop concise solution plan → **MVP > full featureset**, avoid overengineering
4. Save plan → `./docs` (DOC-FILE) → file name starts with `date +"%Y%m%d_%H%M%S"_`
5. Save tasks → `./tasks` (TASK-FILE) → file name starts with `date +"%Y%m%d_%H%M%S"_`
* **Task files must contain**: full path to associated plan file
* **Plan files must contain**: full path to associated task file
* Both paths at top of file for easy cross-referencing and navigation
6. **Implement with Integration-First Testing** → Tests must catch real bugs, not just pass
7. **Tests FAIL when code is broken** → Use real database, real functions, real flows
8. Check off completed tasks → TASK-FILE
9. All tests pass → accept
10. **Before marking DONE → re-run all relevant test suites (Python + TS) → must pass**
11. Use MCP tools when needed
12. Update DOC-FILE → list created/modified files
13. Repeat if scope grows — but **never violate MVP-first rule**
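For concreteness, here is a minimal sketch of what steps 4 and 5 produce if you scripted the file creation yourself instead of letting the agent do it. The helper name, the `slug` argument, and the example texts are my own illustration, not part of the rule:

```python
import datetime
from pathlib import Path

def create_pipeline_files(slug: str, plan_text: str, tasks: list[str]) -> tuple[Path, Path]:
    """Create a timestamped DOC-FILE in ./docs and TASK-FILE in ./tasks,
    each starting with the full path of its counterpart."""
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    doc_file = Path("docs") / f"{stamp}_{slug}_plan.md"
    task_file = Path("tasks") / f"{stamp}_{slug}_tasks.md"
    doc_file.parent.mkdir(exist_ok=True)
    task_file.parent.mkdir(exist_ok=True)

    # DOC-FILE: counterpart path on top, then the plan itself
    doc_file.write_text(f"Task file: {task_file.resolve()}\n\n{plan_text}\n")

    # TASK-FILE: counterpart path on top, then an unchecked checklist
    checklist = "\n".join(f"- [ ] {t}" for t in tasks)
    task_file.write_text(f"Plan file: {doc_file.resolve()}\n\n{checklist}\n")
    return doc_file, task_file

# Hypothetical usage:
# create_pipeline_files("user_auth", "MVP: add login endpoint only.",
#                       ["Add /login route", "Write integration test"])
```

The only point is the cross-referencing: each file opens with the full path of its counterpart, so having either one is enough to find the other.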
The degradation-of-quality argument depends a lot on the particular model used. Some models simply work better (understand the whole context) than others at certain context lengths.
There is no best practice per se, and I don’t really see a big push in Cursor to establish such best practices, although there are some cool community examples.
It also depends a lot on personal preferences. For example, I like keeping the same chat window until it goes over the context limit, because it feels more like a conversation with a coworker than like just using a tool. It helps me better understand my own vision for the feature and the project as a whole, especially if I set the model up for it (via project rules). It's like a rubber duck on steroids, invaluable for a single developer.
I prefer to branch off a portion of the chat, like duplicating it: I do the subtask there, then go back to the main one, and when it asks whether to revert, instead of clicking Revert I click Don't Revert.
That’s actually brilliant, I’m stealing it.
I am not sure I understand you correctly, so let me confirm:
Message 1
Message 2 <—come back here, send a new prompt and choose not to revert
Message 3
Message 4
Is that what you mean? Thanks.
With the new memory option activated, I see that you can start a new chat and it remembers your preferences. We need to change our paradigm, and change our habits and beliefs, to get the most out of this thing.
I always think that the more context there is and the more complex the logic, the easier it is for the AI to hallucinate, so it is necessary to keep a series of related tasks in one dialogue.
I disagree. Memories are a very early alpha, only for marketing purposes called beta.
There is not even an option to manually edit, export, or even copypaste memories. A model, asked to write down all the memories to a file, answers that it’s unable to even read them.
Also, it’s only available with privacy mode off.
That might depend on the model too, just a thought. Claude 4 Sonnet Thinking does a great job; I have seen it find pieces of memory that were very relevant to the subject.
You can ask the AI to remember something for you. The memories are user-editable.
I have asked the AI to remember something, and it did. You can introduce such tidbits as part of any task.
(color is just a theme)
If the AI sees that something should be remembered, it will ask for approval to store it in memory.
Asking the AI for a change every time we want something changed, and hoping it will get it right, is not the same as being able to manually edit memories at will, and in bulk.
Especially when we can't even copy-paste the memories, so if we want to provide the exact new phrasing, we have to rewrite the whole memory.
This discussion makes me think about how much memory we have as humans, how we manage it, and how large a memory the AI must manage to cope with our racing thoughts.
We cannot expect that level of efficiency yet, by any means. For now we must apply some personal hacks to make ends meet.
If the memory grows so large that it costs a fortune to use, it will become useless.
A hack is needed here.
@gustojs I'm testing memories, and in upcoming EAP versions they should be copy/pasteable, even for partial replacements; there are also additional UI updates coming based on my and others' feedback.
I can see how having them fully manually editable (outside the UI) would be practical, though since they are synced at the user level across machines, this may be safer done in the UI.
Yes, as @cocode mentioned, another issue will be the size of memories.
If memories become editable then what’s the difference between memories and rules?
E.g. I asked the AI to remember details from one of my chats and it wrote the following memory:
Cursor Rule files must reside directly in .cursor/rules (no subfolders), be limited in size, limited in number, and contain only critical information to avoid polluting context.
Editable memories are a way to correct any misinterpretation by the AI. It doesn't mean users should put long rules into memories; that would just pollute the context and confuse the AI.