Complex Context - TIP!

Or: Why Duplicate Chat is the Unsung Hero of Cursor

So you have an awesome idea for a project, and it’s a bit more involved than “Hello World” this time.

You’ve discussed it with the assistant, explained roughly what you’re after, maybe gone back and forth a few times ironing out the details (bonus tip: if you’re not already inviting the agent to “ask pertinent questions” a couple of times before starting the planning and coding, you’re missing a trick).

Now you’re ready to start. What comes next is critical.

Noob Level

Okay thanks now write the code please.

Experienced Level

Okay great, let’s write that up into a comprehensive plan in mydocs/foo.md and then we’ll make a start

I've written that up for you

Great, start with Phase 1 please.

Guru Level

Okay great, let’s write that up into a comprehensive plan in mydocs/foo.md

I've written that up for you

User uses DUPLICATE CHAT to create more than one copy of the current context

In Chat 1, the user says “Okay, let’s start with Phase 1.”
Then, when Phase 1 is complete and tested, the user goes to Chat 2, duplicates it again (ready for Chat 3 later), and then in Chat 2 says

That’s great. We’ve already worked on Phase 1, and you’ll see it’s complete. Let’s continue with Phase 2.

Repeat for Chat 3 and so on.

:exploding_head:

Now every time user (that’s you) works on a new phase, all that rich complex discussion that never made it into the write-up is RIGHT THERE, primed and ready to go.
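For reference, the plan doc itself doesn’t need to be fancy. Here’s a minimal sketch of what a phased mydocs/foo.md might look like (the phases and their contents are purely illustrative, not a format Cursor requires):

```markdown
# Foo: Implementation Plan

## Phase 1 – Data model and storage
- Define the core entities and their relationships
- Set up migrations and seed data
- Done when: unit tests for the model layer pass

## Phase 2 – API endpoints
- CRUD endpoints for each entity
- Input validation and error handling
- Done when: integration tests pass against a test database

## Phase 3 – Frontend wiring
- Connect the UI to the new endpoints
- Done when: the main user flows survive a manual smoke test
```

The key point is that each phase is self-contained enough to be tackled from its own duplicated chat.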

Bonus bonus tip

If you really want to go all-out best-in-class for that million-dollar money-spinner project, borrow the same approach, but duplicate before writing the plan doc. Then have two different big models each write a plan, without seeing their rival’s attempt.

You can even use something like ChatGPT to compare and contrast the plans and choose a winner if you like (from experience at the time of writing, Claude [tested with Opus 4.1] captures more breadth and nuance, but GPT-5 writes more concrete and focused plans). Your mileage may vary.

Then invite your favourite big model to look for discrepancies, do a bit of clarification, then merge them into your super-plan, and use the technique above for each phase of it.

Success guaranteed!*

* Success not actually guaranteed.

15 Likes

By the way, if you’re wondering where Duplicate Chat actually is…

It’s under the three-dot menu at the bottom of your most recent response.

6 Likes

Thanks for the tip. I haven’t considered using it like this before.

2 Likes

Good tip

3 Likes

:rofl: :rofl: :rofl:

thanks

1 Like

Here’s a slightly simplified visual representation of why you might find this worth trying. Your mileage may vary, as always, but don’t knock it until you’ve tried it.

Diagram not in any way to scale. From experience, even working on very complex codebases, the amount of the context taken up by the human-led ‘discussion and planning’ phases is much less than that used by the agent-led coding. This means using the duplicated context approach shouldn’t use up too much of your breathing room.

The above suggestions are based on personal experience and not an official statement from Cursor.

2 Likes

I’ve now used this in a project with 3 phases and it was great!! Just one question: can I use this approach in a project with 5 different, independent phases and run them in parallel?

1 Like

Realistically, Cursor would do all of this for you automatically, but it doesn’t yet.
What I do is use one of the large models’ regular plans, e.g. $20/month for unlimited requests. I talk to that model as much as I want, since it doesn’t cost anything extra. I have it build all the project guidelines and everything I need as a primer, then I take that primer and put it into Cursor.
Any other questions about the project that aren’t code-based (the other model can’t see the code) I ask in its own chat, since it’s a ‘free’ part of the subscription. That way I can figure out the next steps, make another primer (ask it to generate an AI primer to hand to another model to complete some process), and put that into Cursor. This has worked for me: it keeps the context low in Cursor and also saves costs, since even just talking to a model about project planning inside Cursor is expensive, and the free/included models in Cursor are not good at this.
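For what it’s worth, a primer in this sense can be nothing more than a structured markdown file; this outline is just illustrative, not a fixed format:

```markdown
# Project Primer (written externally, pasted into Cursor)

## Goal
One-paragraph summary of what we're building and why.

## Guidelines
- Tech stack and versions to use
- Code style and architecture conventions
- Things explicitly out of scope

## Next steps
1. First concrete task for the Cursor agent
2. Follow-up tasks, in order
```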

1 Like

I would think so, yes. I’ve definitely had it working on entirely independent task phases that way.

The only thing to watch out for is if you have phases that would try to edit the same files – much easier if you don’t run into that. At the very least, you’d have to keep clicking to continue when it finds a file that’s been edited elsewhere.

In principle, though, with enough clear separation between phase tasks, I think you could.

2 Likes

Definitely! The ‘plan externally’ approach is great for greenfield projects where you can do the work without needing to read an existing codebase. Love the smart approach you’ve outlined there for making your dollars go further.

For circumstances similar to that, I’d usually use a browser extension for grabbing the whole chat as Markdown, and then offer up that file alongside the plan document. Seems to give you more of that early reasoning and rationale (why we’re using this technique not that), and decreases the chances of the LLM defaulting to the approaches it saw more frequently in its training.

1 Like

I have been using a plan-based approach for a month or so now. I started out much as you recommended here; more recently, however, it’s morphed into something even more prescribed and precise.

I have named my process PRACT, short for PRACTical, based on this:

=====

P - Plan: Work with the agent to generate an initial plan covering your needs. Get the initial requirements on the ground and in context. It’s best to develop plans in a multi-phase approach.

R - Research & Refine: Work with the agent to refine your plan, and research (usually in another chat tab) any concepts the LLM mentions that you don’t fully grasp or that are new to you. Refine further, until you are fully satisfied with the plan.

--v

A - Actualize & Apply: Get your plan stored in a document or ticketing system (I use Linear stories myself). From the actualized plan or stories, have the agent implement each phase (or, in a ticketing system, each story/substory/task) of the plan.

C - Complete: Once you have implemented a phase (story/tasks), work on completing everything. Have the agent check the actual implementation vs. the phase (story/task), identify discrepancies, and resolve them. Make sure any required testing is done. Lint and format. Commit!

^-- [repeat A-C for each phase (story/task underneath an epic)]

T - Terminate: Once you are done with the entire body of work (full plan or epic), clean things up. Close out any open files, close out any active agent tabs, start a clean, new agent chat in the remaining tab. This ensures that you don’t capture unnecessary or unwanted context once you start the next body of work!

=====

Following PRACT has, for me, resulted in each phase (now mostly stories in Linear) often being completed in a single invocation prompt: “Move story ZYX into in-progress, assign to me, and implement.” The ENTIRE story (or phase, if using a markdown document with a phases approach) will be handled, usually quite fully, by that single prompt. I will then often have a couple of follow-up prompts to check the implementation against the story requirements and fix any discrepancies, then to verify that the entire test suite runs properly. Once I’ve iterated through all the stories under an epic (or tasks under a story, depending on your ticketing/tracking software), I have a few more prompts to clean up (lint, format, etc., applied against @Recent Changes), and finally commit (which I also do with the agent).

This has dramatically ramped up the speed at which I can work and the effectiveness of the agent and LLM, and it has reined in chaotic behavior. It is also important to make sure you still have rules set up to govern code style, software design (principles, patterns, practices, etc.), and architecture, as well as rules to govern linting and formatting (so you don’t end up with things like the agent auto-formatting every file in the repo!), story creation and management, committing, etc.

PRACT has accelerated my flow and greatly reduced the number of mishaps. A few tips for plan/story creation:

  • Scope: Make sure you have the agent create a scope for each plan, or even each phase (epic/story in a tracking system).
  • Requirements and Acceptance Criteria!
  • Verification: Every plan and phase (epic and story) should have verification requirements covering exactly what to test and how (unit, e2e, integration; each as required).
  • Details: It’s good to let the agent add details it thinks are relevant, so long as you verify them: extra code and config details/requirements, expectations for environment variables, documentation requirements (or documentation that may be needed to implement), etc.

These extra details help guide the agent and keep it on track and focused, so it can do just the work that is needed, as fast and effectively as possible. When you do finally instruct the agent to Apply a given phase (or story), there is one more thing to make sure you are doing:

  • ATTACH RELEVANT CONTEXT! While the plan (phase/story) itself will bring a lot of detail and allow the agent to work efficiently, it is still important to attach relevant context. This particularly includes @Docs that you think will be required; and if you can attach relevant code areas, code files, etc., it will reduce the amount of manual searching (grep, git grep/log/show, find, etc.) the agent needs to do to implement the phase or story. So still, always, CONTEXT IS KING! (A rough sketch of a story written to these guidelines follows below.)
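To make that concrete, here’s a minimal sketch of what one story might look like when written to these guidelines (the feature and all details are purely illustrative):

```markdown
## Story: Add CSV export to the reports page

### Scope
Reports page only; no changes to the underlying report queries.

### Requirements
- An "Export CSV" button next to the existing filters
- Export respects the currently applied filters

### Acceptance Criteria
- Downloaded file opens in a spreadsheet app with correct headers
- An empty result set produces a file with headers only

### Verification
- Unit tests for the CSV serializer
- One e2e test: apply a filter, export, assert file contents

### Details
- Reuse the existing date-format helper; no new dependencies
- Environment variables: none required beyond the current setup
```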

Implementation of large bodies of work, with multiple stories underneath a single epic, can now be done in an hour or so, where it used to take a day. Planning takes some time too, depending on exactly what you are doing.

6 Likes

Great write-up, this deserves to be a topic all of its own :+1:

I daresay you’re operating at a more advanced level than the target audience for my ‘quick tip’ here, but I’m very grateful to you for sharing this level of detail about your process!

It’s actually quite similar to the approach I tend to use on larger, more ‘serious’ projects, and once you get into that kind of ticket-centric methodology, you’re dead right about scoping being crucial. And it’s a good shout on focusing with @Docs too.

I deliberately glossed over the research and refinement stage in my post above as I was avoiding complicating the original message, but there’s no doubt that it’s a whole topic by itself. Without refinement, the assumptions we make and the assumptions the LLM makes can be significantly misaligned. So it’ll build you what it guessed you wanted, and it’s rarely quite what you were after. I may well write another post about that soon, to help people get out of the stage of “I prompted it with ‘build me the next PayPal’ so why am I not a multi-millionaire yet?” :slight_smile:

But again, thank you for sharing, you’ve clearly put a lot of time into finding a flow that works well for you, and that’s really great to see!

3 Likes

I’m a relatively new user of Cursor, to be honest. The main thing is, the PRACTical approach came out of just trying to manage the issues I’ve been encountering (mostly over the last two months, since landing a new job that is heavily invested in agentic software development). When I first started, keeping the agent and LLM from going off and doing their own thing was really tough, and I had a number of mishaps and straight-up bulldozed code, which wasted time. This approach came out of trying to figure out how to eliminate (or at least reduce) that waste. I suspect there are levels even more advanced than what I’ve done here, as I still have some waste, and I am an optimizer… so, future improvements are assured.

So, I am still working on this. Something I am finding fairly consistently, now that I’ve started performing a post-implementation “check” of the implementation vs. the story requirements: in most cases, there is at least some discrepancy between the implementation and the actual story requirements. I haven’t yet figured out a solution to this…

Most of the time, it is something MISSED. Most often it’s in the verification section of each story, which covers what kinds of testing to implement and what tests are required. However, sometimes there are missing bits from the main requirements as well. In a couple of cases (for bigger stories), entire chunks of the main requirements were just skipped.

I suspect this has something to do with the non-deterministic nature of LLMs; however, it is still a bit odd, as the instructions in the story are fairly explicit about what needs to be implemented. I find it odd that seemingly EVERY story (or phase from a .md plan file, of which I still had a few) ends up with at the very least some small missed discrepancy, or perhaps a slightly mis-implemented requirement.

So anyway, to account for this, after every implementation completes (which will get 95-98% of every story in one shot!), I now run a check prompt, and if (when) there are discrepancies, I’ll issue one or more prompts to have the agent resolve each.

Strange, but maybe just the non-deterministic nature of LLMs?

Still refining. Still improving. :wink:

4 Likes

Small addition

It would be useful to have the AI agent update a separate file called progress.md, where it records a summary after each step. This way, an agent starting with fresh context will have access to more detailed information about previous steps.
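For instance, each step could append a short block like this (the format and contents are just a suggestion):

```markdown
# progress.md

## Phase 2 – API endpoints (complete)
- Implemented CRUD endpoints for users and reports
- Decision: tightened input validation after Phase 1 feedback
- Known gap: rate limiting deferred to a later phase

## Phase 1 – Data model (complete)
- Entities, migrations, and seed data in place; model tests passing
```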

2 Likes

I literally didn’t know this

I always saw “Duplicate Chat” and wondered what it was.

This just shows how there can be so many nice features in a piece of software, but because they aren’t documented or shown properly, no one uses them.

Thank you, this is very valuable.

2 Likes

Fantastic!!! One more example I have to follow!!! Thank you!!!

2 Likes