On-demand "Continue with summary" + custom summary prompt

Feature Description:

  • Continue with summary in a new chat by clicking a button or pressing a key binding.

Feature components:

  • Continue with summary button
  • Either reserved file names in .cursor/rules or an additional field in Settings/Rules for the custom prompt

Explanation of the current workflow:
A powerful workflow I use all the time has the following steps:

  1. I use the chat until I feel the performance start to degrade (usually after 5–7 messages).
  2. I send my last message:

write a detailed summary of what you’ve done so far. the summary should be sufficient to continue in a new chat. then update the @Plan.md

(Note: Plan.md is simply a document where the LLM keeps track of the implementation progress.)

  3. I copy the generated summary by hand, prompt a new chat with it, and also include additional context, like so (a rough sketch of scripting this handoff follows below):

Summary from last chat
Your task: Continue the implementation as described in @Plan.md. Start by doing a, b, c.
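
For anyone who wants to approximate this handoff with a script while the button doesn't exist, here is a minimal sketch against the OpenAI Python SDK. The model id, prompt wording, and Plan.md path are placeholder assumptions for illustration, not anything Cursor actually exposes.

```python
# Sketch of the "summarize, then continue in a fresh chat" handoff.
# Assumes the OpenAI Python SDK and an API key in the environment;
# the model id, prompts, and Plan.md path are placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model id

SUMMARY_PROMPT = (
    "Write a detailed summary of what you've done so far. "
    "The summary should be sufficient to continue in a new chat."
)


def summarize(history: list[dict]) -> str:
    """Ask the current (long) chat for a handoff summary."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=history + [{"role": "user", "content": SUMMARY_PROMPT}],
    )
    return resp.choices[0].message.content


def new_chat_seed(summary: str, plan_path: str = "Plan.md") -> list[dict]:
    """Build the opening message of a fresh chat from the summary and the plan."""
    plan = Path(plan_path).read_text()
    return [
        {
            "role": "user",
            "content": (
                f"Summary from last chat:\n{summary}\n\n"
                f"Current plan:\n{plan}\n\n"
                "Your task: continue the implementation as described in the plan."
            ),
        }
    ]
```

A "Continue with summary" button would essentially perform these two steps for you, with the custom summary prompt coming from a rules file or a Settings field instead of the hard-coded string above.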

How this workflow could change with this feature:

  1. Prompt the model to update Plan.md
  2. Click “Continue in new chat”

I thought about that too. I also run into the hallucination limit, depending on the complexity of the task and the amount of detail.

Does the summary help in your case? (Not for me, as the complexity is high and the summary loses details I want.)

For now I've found other users' posts that helped me avoid this issue, but it's still complementary to your suggestion.

I found that project setup is very important. I use extensive documentation in the process.

Cursor rules (always apply)

  • Behavior rules: how the model should behave. I had Grok 3 with Deep Search look into community discussions about the model I use in order to write them; this helps the model avoid its common pitfalls.
  • Informed responses: just explicitly tells the model that it has a bunch of tools and that it had better use them.
  • Project rules: PRD-inspired, so the model understands the overall project structure.

The Plan.md is extremely helpful, and I always keep it up to date by simply asking the model once in a while to mark its progress and update the action items if the list has changed along the way.

And then summaries and a new chat, as described.

With Gemini 2.5 Pro, this has been incredibly productive over the past few days; I made two weeks' worth of progress in that time.

Hope that helps 🙂


Cool, yes, thanks!

I'm not using behavior rules but rather setting a persona, though yours is more detailed and tailored to the model as well.

I also found that mentioning the tools (especially if you use MCP) helps. Once, 3.5 forgot how to use the CLI 🙂 but it was easy to fix.

Will definitely check out the PRD approach, and will have to find time for 2.5 as well.
For now, chugging along smoothly with 3.5.

Hey guys,

based on what @T1000 said:

Let's pair zero-shot or one-shot prompting with strict token limits to minimize unnecessary deviations. For instance, if you're working with Claude 3.7 Sonnet, cap the output tokens to keep responses precise without overcomplicating things, and use lightweight models for initial drafts or exploratory work to save resources and reduce hallucination risks.
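
To make the token-cap idea concrete, here is a small sketch using the Anthropic Python SDK; the model alias, the 512-token cap, and the prompt are arbitrary example values, not settings Cursor exposes.

```python
# Illustration of capping output tokens to keep responses short and focused.
# Assumes the Anthropic Python SDK and an API key in the environment;
# the model alias and the 512-token cap are arbitrary example values.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-latest",  # placeholder model alias
    max_tokens=512,                    # hard cap on output length
    messages=[
        {
            "role": "user",
            "content": "Summarize the changes made so far in at most ten bullet points.",
        }
    ],
)
print(response.content[0].text)
```

A lower cap forces the model to stay terse; for drafts or exploratory questions, the same call with a smaller or cheaper model keeps costs and the hallucination surface down.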


@kleosr agreed.

I'm already reducing prompts to only the necessary details, including rules to avoid 'misunderstandings'.
