Workaround for lack of memory between chats

I guess this is a combo of showcase, workaround & feature request.

I’ve been hoping that project memory will eventually be retained between chats, especially since the quality of responses degrades as chat length increases.

Basically, I’ve just created a projectContext.md file at the top level of my codebase that describes the project/codebase and any tasks currently being worked on. Essentially it just contains any information I’d want persisted between chats. That way, I can simply @projectContext.md when I start a new chat and the model is up to date on what we’re working on.

Whenever I stop working, or want to start a new chat because length is degrading the responses, I can ask the model to update this file based on where we currently are. Of course, I can also edit the file manually at any time.
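For example, an update prompt along these lines should do the trick (adjust the wording to suit your template):

```
Please update @projectContext.md to reflect where we're at: summarise what we
changed this session, mark any finished tasks as done, add new ongoing tasks,
and update the "Last Updated" date.
```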

I couldn’t find any other posts where someone has done this, so I thought I’d share. I’ve included a very rough template below to get you started. Of course, you can structure this file in whatever way suits you.

I’ve only just started using this myself, so I’d love to hear how it goes for you, or any other related tips for managing project context. Thanks!

# Project Context

## Project Overview

- Overview here

## Key Components

1. Database System etc

## Current State

- Core functionality implemented for etc

## Ongoing Tasks

1. **Task here**: Do interface stuff

## Last Updated

{{ current_date }}

Great idea! A similar concept of ‘canned prompts’ arose in this topic:

It even uses a similar syntax!

I’ve also been pondering a prompt that lets me know when the chat gets to a certain size and creates a summary of key points from the current chat that I can copy and paste into a new chat.
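Something along these lines might work as the canned prompt (untested, adjust to taste):

```
This chat is getting long. Please write a concise handover summary I can paste
into a new chat: the goal we're working towards, key decisions made so far,
files we've touched, and the immediate next step.
```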

There is always that tension, when you are working on something that’s taking longer than expected, between:

- I don’t want to lose all this context
- But I really should start a new chat, because I can tell the LLM is producing poorer results, with less acuity

Most of the time, I think starting a new chat is going to lead to a better resolution in less time. And having an automated summary would help make the jump to the new chat.

I like the idea of writing summaries to a file as well.

As an aside/reference, I’ve linked to this guy’s videos a few times in the past; there is something cool/efficient about the idea of ‘prepping the model’ with pre-prepared, well-organised context for the chat, and he seems to use this approach often in his videos:

https://youtu.be/VrHXjvikBrY

Maybe it’s just a good practice to help yourself get focussed and clear about what you want to achieve beforehand.

Another idea that has popped into my head a few times recently is whether there would be any value in being able to execute actions from the ‘canned prompt’ files I spoke of earlier, or in exposing an ‘event API’ of some sort within Cursor. But that probably leads into a larger conversation about security etc.


Feature idea: .cursorrules to support @ references


Thanks for sharing! Didn’t spot that post.

Actually, that last part you mentioned has me thinking: before Cursor had native Anthropic support, I used the openrouter.ai API to connect to Anthropic models manually with my own API key.

Potentially one could whip together a small app that stores project information and memory, so that when you submit a chat message it passes through the memory-management app on its way to the model via OpenRouter. The model could then update the memory in that app as required. I’d need to take another look at OpenRouter and Cursor to see what sort of API request structure one would be working with.
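To make the pass-through idea concrete, here’s a rough sketch of what such a proxy could look like, assuming Cursor can be pointed at a local server as its API base URL. The memory.md filename, port, and injection approach are all my own guesses, and it ignores streaming entirely:

```python
# Sketch: a local proxy that injects stored project memory into each chat
# request before forwarding it to OpenRouter's OpenAI-compatible endpoint.
import os
from pathlib import Path

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
MEMORY_FILE = Path("memory.md")  # hypothetical persistent project memory
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

@app.post("/v1/chat/completions")
def chat():
    payload = request.get_json()
    # Prepend the stored memory as a system message so every new chat
    # starts with the persisted project context.
    if MEMORY_FILE.exists():
        payload["messages"] = [
            {"role": "system",
             "content": f"Project memory:\n{MEMORY_FILE.read_text()}"}
        ] + payload.get("messages", [])
    upstream = requests.post(
        OPENROUTER_URL,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        timeout=120,
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8000)
```

The trickier half would be the write path: giving the model a way to call back into the app to update memory.md, rather than just reading from it.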

Maybe they’ll implement memory by the time I get something like that working anyway (if it would even work), but it’s fun to ponder ideas and workarounds regardless.
