0.51: "Memories" feature

Hey, you can find it in the Cursor settings under the Rules tab. Open the settings using Cmd/Ctrl + Shift + J.

1 Like

Can you elaborate on why this is an issue? The description text says that it will “Automatically create user rules based on your chats”. These user rules are already stored locally on our device, not with the model or anything like that. So it doesn’t seem like there would be any difference to creating the rules ourselves, especially if we are already creating them by asking the agent to do it (e.g. using /Generate Cursor Rules). That latter approach should be protected by Privacy Mode already.

I’ll echo the comment above: this is feeling like a dark pattern that I fear may be spreading (see background agents).

2 Likes

Just to follow up here, I believe memories are actually stored on our infrastructure, as they need to be indexed and accessed at scale to decide what should make it into each prompt out of what could be 100s of memories!

No dark pattern. Just designed with performance and security from the start :slight_smile:

2 Likes

I am sorry, but for most businesses it will be impossible to use this feature. If you work for a client or for a company, an NDA usually does not allow you to share the code with others.

“For vibe coders that can work. They don’t care and may share their poor code.” However, actual production code must be legally protected.

If you are thinking about making your product better for actual developers, you should provide a memory feature in privacy mode, too.

3 Likes

Tbh I still don’t see the point of indexing anything.

The codebase is supposedly indexed, yet I never see the models actually know anything about it unless I directly provide details via project rules or attach the files to context. And without those details, Cursor doesn’t even know the folder structure (as evidenced by a separate setting for it).

So the “we need to index it” argument is a bit unclear.

I wish there was some kind of write-up on what actually happens when something gets “indexed” by Cursor.

3 Likes

Can you please explain how that’s any different than the code indexing, which can be done with privacy mode on? In fact, it seems safer than code indexing.

Edit: no rhetoric or platitudes. Be specific.

5 Likes

No, indexing works. They most likely also have a knowledge graph, not just simple RAG, since agents can search the codebase using natural-language queries.
But you need to ask the AI to build the context first. Indexing does not mean the whole codebase is in the LLM context all the time; it just helps in search.
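
To make “it just helps in search” concrete, here is a toy sketch (my own illustration with made-up chunk IDs, not Cursor’s actual code) of querying an index: only the top-k matching chunks come back, and only those are even eligible for the prompt.

```python
# Toy stand-in for semantic search over an index (not Cursor's implementation):
# a natural-language query retrieves only the top-k best-matching chunks, so
# "indexed" never means "the whole codebase is in the LLM context".

def score(query: str, chunk: str) -> float:
    """Crude keyword overlap; a real index would use embeddings."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / (len(q) or 1)

def search(query: str, chunks: dict[str, str], k: int = 3) -> list[str]:
    ranked = sorted(chunks, key=lambda cid: score(query, chunks[cid]), reverse=True)
    return ranked[:k]  # only these few snippets go anywhere near the model

chunks = {
    "auth.py:10-40": "login handler verifies credentials and issues a session token",
    "db.py:5-30": "connect opens a connection pool to postgres",
    "auth.py:50-80": "refresh handler rotates and reissues the auth token",
}
print(search("where is the login session token issued", chunks, k=2))
```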

1 Like

The difference is that with memories, both the contents and the index are stored on our infrastructure.

With your code:

Indexing your code allows some of Cursor’s tools and under-the-hood behaviours to have a better implicit understanding of where the most relevant sections of your code live. While there is no manual way to interface with this, it’s this index which helps Cursor decide what makes it into a prompt and what gets cut (note that MAX mode isn’t quite the same).

We can do this with privacy mode because we don’t actually store your raw code, just the indexes that help us navigate through what is sent within each request.
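
For anyone wondering how an index can live server-side without the raw code, here is a minimal sketch of the general shape (my own assumption, not Cursor’s actual pipeline): only an opaque vector plus a location is stored, and the raw text is re-read locally when a chunk is actually needed.

```python
# Minimal sketch (assumptions mine, not Cursor's pipeline): an index entry
# keeps only a content-derived vector and a pointer back to the local file.
# The raw code itself never needs to be stored remotely.

from dataclasses import dataclass

def embed(text: str) -> list[float]:
    """Hypothetical embedding; a real system would call an embedding model."""
    vec = [0.0] * 26  # toy bag-of-letters vector
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

@dataclass
class IndexEntry:
    file_path: str       # where the chunk lives on *your* machine
    start_line: int
    end_line: int
    vector: list[float]  # the only content-derived data stored server-side

def index_chunk(file_path: str, start: int, end: int, text: str) -> IndexEntry:
    # `text` is used once to compute the vector, then discarded.
    return IndexEntry(file_path, start, end, embed(text))

entry = index_chunk("src/auth.py", 10, 40, "def login(user, password): ...")
print(entry.file_path, entry.vector[:3])  # no raw code in the stored entry
```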

With memories:

Both the memories themselves and the indexing of those memories happens on our infrastructure, not on your device. This is because, with a large quantity of memories, we need the index to find the most relevant ones, and when we go to use that index, it has to be applied against every memory you have.

With code, the indexing only applies to the code itself, so if you @ 3 files, that’s all we have to sift through. With memories, 100s of them could exist and need sorting through with the help of the index.

Therefore, it makes the most sense for this process to, at least for the short term, run off-device, to avoid any client-side degradation and allow for much higher limits of memories.
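
In other words (a toy sketch of my own, not Cursor’s implementation), every request has to be scored against every stored memory, and only a small budget of winners makes it into the prompt; with hundreds of memories, that ranking is cheapest to run next to where the memories live.

```python
# Toy sketch of the scale argument (not Cursor's code): all N memories are
# scored against each request, and only `budget` of them reach the prompt.

import heapq

def overlap(query: str, memory: str) -> float:
    """Crude relevance score; a real service would use an embedding index."""
    q, m = set(query.lower().split()), set(memory.lower().split())
    return len(q & m) / (len(q) or 1)

def select_memories(request: str, memories: list[str], budget: int = 5) -> list[str]:
    # Every memory is touched once per request, which is why hundreds of
    # memories make this worth running server-side, near the data.
    return heapq.nlargest(budget, memories, key=lambda m: overlap(request, m))

memories = [
    "prefers tabs over spaces in python files",
    "always run unit tests before committing",
    "database migrations live in the migrations folder",
] * 100  # pretend we have hundreds of memories

print(select_memories("formatting preferences for python files", memories, budget=2))
```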

The Future

Memories are not even released to everyone yet, so they are as new as a feature can be! As such, it’s more than likely that we will see iterations on how this works, both in its effect on your workflows and in its implementation. While I don’t want to guarantee anything, given the high volume of requests I’d be surprised if we didn’t add a privacy mode-compliant implementation for this in the future.

I understand the limitation is not optional, but it has been put in place to ensure the feature’s stability and usefulness for those who can use it, versus making it accessible to more people but not useful enough to be worthwhile!

2 Likes

Starting from version 0.51, window scrolling has started to drop frames and get stuck, which affects the experience. I have dropped back to 0.50 and hope this is fixed in the next version.

Thank you for the explanation, but I feel like some of the details are superfluous. I understand you store the memories, but why? You allude to the number of index chunks, but how is it any less than with code, where you’re searching against an entire codebase? In fact, it’s more. It doesn’t make sense.

1 Like

We definitely need a privacy-mode-compliant implementation of the memories feature, even if it comes with some extra overhead. It’s only fair to give users the choice.

3 Likes

The Memories feature looks very promising. Unfortunately, I can’t try it :cry: My NDA does not allow code sharing.

1 Like

In the Meet Cursor 1.0 video, they say

In the future we imagine cursor learns more and more from your usage of it, and for teams, cursor can learn from what one team member is doing to help another.

Does that mean the concept of a team “memory” is not yet a feature, but you are planning on adding it? Or is it already a feature, and they are just saying that cursor is going to learn more and more from the “memories” feature in general as time passes?

Just to follow up here, I believe memories are actually stored on our infrastructure, as they need to be indexed and accessed at scale to decide what should make it into each prompt out of what could be 100s of memories!

No dark pattern. Just designed with performance and security from the start :slight_smile:

I still don’t understand why it can’t work in private mode. Could you elaborate? Will it ever work in private mode?

1 Like

This is new (however, the dropdown doesn’t differentiate between the privacy modes).

It seems like a step in the right direction but I don’t think the description is sufficient for those who require privacy mode rather than prefer it.

The opposite is true.

In the short term, while users have few, if any, memories, it should be on-device. By the time ‘hundreds of memories’ becomes a problem, there should hopefully be a better solution, or users can be required to allow code storage to add more memories at that time.

Many users will accept potential client-side degradation as a trade-off for privacy, but it should be their choice, not made for them.

Oh, nice, I hadn’t seen that yet.

The docs say:

With Privacy Mode with Storage enabled, your code won’t be used for training by us or any third-party. However, we may store some code data to provide features like Background Agent and other functionality that requires code storage - giving you access to these great tools while keeping your code out of training workflows.

Cursor Team, could you clarify what this entails? More specifically:

  1. What data specifically do you store and how? Is it encrypted at rest?
  2. How long do you keep this data for?
  3. Can we see what data you’re keeping? Giving us control over what is stored would make privacy-sensitive users more keen to try/use this.
  4. What are the “other features” that are enabled when you switch to this mode?

Now this is gone??

I can’t find this dropdown anymore, and in Cursor I can only find a dropdown to enable/disable Privacy mode. Was it changed recently?

I wanted to switch back to full privacy mode and now I’m not even sure what happened here.

My Memories get deleted every time I close Cursor! :( How can I make this persistent, please? @AdminhuDev