As the title says. Privacy mode is on, yet I can still see Cursor's AI collecting information such as my prompts and a description of my project.
I decrypted the traffic and carefully analyzed all the requests, and I noticed that Cursor's AI summarizes what the entire codebase project is about and sends this information to Cursor's server. It is not just the prompts; a detailed project description is sent as well.
I would appreciate a comment from an admin.
Hi!
On privacy mode, we do not store your data on our servers. However, all requests still go through our servers before hitting OpenAI, because that’s where we build our prompt.
Would you be able to share some more details about the requests you are referring to?
For example, we have an endpoint called getHighLevelFolderDescription. It gets called only if you opt in to having your codebase indexed. The Cursor client includes a few of your files in the request and sends them to the server; the server asks the OpenAI API to summarize what the codebase is about and sends back that response. Regardless of whether you're on privacy mode or not, that summary is not stored on our server. The summary may be sent up with other requests to help give better responses, but if you're on privacy mode, it is never persisted on our servers.
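To make the flow above concrete, here is a minimal sketch in Python. All names (`summarize_codebase`, the stubbed model call) are hypothetical, not Cursor's actual implementation; the point is that a few files go up, a summary comes back, and nothing is retained between calls.

```python
def summarize_codebase(files: dict, ask_llm) -> str:
    """Sketch of the indexing-summary round trip: the client sends a few
    files, the server asks a model to describe the codebase, and the
    response is returned to the client without being stored anywhere."""
    sample = dict(list(files.items())[:3])  # only a few files are included
    prompt = "Summarize what this codebase is about:\n" + "\n".join(
        f"--- {path} ---\n{text}" for path, text in sample.items()
    )
    return ask_llm(prompt)  # response is relayed back, not persisted

# Usage with a stubbed model in place of the OpenAI API:
files = {"main.py": "print('hi')", "README.md": "# Voice demo"}
summary = summarize_codebase(files, lambda prompt: "A small demo project.")
```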
Does this explain what you’re seeing?
I do not remember exactly what the endpoint was; I can check again later. But I remember that inside the POST request was a summary of the project. It specifically said:
'They are trying to build an app that allows them to control Google Analytics through voice'
Why is the summary already generated in the background by AI and sent in a POST request rather than a GET?
Oh. This is probably from our chat summaries, then.
If you have a long chat, we summarize the start of it to make it fit into the prompt. Since we don’t want to incur a latency hit, we compute the summary before you send a request. And since we don’t want to store any of your code (or summaries) on our servers, we store the summary locally, and send it up with every subsequent chat request.
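The mechanism described above can be sketched roughly like this. This is an illustrative model with assumed names (`ChatClient`, `keep_recent`), not Cursor's real client code: the summary is computed ahead of time, kept only on the client, and attached to each later request.

```python
class ChatClient:
    """Sketch of a local chat-summary cache: older messages are folded
    into a summary before the next send (avoiding a latency hit), and
    the summary lives client-side only."""

    def __init__(self, summarize):
        self.summarize = summarize       # e.g. a call out to the model
        self.history = []                # recent messages kept verbatim
        self.local_summary = None        # stored locally, never server-side

    def add_message(self, msg, keep_recent=4):
        self.history.append(msg)
        if len(self.history) > keep_recent:
            # Pre-compute the summary now, before the user's next request.
            head = self.history[:-keep_recent]
            self.history = self.history[-keep_recent:]
            prior = [self.local_summary] if self.local_summary else []
            self.local_summary = self.summarize(prior + head)

    def build_request(self, prompt):
        # The cached summary rides along with every subsequent chat request.
        return {"summary": self.local_summary,
                "recent": list(self.history),
                "prompt": prompt}

# Usage with a stubbed summarizer:
client = ChatClient(lambda msgs: f"summary({len(msgs)})")
for i in range(6):
    client.add_message(f"msg {i}")
req = client.build_request("next question")
```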
All of our server requests are POST requests. For this particular concern, I don't see how a GET request would make any difference.