Currently those two important features mean privacy must be turned off, which means Cursor will train on our code snippets, which can run to thousands or even hundreds of thousands of tokens.
This shouldn't be necessary if all that's actually required is permission for code to be stored on external servers. Please create a mode that decouples training from storage, because the status quo is prohibitive: I can't use major features for work, so I can't justify expensing it. This situation will only get worse as background agents become a more frequently used primitive (and I'm sure other features will pop up).
That said, I understand the need to develop better models, so how about a discount for those who opt into code training?
Thank you for your feature request. Kindly note this was already explained in the forum by the Cursor Team.
For now those features are in Beta, and in order to investigate issues and improve the service they are only available with Privacy Mode turned off, as otherwise it's not possible for the Cursor Team to see the details of issues.
They also mentioned that the plan is to make those features available with Privacy Mode turned on once they are tested enough and come out of Beta.
Please clarify where you saw any statement that Cursor will use that information for training models, as no such claim was made as far as I know.
Come on, don't be coy. It's almost explicit there, and it also came up in an interaction on X I saw with a team member working on background agents (I won't call them out here).
Not just in the highlighted portion, but in the second paragraph as well.