Anthropic off-shoring as a security violation

Does anyone know how to prevent my Cursor AI session from executing on a server farm outside the US? If I were a direct Anthropic customer, I could opt out, but I don't see how to do that through Cursor AI. Is there a Preferences setting somewhere that will absolutely prevent this from happening?

If not, I may have to stop using Claude for coding before August 16.

Hey, thanks for the query.

Our only blanket statement is that we never host, or use services hosted, in China. While the bulk of our own infrastructure, and that of the third parties we work with, is in the US, many providers also run latency-driven deployments in other locations outside the US.

Our full breakdown of providers and rough geographical deployments can be found here:

However, with Privacy Mode enabled, we guarantee that your code, conversations, and prompts are never stored, shared, or trained on by us or our trusted third-party providers, regardless of the model you use within Cursor.

This really matters because any dual-use technology covered by CUI, ITAR, or MTCR controls would require US State Department export approval before being processed overseas. Violations carry legal consequences that can include blacklisting from government contracts, stiff fines, and even jail time.