LLM model selection syncs across multiple Cursor instances unexpectedly

When working with multiple Cursor instances, each tied to a different project, changing the selected LLM model in one instance immediately changes the model in the other open instances. This happens even for chats that were already open. This behavior is confusing and seems unintended.

Steps to reproduce

  1. Open Cursor and start a chat in Project A
  2. Select a specific LLM model for the chat (for example, Gemini 2.5 Pro in Max mode)
  3. Open another Cursor window or instance for Project B
  4. In the new instance, start a chat and select a different LLM model (for example, Claude 4 Sonnet or OpenAI o3, also in Max mode)
  5. Go back to the original Cursor window for Project A and observe that the chat's model has changed to the one selected in Project B

Environment

OS: Windows 10 with WSL2 (Ubuntu)
Cursor version: 0.51.1

Models used (always in Max mode)

Gemini 2.5 Pro
Claude 4 Sonnet
OpenAI o3

Impact

This issue interrupts my workflow and creates confusion when working on multiple projects with different LLM needs. I think model selection should be remembered per instance or per chat, not shared across all open Cursor windows.
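To make the request concrete, here is a purely hypothetical sketch (the names, model IDs, and functions below are made up for illustration and are not Cursor's actual implementation) of the difference between a single selection shared by every window and one keyed per workspace or chat:

```typescript
// Illustrative only: observed behavior looks like one shared value,
// while the requested behavior keys the selection by workspace (or chat).

type ModelId = "gemini-2.5-pro" | "claude-4-sonnet" | "o3";

// Observed behavior: one value shared by every open window.
let sharedModel: ModelId = "gemini-2.5-pro";

// Requested behavior: each workspace keeps its own selection.
const modelByWorkspace = new Map<string, ModelId>();

function selectModel(workspace: string, model: ModelId): void {
  modelByWorkspace.set(workspace, model); // affects only this workspace
}

function currentModel(workspace: string): ModelId {
  // Fall back to a shared default if the workspace has no explicit choice.
  return modelByWorkspace.get(workspace) ?? sharedModel;
}

// Example: Project A keeps Gemini while Project B switches to Claude.
selectModel("project-a", "gemini-2.5-pro");
selectModel("project-b", "claude-4-sonnet");
console.log(currentModel("project-a")); // "gemini-2.5-pro"
console.log(currentModel("project-b")); // "claude-4-sonnet"
```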


I came here to make the same request. I'm trying to use multiple instances: one for large-scale refactoring with long Gemini requests, and another for light work where I'd prefer Auto mode for speed. But changing the model in one instance immediately applies it to the other, so I can't do this effectively. It's also unclear what happens to an ongoing long query already sent to Gemini if I change the model from the other window while it's processing: will it stay on Gemini? Without knowing for sure, I'm afraid to touch it.