I’m new to Cursor and just started exploring, so apologies if this has been discussed before.
Regarding the “Auto” model selection: is there a built-in way (or planned functionality) to configure a dedicated routing LLM (local or cloud) for prompt analysis, instead of the current heuristic-based “Auto” router?
I’m thinking that for complex prompts, where heuristics might miss some nuances, it could be useful to have a lightweight LLM evaluate task complexity first and then route to the best model.
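To make the idea concrete, here’s a minimal sketch of what I mean by complexity-based routing. Everything here is hypothetical: the model names, thresholds, and scoring are invented for illustration, and the `estimate_complexity` function uses crude lexical signals as a stand-in for an actual call to a small local LLM. This is not an API Cursor exposes.

```python
def estimate_complexity(prompt: str) -> float:
    """Stand-in for a lightweight LLM call: return a 0..1 complexity score.

    A real implementation would ask a small local/cloud model to rate the
    task; here we just count crude lexical signals as a placeholder.
    """
    signals = ("refactor", "architecture", "concurrency", "debug", "migrate")
    hits = sum(word in prompt.lower() for word in signals)
    return min(1.0, 0.2 * hits + min(len(prompt), 2000) / 4000)


def route(prompt: str) -> str:
    """Map the complexity score to a model tier (tier names are placeholders)."""
    score = estimate_complexity(prompt)
    if score >= 0.6:
        return "frontier-model"    # hard, multi-file work
    if score >= 0.3:
        return "mid-tier-model"    # routine edits
    return "fast-cheap-model"      # trivial completions


print(route("rename this variable"))
print(route("refactor the concurrency architecture across modules"))
```

The point isn’t the scoring logic (a real router would be far smarter); it’s that the routing step would be a user-configurable component rather than a fixed internal heuristic.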
P.S. I might be misunderstanding how the Auto/Premium routers work internally, so any clarification would be appreciated if that’s the case.
Auto allows Cursor to select models that balance intelligence, cost efficiency, and reliability. It’s useful for everyday tasks. Premium, on the other hand, guarantees you a frontier model every time.
We don’t currently support plugging in a custom routing LLM, but we do expect Auto to only get better over time in routing to the best model for the task at hand!
Just to make sure I fully understand, a couple of follow-up questions:
1. So currently Auto mode uses an internal heuristic algorithm, and Premium uses that same algorithm but prioritizes frontier/premium models, correct?
2. Are there any plans to let users configure or plug in their own routing algorithm in the future (like a local LLM), or will model routing always remain an internal/locked process controlled by Cursor?