Yeah, I just wanted to mention that you should let us select these models specifically, not just “Auto”.
Also, there is this model called MiniMax-M1 that seems underrated. I hope Cursor is having a look at it. It’s pretty cheap and good.
Based on testing, you’re wrong. These models (all three listed in this thread) do well at different levels, so each would be used for different tasks. The Qwen models are great at agentic work, with enough context window for anything coding-related in Agent, MAX, and AUTO modes; MiniMax is great too but much slower (due to thinking), although smart, so it’s well suited to codebase-wide searches and discussions.
The Coder and Instruct variants should both be added to Agent / MAX / AUTO modes, as should minimax-m1.
No, you can’t: a) last time I checked, Cursor disabled Agent mode for OpenRouter models; b) going into Settings to toggle the OpenAI key on and off is not practical.
You can. I used Qwen Coder via OpenRouter yesterday with Agent mode, and it worked perfectly: reading files, using tools, etc.
You’re right that it doesn’t always work, or with all models, when using external API keys.
I agree that the more supported models, the better! In my testing, though, at least in my codebases, K2 succeeded in 9 of my 13 test methodologies and Q3C in only 4 of 13. It works better in Cline/Roo/Kilo than in native VSC, IMO. Your coding style may yield totally different results, as always.
Well, thanks for the heads-up; it does work with “qwen/qwen3-coder:free”.
(Switching requires disabling the Cursor models so that model verification picks the one from OpenRouter, so it’s a dozen clicks every time.)
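For anyone who wants to try the same model outside Cursor first, here’s a minimal sketch of calling that OpenRouter model ID through the OpenAI-compatible chat-completions endpoint. The prompt text and the `send` helper are just illustrations; you’d need your own OpenRouter API key.

```python
import json
import os
import urllib.request

# The request body for OpenRouter's OpenAI-compatible endpoint,
# using the exact model ID mentioned above.
payload = {
    "model": "qwen/qwen3-coder:free",
    "messages": [{"role": "user", "content": "Write a function that reverses a string."}],
}

def send(api_key: str) -> dict:
    """Send the payload to OpenRouter and return the parsed JSON response."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a real key):
# print(send(os.environ["OPENROUTER_API_KEY"])["choices"][0]["message"]["content"])
```

If this works from a plain script but not inside Cursor’s Agent mode, the limitation is on Cursor’s side rather than the model’s.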
Why only Pro+?
The pricing is roughly similar to GPT-4.1.
It should be accessible to everyone.
BS. Maybe it can be useful for some small changes, but for those, Gemini can do the same (and better) for a really small price. At the end of the day, everyone who takes coding seriously will pick Claude as the model. It’s no accident that Claude Code is exploding in popularity.
Adding it doesn’t matter; they’ll connect using the worst method
Then don’t use it. Your ‘maybe’ tells me you haven’t used it yet. I have, and it does work well enough. As for Gemini: yes, it’s great for coding in Agent/MAX mode with full token access, but MiniMax remains 10x cheaper and could be used with its full 1M-token context for discussions, ideation, and other tasks that need large context, like documentation. In any case, having more models can only be better for us, and leveraging OSS ones lets us do more while saving requests on proprietary models for the difficult tasks.
Thanks for your input though.
Also, temperature=0.7, top_p=0.8, top_k=20, repetition_penalty=1.05 is the official recommendation for code generation.
I’m wondering whether Cursor will follow these settings.
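For reference, here is how those recommended settings would look in an OpenRouter-style chat-completions payload. Note that `top_k` and `repetition_penalty` are extensions beyond the core OpenAI API, so whether any given provider actually honors them is an assumption, and the model ID below is illustrative.

```python
# The sampling settings quoted above, bundled for reuse.
qwen_coder_sampling = {
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,              # not part of the core OpenAI API
    "repetition_penalty": 1.05,  # not part of the core OpenAI API
}

# Merge them into a request body; providers that don't support the
# extension parameters may silently ignore or reject them.
payload = {
    "model": "qwen/qwen3-coder",
    "messages": [{"role": "user", "content": "Refactor this function for clarity."}],
    **qwen_coder_sampling,
}
```

Since Cursor doesn’t expose per-model sampling controls, there’s no way to confirm from the outside what it actually sends.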
this just makes sense
I was wondering if we’re going to have the Qwen3-Coder-480B-A35B-Instruct model implemented soon? I heard it has really good agentic capabilities, so it would be really great to be able to use it inside Cursor.
Charles.
On par with Claude 4 Sonnet; it should be added to Cursor, but it requires a bit of configuration (using it as a custom model with OpenRouter doesn’t work that well natively in Cursor, hence this request).
Yes, I tested Qwen3-Coder and the new Qwen3-235B-2507 thinking model, and they provide a solid foundation and a reasonable context window, which technically could be augmented up to 1 million tokens. Their API cost is very manageable and should provide a good foundation. This should also be considered as a possible replacement for Auto mode for any subscription user.

I’m getting the feeling that the model-agnosticism approach is breaking apart, and that we should instead transition to a model-router-based approach with highly customizable context engineering. An alternative would be to give users the ability to create their own model routing rules, with the option to override Cursor’s system prompt completely. Additionally, there could be a social aspect to this whole approach, allowing users to publish, share, discuss, and copy other users’ workflows, model routing, and custom agents. This would be a nice addition and would promote community building.
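To make the routing idea concrete, here’s a purely hypothetical sketch of what user-defined routing rules could look like. None of these rules, thresholds, or model assignments reflect an actual Cursor feature; they just illustrate “first matching rule wins, with a fallback.”

```python
# Hypothetical user-defined routing table: (predicate, model) pairs,
# evaluated in order; the first matching predicate picks the model.
ROUTES = [
    (lambda req: req["context_tokens"] > 200_000, "minimax/minimax-m1"),
    (lambda req: req["task"] == "agent",          "qwen/qwen3-coder"),
    (lambda req: req["task"] == "discussion",     "qwen/qwen3-235b-a22b-2507"),
]
DEFAULT_MODEL = "anthropic/claude-sonnet-4"  # fallback for unmatched requests

def route(req: dict) -> str:
    """Return the model ID chosen by the first matching rule."""
    for predicate, model in ROUTES:
        if predicate(req):
            return model
    return DEFAULT_MODEL

# Example: a huge-context request goes to the long-context model,
# a small agent task goes to the coder model.
print(route({"context_tokens": 500_000, "task": "agent"}))
print(route({"context_tokens": 4_000, "task": "agent"}))
```

Shareable rule sets like this would also be the natural unit for the community-sharing idea: a routing table is just data that users could publish and copy.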
Please add the Qwen3 Coder API integration to the Cursor app version.
I originally started using Cursor because it put me ahead of the competition by giving me affordable access to the latest top models for coding.
The AI space restructures itself completely every 3 months, with a new top coding model taking over every month and a half, while it takes Cursor a month to add any given model.
If you can’t give me access to the top models for coding I will simply go elsewhere. There’s no shortage of other services trying to provide what you originally provided.
I like your UX, but I’m not gonna keep paying to get quota-blocked halfway through the month every month just to use inferior models.
People paying for AI coding tools want the cutting edge. If you can’t provide it people are going to move on.