You can just use Claude Code or [insert X]
Ah, you're right.
Last time I used KiloCode,
and its agent mode is so far better than Cursor's agent.
It also has a codebase indexing feature.
Your opinions should be seen by more people.
In my testing it uses significantly more tokens. That's why it is more expensive.
Qwen3-Coder is the first model on par with Claude Sonnet 4 on SWE-bench. I tested it out on Windsurf; it's cheap and works well. It would be great to add it to Cursor, as Sonnet 4 is quite expensive.
On OpenRouter it's just $1 per million tokens.
These past few weeks we saw several OSS models released with capabilities on par with proprietary models: MiniMax-M1, Kimi-K2, the Qwen3 variants (reasoning, instruct, and coder), and more recently GLM-4.5 by Z.ai. With the exception of Kimi-K2, none of these models were added to Cursor despite active community demand and new Cursor versions being released. Kimi was added, but without MAX mode, with a limited context window, and with lots of bugs in its implementation within Cursor.

We saw through testing that the Qwen variants (especially Coder) work great with Agent mode, with no bugs in Cursor use (via OpenRouter), and that they have capabilities on par with Sonnet, yet they still weren't added to the available models in Cursor. My view is that this is a real loss for us as users and takes away from Cursor's usefulness.

Cursor team: what is stopping you from adding these models? They don't take anything away from our current usage; they would actually make us use Cursor more and not consider alternatives. These models are a true value add. Please add them with Agent, MAX, and AUTO support: the models listed are all available via a variety of providers and have been tested left, right, and center.
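For anyone who wants to repeat the OpenRouter test with Agent mode disabled, here is a minimal sketch of the request involved. OpenRouter exposes an OpenAI-compatible chat completions endpoint; the model slug `qwen/qwen3-coder` is my assumption here, so check openrouter.ai/models for the current id before using it.

```python
import json

# OpenRouter's OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(prompt: str, model: str = "qwen/qwen3-coder") -> dict:
    """Build the JSON body for an OpenRouter chat completion call.

    The model slug is an assumption; verify it against openrouter.ai/models.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


# Sending it is a plain POST with an "Authorization: Bearer <key>" header,
# e.g. requests.post(OPENROUTER_URL, json=build_request(...), headers=...).
payload = build_request("Refactor this function to be iterative.")
print(json.dumps(payload, indent=2))
```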
Hi @osho, and thank you for the feature request.
We are regularly considering new models and would like to see even more community feedback. Feel free to share the feature request so others can upvote and express their interest.
I will pass the information to the team.
I have tried Qwen3-Coder; it's quite good. I would really like to see it in Cursor.
Thanks for the reply, condor.
A quick look at posts and feature requests shows there's great demand for Qwen3 Coder. It's a shame the requests are spread across different posts, so the votes are a bit diluted. There are 100+ votes on the forum for Qwen3, and close to 100 for the GLM model as well (and MiniMax-M1).
Hopefully you'll get to test and implement the models internally. Happy to provide my own tests with Agent mode if that helps.
cheers
Thank you for linking the topics. I merged them into one thread and created an internal feature request for Qwen3-Instruct/Coder.
Ideally we'd want separate threads per model, as that way we can better see which models are sought after. So for any other models, please make separate feature requests. (GLM-4.5 already has a thread.)
Hello Cursor Team,
I hope this message finds you well.
I would like to kindly request the addition of the Qwen3-Coder-480B-A35B-Instruct model to the list of available models on the Cursor platform.
This model is an advanced MoE (Mixture-of-Experts) transformer with 480 billion parameters (activating 35 billion during inference) and is specifically optimized for complex agentic coding tasks, multi-step programming workflows, code generation, debugging, and tool integration. Its large context window (up to 256,000 tokens) and powerful agentic capabilities make it an excellent choice for developers aiming for scalable, efficient, and context-rich coding assistance.
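To make the efficiency argument concrete, a quick back-of-envelope using the published figures quoted above (these are the model's stated numbers, not my own measurements):

```python
# MoE inference cost scales with *active* parameters per token, not total.
# Figures are from Qwen3-Coder-480B-A35B's published spec, quoted above.
total_params_b = 480.0   # total parameters, in billions
active_params_b = 35.0   # parameters activated per token, in billions

active_fraction = active_params_b / total_params_b
print(f"~{active_fraction:.1%} of the weights are active per token")
```

So per token, inference touches roughly 7% of the weights, which is why a 480B MoE can be served at dense-35B-like cost.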
Given Cursor's focus on delivering cutting-edge AI-assisted programming tools, supporting this model would offer users access to one of the latest state-of-the-art coding AI technologies, significantly enhancing coding productivity and automation capabilities.
Thank You
Hi CursorTeam,
Could you add Qwen3-Coder hosted on Cerebras?
They boast a token generation speed of up to 2,000 tokens/sec.
This is currently live on Windsurf and on Cline. Would love to have it in Cursor.
Agreed, this is amazing.
Too bad the votes don't get merged with the threads… we'd be at 100+.
The votes got deduplicated from several threads, but we are still tracking them.
Any news regarding Cerebras Qwen3 Coder 480B support? Not working here.
"The model qwen-3-coder-480b does not work with your current plan or api key"
Kimi was added straight away, without any voting or high demand.
Somehow for Qwen3, which has actual demand and is more popular than Kimi, after 2 weeks it's "we need to count votes and see if it's popular"…
Please stop taking all of your customers for 20-IQ vibe coders. You clearly have your own reasons for not adding it at the moment, and it's clearly not about votes…
Just like you removed thinking models from Cmd+K because of "poor performance in inline chat"…
18 days since the feature request…
Not sure why they are sleeping on Qwen3 Coder. It looks similar in quality to K2 (depending; in some tests I saw K2 rather bomb), is faster, and has fewer issues with inference providers. Also, K2 in Cursor fairly often has issues with tool calls, and even breaks like:
I am not really a fan of this "merging" of specific posts, e.g. "Implement Qwen3-Coder" in favor of the post "Qwen3-235B-A22B-Instruct and Minimax-M1". Those are entirely different models. Is this just another cheap attempt to hide/suppress/silence things and make the forum more "positive"?
Just saw a bench of real use cases (from my understanding, non-trivial codebases with agentic IDEs/CLIs asked to fulfill tasks, even whole features), and Qwen3 Coder ranked rather well.

