Qwen3-235B-A22B-Instruct and Minimax-M1

You can just use Claude Code or -insert X-

Ah, you're right.
Last time I used KiloCode,
and its agent mode is so far better than Cursor's agent.

it also has codebase indexing feature

Your opinions should be seen by more people.

1 Like

In my testing it uses significantly more tokens. That's why it is more expensive.

2 Likes

Qwen3-Coder is the first model on par with Claude Sonnet 4 on SWE benchmarks. I tested it out on Windsurf; it's cheap and works well. It would be great to add it to Cursor, as Sonnet 4 is quite expensive.

On OpenRouter it's just $1 per million tokens.

3 Likes

These past few weeks we saw several OSS models get released with capabilities on par with proprietary models: Minimax-M1, Kimi-K2, the Qwen 3 variants (reasoning, instruct, and coder), and more recently GLM-4.5 by z-ai. With the exception of Kimi-K2, none of these models were added to Cursor despite active community demand and new Cursor versions being released. Kimi was added, but not in MAX mode, with a limited context window and lots of bugs in its implementation within Cursor.

We saw through testing that the Qwen variants (especially Coder) work great with Agent mode, with no bugs in Cursor use (via OpenRouter), and capabilities on par with Sonnet, and yet they still weren't added to the available models in Cursor. My POV is that this is a real loss for us as users and takes away from Cursor's usefulness.

Cursor team: what is stopping you from adding these models? They don't take anything away from our current usage; they would actually make us use Cursor more and not consider alternatives. These models are a true value add. Please add them with Agent, MAX, and AUTO support: the models listed are all available via a variety of providers and have been tested left, right, and center.
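For anyone who wants to try Qwen3-Coder while waiting for official Cursor support, OpenRouter exposes an OpenAI-compatible chat-completions endpoint. A minimal sketch of building such a request, assuming the usual OpenRouter base URL and a `qwen/qwen3-coder` model slug (check openrouter.ai for the current values):

```python
# Sketch: chat-completions request for Qwen3-Coder via OpenRouter's
# OpenAI-compatible API. Base URL and model slug are assumptions;
# verify them against OpenRouter's model page before use.
import json
import urllib.request

BASE_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "qwen/qwen3-coder"  # assumed slug


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but don't send) a POST request for one user message."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request("Write a binary search in Python.", "sk-or-...")
print(req.get_method())  # POST
```

Sending the request with `urllib.request.urlopen(req)` returns the standard OpenAI-style JSON response, so any OpenAI-compatible client (or Cursor's custom API-key setting) should work the same way.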

2 Likes

Hi @osho, and thank you for the feature request.

We are regularly considering new models and would like to see even more community feedback. Feel free to share the feature request so others can upvote and express their interest.

I will pass the information to the team.

2 Likes

I have tried Qwen3-Coder; it's quite good. I would really like to see it in Cursor.

4 Likes

Thanks for the reply, condor.
A quick look at posts and feature requests shows there's great demand for Qwen3 Coder. It's a shame the requests are spread across different posts, so the votes are a bit scattered. There are 100+ votes on the forum for Qwen3, and getting close to 100 for the GLM model as well (and Minimax-M1).

Hopefully you'll get to test and implement the models internally. I'm happy to provide my own tests with Agent mode if that helps.

cheers

1 Like

Thank you for linking the topics. I merged them into one thread and created an internal feature request for Qwen3-Instruct/Coder.

Ideally we would want separate threads per model, as that way we can see better which models are sought after. So for any other models, please make separate feature requests. (GLM 4.5 already has a thread.)

2 Likes

Hello Cursor Team,

I hope this message finds you well.

I would like to kindly request the addition of the Qwen3-Coder-480B-A35B-Instruct model to the list of available models on the Cursor platform.

This model is an advanced MoE (Mixture-of-Experts) transformer with 480 billion parameters (activating 35 billion during inference) and is specifically optimized for complex agentic coding tasks, multi-step programming workflows, code generation, debugging, and tool integration. Its large context window (up to 256,000 tokens) and powerful agentic capabilities make it an excellent choice for developers aiming for scalable, efficient, and context-rich coding assistance.
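The "480B-A35B" naming reflects the MoE activation ratio described above: only a fraction of the total weights participate in each forward pass, which is what keeps inference on a 480B-parameter model affordable. A quick back-of-envelope check:

```python
# Back-of-envelope: fraction of Qwen3-Coder-480B-A35B's weights
# that are active per token, per the figures quoted above.
total_params_b = 480   # total parameters, in billions
active_params_b = 35   # parameters activated per forward pass, in billions

active_fraction = active_params_b / total_params_b
print(f"{active_fraction:.1%} of weights active per token")  # → 7.3%
```

So per-token compute is closer to a ~35B dense model than a 480B one, even though the full parameter set must still be held in memory.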

Given Cursor’s focus on delivering cutting-edge AI-assisted programming tools, supporting this model would offer users access to one of the latest state-of-the-art coding AI technologies, enhancing coding productivity and automation capabilities significantly.

Thank You

3 Likes

Hi CursorTeam,
Could you add Qwen3-Coder hosted on Cerebras?
They boast token generation speeds of up to 2,000 tokens/sec.

4 Likes

This is currently live on Windsurf and on Cline. I would love to have it in Cursor.

1 Like

Agreed, this is amazing.

Too bad the votes don’t get merged with the threads … we’d be at 100+ :slight_smile:

2 Likes

The votes got deduplicated from several threads, but we are still tracking them.

2 Likes

Any news regarding Cerebras Qwen3 Coder 480B support? Not working here.

“The model qwen-3-coder-480b does not work with your current plan or api key”

Kimi was added straight away without any votes or high demand.
Somehow for Qwen3, with actual demand and more popularity than Kimi, after 2 weeks it's "we need to count votes and see if it's popular"…

Please stop taking all of your customers for 20-IQ vibe coders; you clearly have your own reasons for not adding it at the moment, and it's clearly not about votes…

Just like you removed thinking models from command+K because of “poor performance in inline chat”…

5 Likes

18 days since feature request…

Not sure why they are sleeping on Qwen3 Coder. It looks similar in quality to K2 (depending; in some tests I saw K2 rather bomb), is faster, and has fewer issues with inference providers. Also, K2 in Cursor fairly often has issues with tool calls, and even breaks like:

I am not really a fan of this merging of specific posts, e.g. "Implement Qwen3-Coder" into "Qwen3-235B-A22B-Instruct and Minimax-M1". Those are entirely different models. Is this just another cheap attempt to hide/suppress/silence/make the forum more "positive"?

Just saw a bench with real use cases (from my understanding, non-trivial codebases and asking agentic IDEs/CLIs to fulfill tasks, even whole features), and Qwen3 Coder ranked rather well.

1 Like