Kimi K2 Thinking in Cursor

Does the Cursor team not want to add anything related to Chinese AI?

3 Likes

Well, Kimi K2 Instruct is available in Cursor. They just need to support the new Thinking model for K2.

2 Likes

Among all the top-tier models, we have:

  • Claude 4.5 Sonnet – still the most robust for real software‑engineering tasks.

  • Composer, released recently, is by far the fastest, well ahead of the others, even those that perform well on software-engineering tasks.

  • Haiku is also an alternative with a slightly lower cost, but not by much, and is close to the two above.

  • The new GPT 5.1 Codex High is available for testing, although it remains at the same price point.

Then there is Kimi K2 Thinking, which competes head-to-head with all of them while offering balanced speed. If we consider their partner for running models that are not from the big vendors (i.e., Fireworks), it already delivers more than 100 tokens per second on Kimi K2.

The real differentiator is its pricing: $0.6 per 1 M input tokens and $2.5 per 1 M output tokens, which truly leaves the others far behind in terms of cost.

Even Haiku, often touted as the best cost-benefit option among the above, costs twice as much as Kimi Thinking, while Kimi Thinking scores on par with GPT-5 High as a general LLM and very close to Claude 4.5 Sonnet as a software-engineering assistant, all at a much lower cost.
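To make the price comparison concrete, here is a quick back-of-the-envelope sketch at the per-million-token prices quoted above. The token counts per request are made-up assumptions for illustration:

```python
# Per-request cost at the prices quoted in the post:
# Kimi K2 Thinking at $0.6 per 1M input tokens, $2.5 per 1M output tokens.
def request_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in USD for one request, given per-1M-token prices."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Hypothetical request: 50k input tokens, 5k output tokens.
kimi = request_cost(50_000, 5_000, in_price=0.6, out_price=2.5)
print(f"Kimi K2 Thinking: ${kimi:.4f} per request")  # $0.0425 per request
```

At those rates, even a large agentic request stays in the cents range, which is the whole point of the cost argument.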

@deanrie

3 Likes

"The proof that most users are Vibe Coders is that they don't even know how to add a 'P' and keep begging to be able to add a certain model in here."

+1 (+ :100: ?)

Is this supposed to be saying we can add the Kimi K2 Thinking model? I tried adding a custom model, but it only gave me the option to enter its name and that's it. If you know a way to do it, please share it with us pathetic vibe coders.

It's simple: just use a provider's API key and add an endpoint, and it will work.
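For anyone unsure what "add an API key and endpoint" means in practice: any OpenAI-compatible provider exposes the same chat-completions request shape, so a client only needs a base URL, a key, and a model id. A minimal sketch of that request (the endpoint URL and model id below are illustrative assumptions, not real credentials; check your provider's docs):

```python
import json

def build_chat_request(base_url, model, prompt):
    """Assemble an OpenAI-compatible chat-completions request (not sent)."""
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_chat_request(
    "https://api.fireworks.ai/inference/v1",       # provider endpoint (assumed)
    "accounts/fireworks/models/kimi-k2-thinking",  # model id (assumed)
    "Hello",
)
print(json.dumps(req, indent=2))
```

Overriding the OpenAI base URL in Cursor's settings effectively swaps the `url` part of this request while keeping the same body shape.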

But then we would have to pay that provider directly instead of it being integrated, correct? (I’m not looking to pay more or less, just convenience). Also, will the integration be as smooth as Cursor doing it? Is there some secret sauce they get to add in to make it better at using tools like the @browser, for example, or is that generally going to work?

1 Like

Where do you add the API key and endpoint? In Models, I see "+ Add Custom Model." All it does is let you enter a name for the model. The API key sections underneath it offer the ability to add an Override OpenAI Base URL, an Anthropic API Key, and a Google API Key. There is also an Azure OpenAI and AWS Bedrock section. That's all I have. Nothing about adding an API key for a custom provider or endpoint.

This comment is ret*rded, excuse my language.

There is a huge difference between using a supported model whose parameters have been optimized by Cursor for their IDE, and just slapping in an API key and praying it works (spoiler: it will not work as you expect out of the box; it will just cause frustration and wasted time).

4 Likes

I pay 20 USD per month for 5,000 calls per day. Isn't that good? Another thing: GLM, MiniMax, and Qwen all work super well via API, without loading failures. I can even make a video about it.

As I said… that's the thought of a vibe coder, friend, lol. Because of your comment, I'll make a video soon and post it here. Version 2.0 is more compatible with these models than you might imagine.

1 Like

If it is so easy and obvious, how about sharing your knowledge with us vibe coders?

Where do you add the API key and endpoint? In Models, I see "+ Add Custom Model." All it does is let you enter a name for the model. The API key sections underneath it offer the ability to add an Override OpenAI Base URL, an Anthropic API Key, and a Google API Key. There is also an Azure OpenAI and AWS Bedrock section. That's all I have. Nothing about adding an API key for a custom provider or endpoint.

An obvious theory is that it would kill their business model. Cursor essentially takes a share of token usage: the more tokens users consume, the more profitable Cursor is. That's why they've been pushing so hard on anything that can increase token usage (agents, Max mode, etc.). If it turns out Kimi or other models can achieve 80% of frontier-model performance at 20% of the cost or less, the "commission" to Cursor will shrink significantly.

1 Like

And we are getting Gemini 3 Pro in real time

2 Likes

That’s what I figured.

How? Explain it to us, please; no need for a video.
Because it's not possible from a fresh install of Cursor 2.0:
there's no way to configure the base URL for custom models, and if I try to override the OpenAI base URL, there's an error message saying the model does not work with my current plan (Pro plan).

OK, it worked. I'm able to run Qwen, but I needed to override the OpenAI models, so while Qwen is enabled, none of the OpenAI models work.

Again, none of what you said matters. Obviously I can add API keys to Cursor and use different models; that is not the point. Stop trying to be a white knight defending a company, unless you are a paid shill, in which case please continue.

Anyway, for anyone else reading: when you connect a third-party model to Cursor or any other IDE, it will most likely work, but it will be frustrating and underpowered compared to the baked-in models. The reason is that models added directly by Cursor or other IDEs have their parameters optimized for the IDE's tool usage. If you ask such an LLM to read file x, write to file y, or do anything beyond Q&A, it will either:

  • Fail
  • Tell you it did, when it didn’t
  • Do it but mess up the entire file
  • Ruin your repo
  • Go on an endless loop and waste your credits

This is what really happens; I don't live in a bubble like the user above. Or maybe he only adds models for Q&A, who knows.

As of this moment there is no cheap SOTA model in Cursor, and as another user pointed out, it is not in Cursor's financial interest to provide one, since their 20% cut would take a hit: 20% of $20 a day on Sonnet is better than 20% of $0.20 a day on Kimi K2.

That is the truth; you don't need mental gymnastics to figure it out. It's our job as end users to request what's in our best interest. If they listen, they listen and we win; if not, so be it.

1 Like

sorry

1 Like