Does the Cursor team not want to add anything related to Chinese AI?
Well, Kimi K2 Instruct is available in cursor. They just need to support the new Thinking model for K2.
Among all the top-tier models, we have:
- Claude 4.5 Sonnet: still the most robust for real software-engineering tasks.
- Composer, released recently, is by far the fastest, leaving the others far behind, even those that perform well in software-engineering tasks.
- Haiku is also an alternative at a slightly lower cost, but not by much, and is close to the two above.
- The new GPT 5.1 Codex High is available for testing, although it remains at the same price point.
Then there is Kimi K2 Thinking, which competes head-to-head with all of them while offering balanced speed. Cursor's partner for running models that are not from the big vendors (i.e., Fireworks) already delivers more than 100 tokens per second on Kimi K2.
The real differentiator is its pricing: $0.60 per 1M input tokens and $2.50 per 1M output tokens, which truly leaves the others far behind in terms of cost.
Even Haiku, often touted as the best cost-benefit option among the above, costs twice as much as Kimi Thinking, while Kimi Thinking scores on par with GPT-5 High as a general LLM and very close to Claude 4.5 Sonnet as a software-engineering assistant, all at a much lower cost.
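To make the cost claim concrete, here is a back-of-envelope sketch of per-request cost at the rates quoted above. The Kimi rates ($0.60/M input, $2.50/M output) come from the post; the Haiku rates are assumed to be roughly double per the "costs twice as much" claim, and the 30k-in/2k-out request size is an illustrative assumption, not a measured figure.

```python
def request_cost(input_toks, output_toks, in_rate, out_rate):
    """Dollar cost of one request at per-million-token rates."""
    return input_toks / 1e6 * in_rate + output_toks / 1e6 * out_rate

# A typical agent turn: 30k tokens in, 2k tokens out (assumed).
kimi = request_cost(30_000, 2_000, 0.60, 2.50)
haiku = request_cost(30_000, 2_000, 1.20, 5.00)  # assumed ~2x Kimi's rates

print(f"Kimi K2 Thinking: ${kimi:.4f} per request")   # $0.0230
print(f"Haiku (assumed):  ${haiku:.4f} per request")  # $0.0460
```

At agent-scale usage (hundreds of such turns a day), that 2x ratio compounds into a meaningful monthly difference.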
"The proof that most users are vibe coders is that they don't even know how to add an API and keep begging to be able to add a certain model in here."
+1
Is this supposed to be saying we can add the Kimi K2 Thinking model? I tried adding a custom model, but it only gave me the option to enter its name and that's it. If you know a way to do it, please share it with us pathetic vibe coders.
It's simple: just use a provider's API key and add an endpoint, and it will work.
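For anyone unsure what "use the API key of a provider and add an endpoint" means in practice: most third-party providers expose an OpenAI-compatible API, so the request shape is the same regardless of vendor. Below is a minimal sketch that builds such a request. The `BASE_URL`, key, and model name are placeholders you would swap for your provider's actual values; this only illustrates the shape, it does not send anything.

```python
import json

BASE_URL = "https://example-provider.com/v1"  # hypothetical provider endpoint
API_KEY = "sk-..."                            # your provider's key (placeholder)

def build_chat_request(model, prompt):
    """Return (url, headers, body) for an OpenAI-compatible chat call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request("kimi-k2-thinking", "hello")
print(url)  # https://example-provider.com/v1/chat/completions
```

In Cursor's settings, the equivalent is pointing the OpenAI base URL override at the provider's `/v1` root and entering that provider's key in the OpenAI key field, then adding the model name as a custom model.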
But then we would have to pay that provider directly instead of it being integrated, correct? (Iām not looking to pay more or less, just convenience). Also, will the integration be as smooth as Cursor doing it? Is there some secret sauce they get to add in to make it better at using tools like the @browser, for example, or is that generally going to work?
Where do you add the api key and endpoint? In models, I see ā+ Add Custom Model.ā All it does is let you enter a name for the model. The API key sections underneath it offer the ability to add an Override OpenAI Base URL, an Anthropic API Key, and a Google API Key. There is also an Azure OpenAI and AWS Bedrock section. Thatās all I have. Nothing about adding an API key for a custom provider or endpoint.
This comment is ret*rded, excuse my language.
There is a huge difference between using a supported model whose parameters were optimized by Cursor for their IDE and just slapping in an API key and praying it works (spoiler: it will not work as you expect out of the box; it will just cause you frustration and wasted time).
I pay 20 USD per month for 5,000 calls per day. Isn't that good? Another thing: GLM, Minimax, and Qwen all work super well via API without loading failures. I can even make a video about it.
As I said, that's the thinking of a vibe coder, friend, lol. Because of your comment, I'll make a video soon and post it here. Version 2.0 is more compatible with these models than you might imagine.
If it is so easy and obvious, how about sharing your knowledge with us vibe coders?
An obvious theory is that it would kill their business model. Cursor essentially takes a share of the token usage, so the more tokens users burn, the more profitable Cursor is. That's why they've been pushing so hard on anything that can increase token usage (agents, Max mode, etc.). If it turns out Kimi or other models can achieve 80% of frontier-model performance at 20% or even lower cost, the "commission" to Cursor will significantly decrease.
And we are getting Gemini 3 Pro in real time
Thatās what I figured.
How? Explain it to us, please. No need for a video.
Because it's not possible from a fresh install of Cursor 2.0.
There's no way to configure the base URL for custom models, and if I try to override the OpenAI base URL, there's an error message saying the model does not work with my current plan (Pro plan).
OK, it worked. I'm able to run Qwen, but I needed to override the OpenAI base URL, so while Qwen is enabled, none of the OpenAI models work.
Again, none of what you said matters. Obviously I can add API keys to Cursor and use different models; that is not the point. Stop trying to be a white knight defending a company, unless you are a paid shill, in which case please continue.
Anyway, for anyone else reading: when you connect a third-party model to Cursor or any other IDE, it will most likely work, but it will be super frustrating and underpowered compared to baked-in models. The reason is that models added directly by Cursor or other IDEs have their parameters optimized for the IDE's tool usage. I.e., if you ask such an LLM to read file x, write to file y, or do anything other than Q&A, it will either:
- Fail
- Tell you it did, when it didnāt
- Do it but mess up the entire file
- Ruin your repo
- Go on an endless loop and waste your credits
This is what really happens. I don't live in a bubble like the user above, or maybe he only adds models for Q&A, who knows.
As of this moment there is no SOTA model that is cheap in Cursor, and as another user pointed out, it is not in Cursor's financial interest to provide us with cheap SOTA models, since their 20% cut would take a hit: 20% of $20 a day on Sonnet is better than 20% of $0.20 on Kimi K2.
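The commission argument above reduces to simple arithmetic: if the platform's margin is a fixed share of token spend, cheaper models shrink the absolute commission even at identical usage. The 20% share and the daily-spend figures below are the ones claimed in this thread, not confirmed numbers.

```python
CUT = 0.20  # claimed platform share of token spend (unverified)

sonnet_daily_spend = 20.00  # $/day on Sonnet, as claimed above
kimi_daily_spend = 0.20     # $/day on Kimi K2, as claimed above

sonnet_commission = sonnet_daily_spend * CUT
kimi_commission = kimi_daily_spend * CUT

print(f"Commission on Sonnet: ${sonnet_commission:.2f}/day")  # $4.00/day
print(f"Commission on Kimi:   ${kimi_commission:.2f}/day")    # $0.04/day
```

Under those assumed numbers the per-user commission drops by a factor of 100, which is the incentive mismatch the post is describing.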
That is the truth; you don't need mental gymnastics to figure it out. It's our job as end users to request what's in our best interest. If they listen, we win; if not, so be it.

