Kimi works with Agent Mode?
yes it does, but from my testing it errors out a lot — probably 4 out of 5 chats. It needs some fine-tuning
"Implement Kimi-K2 LLM, or Cursor loses over 30% of its users!"
not sure what you mean, but i can use it just fine with the privacy feature (both legacy and the new one)
Thanks for the reminder; the Kimi-K2 LLM provided by Moonshot does not have a privacy mode.
ah yes, they don't use Moonshot as the provider but Fireworks, which is, I believe, a US company they partnered with
Literally how? This isn't even about API use. They can still paywall it behind the $20/mo subscription. This isn't about saving money, it's about focusing the money into labs that they have deals and agreements with under the table. It's perfectly possible to make money. Perhaps less money, but happier users. It comes back around with loyalty and good karma. I am a happy peasant if they hook up all the OpenRouter models automatically through their own servers, and I'll continue to pay for the subscription.
yeah, same thing: after the limits run out, using Cursor stops making any sense. I found myself using GitHub Copilot more (I still have my yearly plan). They at least have GPT-4.1 included; it's not the best thing for coding, but at least it's there without limits. Tried Kimi K2 and it looks really promising.
My experience with Kimi k2 in Cursor has been awful.
Cursor runs Kimi on Fireworks and it crawls compared to Kimi K2 on Groq with Opencode or Cline. I'd tolerate the slowness if the output matched, but the drop in quality, accuracy and "intelligence" is huge. With Opencode or Cline I actually get what Moonshot promises: almost no tool-call screwups, more calls when they're needed and a sane tool-selection chain. In Cursor I keep seeing tool-call errors, too few calls and weak decisions. Ask it to research and spit out a detailed .md and the gap versus Groq + Opencode is night and day.
Meanwhile, o3 works great for me and the pricing still feels generous for devs. But with such an incredible open-source model available, Cursor should invest more time to make it work properly.
DeepSeek R1 and Kimi K2 Instruct should be very cheap to run, yet each one consumes 1 request? What the hell…
they should abandon cursor-small and cursor-fast (which no one uses anymore) and replace them with Kimi K2 Instruct: fast, reliable, powerful, agentic.
why is Kimi unusable in Cursor? It works great on OpenRouter and is cheap AF; it shouldn't time out all the time on Cursor
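For anyone who wants to compare, here's a minimal sketch of hitting Kimi K2 through OpenRouter's OpenAI-compatible chat-completions endpoint. The model slug and env-var name are assumptions — check OpenRouter's model list before using it:

```python
import json
import os
import urllib.request

# OpenRouter exposes an OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
# Model slug is an assumption -- verify it against OpenRouter's model list.
MODEL = "moonshotai/kimi-k2"


def build_payload(prompt: str) -> dict:
    """Build the JSON body for a single-turn chat completion."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask_kimi(prompt: str) -> str:
    """Send the prompt to OpenRouter and return the assistant's reply text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            # Read the key from the environment; never hard-code it.
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape.
    return body["choices"][0]["message"]["content"]
```

If that call is fast and accurate while the same prompt crawls in Cursor, the bottleneck is on Cursor/Fireworks' side rather than the model itself.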
Kimi K2 performance is not good in Cursor. It is very slow; I have to wait a long time for even basic requests. There is also a quality issue: sometimes it just outputs words line by line and cannot progress further. Maybe Fireworks needs to upgrade whatever they are using to host this model.