Any chance of Cursor adding support for the new SOTA Llama 4 with its 10M-token context window?
Cheers.
I’ve been trying to use the Groq API for the last 15 minutes, and Cursor just won’t verify the Llama 4 model I added to the list. It keeps using the gemini-2.5-flash-thinking model to test the API, even though I very clearly disabled it in the models list. Obviously, this fails because Gemini 2.5 Flash Thinking is not hosted on Groq.
I’m not sure if this is a bug or a mistake on my part. Please lmk your thoughts.
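To rule out a Groq-side problem, you can hit Groq's OpenAI-compatible endpoint directly, outside Cursor. A minimal sketch (the model ID here is an assumption — check Groq's `GET /models` listing for the exact string, and substitute your own key):

```python
import json
import os
import urllib.request

# Groq exposes an OpenAI-compatible API at this base URL (per their docs).
BASE_URL = "https://api.groq.com/openai/v1"
# Assumed model ID -- verify against Groq's model list before using.
MODEL = "meta-llama/llama-4-scout-17b-16e-instruct"

def build_chat_request(prompt: str) -> dict:
    """Build the JSON body for an OpenAI-style chat completion request."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 32,
    }

payload = build_chat_request("Say hello in one word.")
print(json.dumps(payload, indent=2))

# Only send the request if a key is actually configured.
if os.environ.get("GROQ_API_KEY"):
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())
```

If this call succeeds but Cursor's "Verify" button still fails, the problem is on Cursor's side (e.g. it probing with a model that isn't hosted on Groq), not with your key or the endpoint.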
You can deselect Gemini models from the model list, and it will work fine. Unfortunately, it looks like Cursor doesn’t support external models in agent mode, which is a bit disappointing.
I have deselected all other models, but it still shows up in the model selector under the chat.
We need Agent mode for open models. I’d pay a monthly fee for this.
If you can’t wait, you can use the free tier of OpenRouter.
Hey, it looks like this is related to the recent changes on our backend. I’ve already informed our team, and I hope we can resolve it soon.
Llama 4 doesn’t have much hype for coding right now; it’d be better to support DeepSeek V3.1 as soon as possible.
Maybe I should have been clearer. What I meant was for Cursor to offer it via their own API abstractions (i.e., as part of the Pro plan), like they do for DeepSeek et al. It’d be nice to have the largest-parameter Llama 3.2/4 variants available.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.