The Model problem

First of all, thank you for your work; Cursor has greatly improved my efficiency!

A few days ago, many new models were added, including flash 500k, Opus-200k, and gpt-4o-128k. They were very useful when I was retrieving information from my large codebase. However, today I can no longer see these models, and manually adding them doesn't seem to work either. Is there a reason for this, such as cost? Have you removed them?

Additionally, have you considered adding Codestral? It seems to be very good.


I’m glad you like it!

If you update to the newest version of Cursor, you can find these models in Long Context Chat. You can turn it on in Settings. LMK if this is a good replacement.

We’re testing out Codestral. You can use it in Inline Chat (CMD K) if you like by adding the codestral model. I’ll let you know when it’s supported everywhere else.


How do you add those models? I’ve updated my Cursor but still only see the same models.

You guys should really include in-depth steps with pictures on how to do this and how to add new models.
The documentation is vague.


You just need to click the plus sign to add a model, enter the model name, and you’re done.

How does it know that the model is correct or even exists? And what about my API key: should I input it, or does it link to my Mistral account on its own?

You don’t need to do anything else; all the API details are handled inside Cursor.

No, it doesn’t actually add it. If you open the AI sidebar and select codestral, you get an error.

It’s strange, but it seems to work for me.

I am running Gemma and Deepcoder locally using LM Studio. Is it possible to add these locally running LLMs? I do not want to share my code with online LLMs. Are there any plans for this in the future?

As far as I know, not yet, but you can use this: Ollama / LM Studio Support - #2 by deanrie
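For anyone experimenting in the meantime: LM Studio exposes an OpenAI-compatible HTTP server locally (by default at `http://localhost:1234/v1`), so any client that can send an OpenAI-style chat payload can talk to the currently loaded model without your code leaving the machine. Below is a minimal sketch of building such a request; the endpoint URL reflects LM Studio's default, and the `model` name is a placeholder (LM Studio typically serves whichever model is loaded), so treat both as assumptions rather than anything Cursor-specific.

```python
import json
import urllib.request

# LM Studio's default local server endpoint (assumption: default port 1234).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat-completions request for a local server.

    The model name is a placeholder; LM Studio usually serves whatever
    model is currently loaded, so this field is often ignored.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Explain what this function does, briefly.")
# Sending it (urllib.request.urlopen(req)) only works while LM Studio's
# local server is actually running with a model loaded.
```

This is the same wire format the online providers use, which is why "OpenAI-compatible base URL" settings in various tools can point at a local server instead.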