First of all, thank you for your work; Cursor has greatly improved my efficiency!
A few days ago, many new models were added, including flash 500k, Opus-200k, and gpt-4o-128k. They were very useful when I was retrieving information from my large codebase. However, today I can no longer see these models, and manually adding them doesn’t work either. Is this due to cost or some other reason? Have you removed them?
Additionally, have you considered adding Codestral? It seems to be very good.
If you update to the newest version of Cursor, you can find these models in Long Context Chat. You can turn it on in Settings. LMK if this is a good replacement.
We’re testing out Codestral. You can use it in Inline Chat (CMD K) if you like by adding the codestral model. I’ll let you know when it’s supported everywhere else.
How does it know where to look to verify that the model is correct or even exists? And what about my API key: should I enter it myself, or does it link to my Mistral account on its own?
I am running Gemma and Deepcoder locally using LM Studio. Is it possible to add these locally running LLMs? I do not want to share my code with online LLMs. Are there any plans for this in the future?
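For context on the local-LLM question above: LM Studio can serve loaded models over an OpenAI-compatible HTTP API (by default at `http://localhost:1234/v1`), so any client that lets you override the OpenAI base URL can talk to it without sending code to a third party. Here is a minimal stdlib-only Python sketch of such a request; the port and the model id `gemma-2-9b-it` are assumptions, so substitute whatever your LM Studio instance shows:

```python
import json
import urllib.request

# Assumed default LM Studio local server address; no API key is needed
# for a server running on your own machine.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt, model="gemma-2-9b-it"):
    """Build an OpenAI-style chat-completions request for the local server.

    The model name is whatever identifier LM Studio lists for the model
    you have loaded (the one here is a placeholder).
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Explain this function.")
print(req.full_url)  # the local endpoint the request would hit
```

Sending the request with `urllib.request.urlopen(req)` would return an OpenAI-style JSON response when LM Studio's server is running; the sketch above only builds the request, so it runs even without the server up.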