UPDATE: Pro and API Key users can turn on Claude in Settings > Models.
Old Post: For the time being, if you’re a Pro or Business user, you can add “claude-3-opus” as a custom model in the Settings page and get 10 fast requests per day for free (slow requests are unlimited, but the delay increases exponentially).
We expect to roll out a more permanent solution (including API key users) very soon.
(This has been posted elsewhere, but I wanted to make sure it was visible.)
I have been testing this today against the GPT-4 model in Cursor (whatever that means under the hood these days, I lose track).
This is very encouraging. So far I’ve tested it on a couple of issues that GPT-4 has lately been giving me very general replies to. I ran one example today on both models: GPT-4 gave its usual general reply, while Claude nailed the issue in a very specific way on the first try.
I think it’s an awesome move that Cursor is integrating other models. More of that please!
time to subscribe!
One question though: I have two workspaces, one for my main work and one for a hobby project that is completely different (different languages, frameworks, files, even the goals of the work in each workspace). Does this confuse your RAG feature? That is, does it reindex everything into the same per-user database on your side, or is the index per workspace, so nothing gets reindexed or mixed across workspaces? And a follow-up: how much context does it send to the remote model from the RAG when looking for unique information not found in the model? In my case that would need to be quite a lot of data, so I wonder how it handles that.
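To make what I’m asking concrete, here is a purely hypothetical sketch of the kind of isolation I’m hoping for (this is not Cursor’s actual implementation; the class and method names are all made up): each workspace gets its own index, keyed off the workspace path, so chunks from one project can never leak into retrieval for the other.

```python
import hashlib

class WorkspaceIndexStore:
    """Hypothetical per-workspace chunk store: one index per workspace,
    so retrieval never mixes content across projects."""

    def __init__(self):
        # Maps a stable workspace key to that workspace's indexed chunks.
        self._indexes = {}

    def _key(self, workspace_path: str) -> str:
        # Derive a stable key from the workspace path; distinct paths
        # get distinct keys, so their indexes stay separate.
        return hashlib.sha256(workspace_path.encode("utf-8")).hexdigest()

    def add_chunk(self, workspace_path: str, chunk: str) -> None:
        # Index a chunk under its own workspace only.
        self._indexes.setdefault(self._key(workspace_path), []).append(chunk)

    def chunks(self, workspace_path: str) -> list:
        # Retrieval only ever sees chunks from the requesting workspace.
        return self._indexes.get(self._key(workspace_path), [])


store = WorkspaceIndexStore()
store.add_chunk("/work/main-project", "fn handle_request() { ... }")
store.add_chunk("/home/hobby-project", "def train_model(): ...")

# Each workspace only retrieves its own content.
print(store.chunks("/work/main-project"))   # only the main-project chunk
print(store.chunks("/home/hobby-project"))  # only the hobby-project chunk
```

If indexing instead went into one shared per-user database with no workspace key, the two projects’ chunks could surface in each other’s retrieval, which is exactly what I’d like to avoid.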