Update: Claude Support

UPDATE: Pro and API Key users can turn on Claude in Settings > Models.

Old Post: For the time being, if you’re a Pro or Business user, you can add “claude-3-opus” as a custom model in the Settings page, and use 10 fast requests per day for free (unlimited slow, but the delay increases exponentially).

We expect to roll out a more permanent solution (including API key users) very soon.

(This has been posted elsewhere, but wanted to make sure it was visible).


Maybe this thread could be pinned so it doesn’t get buried? Could be very useful if it’s used for Claude Support updates.


Whoa, this is pretty huge! Cool to see other models being implemented.

I have been testing this today against the GPT-4 model in Cursor (whatever that means under the hood these days, I lose track).

This is very encouraging. So far I've tested it on a couple of issues that GPT-4 has lately been giving very general replies to. I tried one example today on both models: GPT-4 gave a general reply, while Claude nailed the issue in a very specific way on the first try.

I think it's an awesome move that Cursor is integrating other models. More of that, please!


Nice, thx!

@truell20 FYI, I'm getting quite a few errors today with the new Claude model. It was working this morning, but now I get a "connection failed" message with:

request-ID 34516f02-db1e-4a50-be7b-f625b6bd6b98

It happens when you reach your daily limit. Try again tomorrow and it will work.


Oh ok. What is the daily limit set to? And is there somewhere I can check that as it gets used up?

+1 for a more helpful error message here!

@jasondainter daily limit here


But it says I can use unlimited slow requests… and that wasn't the case: I got error after error. Confused. Is there a limit on slow requests, or are they unlimited as it says?


Will I be able to use claude-3-sonnet/opus from AWS Bedrock, @truell20?
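For context, here is roughly what using Claude 3 through AWS Bedrock looks like outside of Cursor: calling the Anthropic models via the Bedrock runtime API with your own AWS credentials instead of an Anthropic API key. A minimal sketch, assuming boto3 is installed, Bedrock model access is enabled for the account, and the region/model ID shown are just examples:

```python
import json
import boto3

# Illustrative only: invoke Claude 3 Sonnet through the Bedrock runtime API.
# The region and model ID are examples; availability varies per account/region.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": "Explain this stack trace."}],
    }),
)

print(json.loads(response["body"].read())["content"][0]["text"])
```

Whether Cursor will support pointing its Claude integration at Bedrock instead of the Anthropic API is the open question here.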

Time to subscribe! :star_struck:
One question though: I have two workspaces, one for my main work and one for a hobby project that is completely different (different languages, frameworks, files, and even goals). Does that confuse your RAG feature? Does it reindex every time into the same database on your side (per user), or is the index kept per workspace, so nothing is reindexed or mixed across workspaces? :pray: (Plus a question about how much context it sends to the remote model from the RAG when looking for unique information not found in the model. In my case that would need to be quite a lot of data, so I wonder how it handles that?)
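(Not an answer from the Cursor team, just a generic illustration of what the question is getting at: codebase RAG setups typically key the index per workspace and pack the best-matching chunks into the prompt under a fixed token budget. The sketch below is hypothetical and does not reflect Cursor's actual internals; all names are made up.)

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    workspace_id: str  # keying retrieval per workspace keeps projects separate
    path: str
    text: str
    score: float       # similarity of this chunk to the query embedding

def pack_context(chunks: list[Chunk], workspace_id: str, token_budget: int) -> str:
    """Keep only chunks from the active workspace, best-scoring first,
    until a rough (whitespace-based) token budget is used up."""
    selected, used = [], 0
    for c in sorted(chunks, key=lambda c: c.score, reverse=True):
        if c.workspace_id != workspace_id:
            continue  # never mix another workspace's code into the prompt
        cost = len(c.text.split())  # crude token estimate for the sketch
        if used + cost > token_budget:
            break
        selected.append(f"# {c.path}\n{c.text}")
        used += cost
    return "\n\n".join(selected)
```

Under a scheme like this, the amount of context sent is bounded by the budget rather than by the size of the codebase; how Cursor actually handles it is something only the team can answer.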


I have the same question (main work + hobby project). Following for answer.

Seconding this: a better error message would help, and queuing for later should also work (it did not for me a few days ago).