Claude with Anthropic Key

“The model claude-3-opus-20240229 does not work with your current plan or api key.”

Not to be dense here, but does this refer to my Cursor Plan?

I’m currently on “Cursor Pro”, and it’s a shiny new Anthropic key.

If it makes a difference, when I click confirm on my key, nothing actually happens…

If I remove the Anthropic key, I can select claude-3-opus and get a response.

Version: 0.30.0
VSCode Version: 1.86.2
Commit: 86ecaf78edc03c1bce9a26eba0f73c70c3606e10
Date: 2024-03-20T20:17:07.957Z
Electron: 27.2.3
ElectronBuildId: undefined
Chromium: 118.0.5993.159
Node.js: 18.17.1
V8: 11.8.172.18-electron.0
OS: Darwin arm64 23.3.0


Could you try again? Just made a change on our end.

Sorry, same behavior. Is there something I need to do first? Restart Cursor or attempt an update?

Curious to know whether the Claude Opus jobs submitted by Cursor are faster than jobs submitted with a custom API key. Are there any optimizations done in the backend, as with the GPT models?

I also had Claude with an Anthropic API key in some kind of messy state and not working.

What I did:

  1. Removed the Claude models by clicking the delete icon
  2. Toggled the Anthropic API key off and then on => the models got added back
  3. (perhaps optional) For some reason I did not get these working before resetting my custom OpenAI base URL to the default. Weirdly, I got an OpenAI server error even when the selected model was from Anthropic. (There is also a quick way to sanity-check the key outside Cursor, sketched below.)
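If clicking confirm seems to do nothing, it can also help to confirm the key itself is valid outside Cursor. A minimal sketch using the official anthropic Python package (the prompt is just a placeholder, and your actual key goes in api_key):

```python
# pip install anthropic
# Quick check that the key is valid and the account actually has access to Opus,
# independent of anything Cursor does with it.
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...")  # paste your key here

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=64,
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.content[0].text)  # if this errors, the key/plan is the problem, not Cursor
```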

Looks like that was the path. Thanks for that, it is now behaving as expected :muscle:


May I ask how your experience with Claude in place of GPT-4 has been? When Cursor works, it has been great, but my biggest frustration with it so far is that it constantly disregards the codebase context (whether I provide a file, directory, class, or the entire codebase) and refuses to acknowledge that it can access those files. I’m wondering if Claude is more performant in that respect?

fyi Cody for VS Code v1.8.0 release

March 6, 2024

For Cody Pro users, Cody now supports the new Claude 3 models Opus and Sonnet for Chat, Code Editing and Commands.

Claude 3 Sonnet is the faster model, but produces answers with a lower level of intelligence.

Claude 3 Opus (recommended) is the most powerful model, providing the highest quality code output and answer quality.

I tried Opus on OpenRouter with 40k context (a short language reference) and it is surprisingly good at understanding an unknown new DSL/programming language (Verse from Epic Games). With the GPT-4 API, though, I am capped at 8k context for now (as a n00b-tier user who still hasn’t spent enough cash to be upgraded to the next tier, where access to GPT-4 32k context will reveal itself… I’m dreaming of the day when I’ll finally grind my level up to 128k access :laughing: [ps it’s ridiculous, ikr] but maybe one day I’ll be able to compare full “in-context” performance with Claude).

Sonnet and Haiku are not as good; Haiku completely misses the point of the provided reference and doesn’t even understand the basic syntax outlined in it. The performance of GPT-4 + RAG (uploaded files) is pretty good as well, but Opus nails it with precision (writing new code based on a completely unknown language, using only the in-context reference) while GPT-4 makes occasional mistakes. I believe RAG search cannot be as good as full in-context data when it comes to new concepts or syntax/facts. The cost with 40–45k tokens is about $0.60 per single request though (with Opus).
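For what it’s worth, that per-request figure roughly matches Opus’ list pricing at the time; a back-of-the-envelope check (the $15/$75 per-million-token rates are an assumption, check current pricing):

```python
# Rough cost estimate for one Opus request with a large in-context reference.
# Assumed pricing: $15 per 1M input tokens, $75 per 1M output tokens.
input_tokens = 40_000   # ~40k-token language reference plus the prompt
output_tokens = 1_000   # a typical code answer

cost = input_tokens * 15 / 1_000_000 + output_tokens * 75 / 1_000_000
print(f"~${cost:.2f} per request")  # ~$0.68
```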


How is it when it comes to completely new concepts in the RAG? Is it able to find all the correct facts from multiple locations in the indexed documentation and codebase in order to answer a specific question or write code? Or will it be provided with only a single chunk of information from all the indexed RAG data?

While experimenting with Cursor, I realized that these tools need further improvement, along the lines of:

understand the request →
→ match the request to all relevant ‘pages’ in the RAG db [I don’t care how it’ll be implemented :sweat_smile: but somehow this stage must locate ALL relevant pieces of information, know where each piece ‘starts’ and ‘ends’ correctly, and extract it for the next model] →
→ after all pages of info are collected across all indexed sources in the RAG db, pass the request and the joined info to the target model →
→ inspect the response and compare it for ‘facts’ against the existing data in the RAG db [runs locally, sending as many API calls to gpt4/sonnet/whatever as needed for the verification process] →
→ if the content is not factual or is wrong, re-phrase the request to the main model one more time, augmented with the error findings [same content as in the 1st request with the RAG db info, but appended with a ‘pay attention to possible mistakes you could make in XYZ and use ABC methods/functions to avoid the pitfall’ instruction], and repeat until the content received from the remote model is high quality and passes the local ‘factfulness/correctness’ verification →
→ PROFIT! :laughing:

WDYT? Isn’t it genius? :star_struck: Although it will cost quite a bit with an Opus/GPT-4 combo for all of the above steps, the quality will be outstanding. That thing could earn $150/hr all by itself; it’ll cover the API costs :joy_cat: :money_with_wings: (A rough code sketch of this loop follows.)
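Roughly, that loop could look like the sketch below. Everything in it (retrieve_all_relevant, generate, verify_against_db, the model names) is hypothetical glue just to make the flow concrete, not an existing Cursor or Anthropic API:

```python
# Hypothetical sketch of the retrieve -> generate -> verify -> retry loop described above.
# retrieve_all_relevant(), generate() and verify_against_db() are placeholders.

MAX_ATTEMPTS = 3

def answer_with_verification(request: str, rag_db) -> str:
    # 1. Locate ALL the relevant pieces of information across the indexed sources.
    pages = retrieve_all_relevant(request, rag_db)
    context = "\n\n".join(pages)

    hints = ""  # error findings get appended on later attempts
    for _ in range(MAX_ATTEMPTS):
        # 2. Pass the request plus the joined info to the main (expensive) model.
        prompt = f"{context}\n\n{request}\n{hints}"
        draft = generate(prompt, model="claude-3-opus-20240229")

        # 3. Compare the draft against the indexed facts with a cheaper model.
        problems = verify_against_db(draft, rag_db, model="claude-3-sonnet-20240229")
        if not problems:
            return draft  # passed the local factfulness/correctness check

        # 4. Re-ask, pointing out the specific pitfalls that were found.
        hints = "Pay attention to possible mistakes in: " + "; ".join(problems)

    return draft  # best effort after MAX_ATTEMPTS
```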


I saw a post you made about using this for UEFN Verse. Do you have a Twitter? I would love to DM you a few questions.