How does Claude 3 Opus in Cursor compare with Opus on the Claude.ai web UI in context and output size?

Since we’re currently limited to 10 Claude 3 Opus requests a day, and slow mode after those 10 very quickly becomes a 30min+ wait per response, I was thinking about subscribing to Claude Opus on Anthropic directly for $20/month to handle anything beyond those first 10 requests each day.

How do the context size and output response size compare between the two? Are they basically the same, or does the subscription on Anthropic offer more? I’m noticing the responses from Opus inside Cursor seem extra small and limited compared to GPT-4, yet you keep hearing about how Opus can easily output hundreds of lines of code in one go. That never seems to be the case with Opus in Cursor.

And if the response sizes are the same, how do they compare with using a Claude Opus API key instead? Are people getting these larger responses only through the API?

What do you guys think is the best alternative for when you use up your Cursor Opus quota?

  1. Buying a Claude.ai subscription?
  2. Switching to your own API key until the quota resets?
  3. Getting a Cody subscription as well and splitting questions evenly between them?

Thanks!

Currently, Claude in Cursor doesn’t accept images, unlike the claude.ai interface.
For using Opus with long context, you could try this:

Using the Opus model at claude.ai has limits after a long chat too (it says 10 messages remaining until the next day), and copy-pasting gets tedious. I also found that Claude.ai forgets when the files were updated, so you have to continuously paste updated files and tell it: “this is the current version of the files”. The responses also get slow after you reach around 100,000 tokens.

But slow responses will also be a problem when using the Claude API in Cursor, and I think you may be rate-limited on the API as well; IIRC you need to wait a month and spend a certain amount on API usage before the limits increase.
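One thing worth noting if you do go the API route: when you call the API yourself, you also set the output cap per request, which may be why people report longer responses over the API than in Cursor. A minimal sketch with the official `anthropic` Python SDK (the prompt is a placeholder; 4096 is Opus’s output token ceiling as far as I know):

```python
import anthropic

# Reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=4096,  # the caller decides the output cap; an IDE may request less
    messages=[
        {
            "role": "user",
            "content": "Here is the current version of my script:\n"
                       "<paste script here>\n"
                       "Please debug it and return the fixed version.",
        }
    ],
)
print(message.content[0].text)
```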

I’ve tried the Cody paid plan with Opus as well, but didn’t find it very capable compared to what Cursor or Claude.ai already offer. I guess they are also limiting the context window.

After 10 Claude uses a day you aren’t limited to slow mode in Cursor, though: you can enable optional billing above that and pay Cursor extra for each request, which comes to around $0.10 per additional request.

If you aren’t going to use your own API key with Claude Opus in Cursor’s long-context chat, then the Claude.ai subscription may be better: every long-context chat request in Cursor may cost you around $3, while with Claude.ai you can probably get more usage for a flat $20. The catch is that you’ll have to copy-paste everything yourself. That may not always be bad, since we aren’t sure how good Cursor currently is at finding the right context from the codebase in long-context chat, and having a lot of context isn’t always beneficial if the relevant data gets lost in it.
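To sanity-check that ~$3 figure: at Anthropic’s published Opus API rates ($15 per million input tokens, $75 per million output tokens), a request that sends a near-full long context comes out to about that. A rough back-of-the-envelope, where the token counts are my assumptions, not measured values:

```python
# Claude 3 Opus API pricing (USD per million tokens).
INPUT_PRICE = 15.00
OUTPUT_PRICE = 75.00

input_tokens = 200_000  # assumed: a near-full long-context request
output_tokens = 2_000   # assumed: a typical code-heavy answer

cost = input_tokens / 1e6 * INPUT_PRICE + output_tokens / 1e6 * OUTPUT_PRICE
print(f"~${cost:.2f} per request")  # ~$3.15
```

So, if that estimate is in the right ballpark, a $20 Claude.ai subscription pays for itself after roughly half a dozen long-context requests.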

However, after a single attempt I found that long-context chat with Opus probably beats the normal chat, just because of the sheer amount of data that gets sent. In a large codebase this can be a real benefit: Claude Opus sees more examples from your project and implements things closer to the practices you already use.

I suggest trying everything and finding out for yourself though, because these things are still very unexplored and experimental, so there is no one-size-fits-all solution.

I’ve noticed that Claude Opus in Cursor will consistently give me back “updated” or “refactored” code that is literally identical to the existing code. I point this out in the next prompt and it often does it again in the next response. Is anyone else having this issue? It’s infuriating.

Yes, I experience the same occasionally.

I wonder if Cursor somehow minimizes the context size or something :thinking:

Just today I was trying to make my ~230-line Python script work, and neither Claude Opus nor GPT-4 in Cursor managed to find anything wrong (they just suggested adding debugging, etc.). I tried many times, both in the existing chat and in a new blank chat.

I then tried Claude Opus elsewhere, just pasted the script, and gave a simple explanation of the problem, asking it to debug it.

It came straight back with “Here’s the debugged version of your script with the issue fixed:” and the script fixed… I wonder why Claude Opus in Cursor couldn’t even find the problem :man_shrugging:

Could be related to temperature and other parameters. Some people elsewhere report that a temperature of 0.3 works a lot better for tough-to-debug code. But we can’t set the temperature in Cursor; keeping it fixed supposedly makes for greater product reliability.
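If anyone wants to test the low-temperature theory, it’s easy with your own API key, since the parameter is exposed there. A small sketch (the 0.3 value is just what people report, not a verified optimum):

```python
import anthropic

client = anthropic.Anthropic()  # uses ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=4096,
    temperature=0.3,  # lower = more deterministic; reportedly better for debugging
    messages=[{"role": "user", "content": "Debug this script:\n<paste script here>"}],
)
print(response.content[0].text)
```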
