Currently, Claude in Cursor doesn’t accept images, unlike the claude.ai interface.
If you want to use Opus with long context, you could try this:
Using the Opus model at claude.ai hits limits after a long chat too (it says something like “10 messages remaining” until the next day), and copy-pasting gets tedious. I also found that Claude.ai loses track of which version of a file is current, so you have to keep pasting updated files and telling it: “this is the current version of the files”. The responses also get slow once you reach around 100,000 tokens.
But slow responses will also be a problem when using the Claude API in Cursor. I think you may be limited in API usage as well, and IIRC you need to wait a month and spend a certain amount on API usage before the rate limits increase.
I’ve tried Cody’s paid plan with Opus as well, but didn’t find it very capable compared to what Cursor or Claude.ai already offer. I’m guessing they also limit the context window.
After 10 Claude uses a day, you aren’t stuck with slow mode in Cursor though: you can enable usage-based billing and pay Cursor separately for each additional request, which comes out to about $0.10 per request.
If you aren’t using your own API key for Claude Opus in Cursor’s long-context chat, the Claude.ai subscription may be the better deal: every long-context chat request in Cursor may cost you around $3, while with Claude.ai you can probably get more use out of just the $20. The trade-off is that you’ll have to copy-paste everything yourself, which isn’t always bad, since it’s unclear how good Cursor currently is at pulling the right context from the codebase in long-context chat, and having a lot of context isn’t always beneficial if the relevant data gets lost in it.
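As a rough back-of-envelope comparison (assuming the ~$3 per long-context request, ~$0.10 per extra fast request, and $20/month figures above are roughly right; these are my estimates, not official prices):

```python
# Rough back-of-envelope cost comparison (assumed prices, not official).
CLAUDE_AI_SUBSCRIPTION = 20.00  # claude.ai monthly subscription, USD
LONG_CONTEXT_REQUEST = 3.00     # estimated cost of one long-context Opus request in Cursor
EXTRA_FAST_REQUEST = 0.10       # estimated cost of one extra fast request in Cursor

# How many Cursor long-context requests add up to one claude.ai subscription?
break_even_long_context = CLAUDE_AI_SUBSCRIPTION / LONG_CONTEXT_REQUEST
print(f"~{break_even_long_context:.0f} long-context requests ≈ one claude.ai subscription")
# -> ~7 long-context requests

# Same for the cheaper per-request fast mode:
break_even_fast = CLAUDE_AI_SUBSCRIPTION / EXTRA_FAST_REQUEST
print(f"~{break_even_fast:.0f} extra fast requests ≈ one claude.ai subscription")
# -> ~200 extra fast requests
```

So roughly 7 long-context requests in Cursor already cost as much as a month of Claude.ai, which is why the subscription can be the better deal unless you really need Cursor’s integration.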
However, after a single attempt, I found that long-context chat with Opus probably beats the normal chat, simply because of the sheer amount of data that gets sent. In a large codebase this can be an advantage, since Claude Opus sees more examples from your project and implements things closer to the practices you already use.
I suggest trying everything and finding out for yourself though, because these things are still very unexplored and experimental, so there is no one-size-fits-all solution.