Claude-3-opus on sourcegraph/cody is $10/month instead of 10 cents per request?

I’m confused, what’s going on?

it’s actually $9/month

Good point!

it’s not relevant relative to Cursor

Being overcharged is not relevant?

Cody restricts the context to 7k, I believe.

Oh, I didn’t know that. And Cursor doesn’t restrict the context?

I still couldn’t find what the context window for Claude is. However, it did find improvements for a paper I uploaded, across different sections from top to bottom. Really impressed; the paper was around 20k tokens.

Cody: 7k tokens
Cursor: 10k tokens
Phind: 32k tokens
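As a rough sanity check, a quick script can estimate whether a prompt fits one of these windows. This is just a sketch using the common ~4-characters-per-token heuristic for English text; the tool names and limits mirror the list above, and a real tokenizer (e.g. tiktoken) would give exact counts.

```python
# Rough check of whether a prompt fits a given context window.
# Assumption: ~4 characters per token, a common heuristic for
# English text; real tokenizers give exact counts.

LIMITS = {"cody": 7_000, "cursor": 10_000, "phind": 32_000}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def fits(text: str, tool: str) -> bool:
    """True if the estimated token count fits the tool's window."""
    return estimate_tokens(text) <= LIMITS[tool]

paper = "some document text " * 4_000   # ~76k characters, ~19k tokens
print(estimate_tokens(paper))           # → 19000
print(fits(paper, "cody"), fits(paper, "phind"))  # → False True
```

A ~20k-token paper like the one mentioned above would overflow Cody’s 7k window but fit comfortably in Phind’s 32k.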

What are your sources for this? Most companies seem to be secretive on this stuff.

Cody: they talk about it in their Discord. Cursor: 10k, as reported by those using the API (same for Pro). Phind: it’s stated on their website when you subscribe.

Only companies that have something to hide (like Perplexity) are secretive about it. I guess their context is so small that they don’t want to disclose it.

None of these are openly publishing their context sizes. I’d say this is still secretive.

Cody - I’m on their Discord and can’t see anything.

Cursor - the free plan with an API key is kind of irrelevant because the user is paying. The Pro plan… :man_shrugging:t2: are there any logs on that?

Phind - it says ‘up to’ 32,000 tokens. That could be model-dependent; maybe 32k is for their smaller models only (hence the ‘up to’).

I’d love some solid facts on this.

FYI, Perplexity has a reasonable 30k context length for files with Claude Opus.

Source: Reddit

Then proceeds with a post from some random guy on Reddit…

Right… For Phind it’s 32k; they even warn you when you reach the limit in the VS Code extension, and you can see how many tokens are discarded as you go over the 32,000 limit (the ‘up to’ depends on the custom instructions).
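The over-limit warning described above is easy to picture. Here is a hypothetical sketch of that behavior (not Phind’s actual code; the token count is assumed to come from a real tokenizer):

```python
# Hypothetical sketch of an over-limit warning like the one described
# above; not any extension's real code.

LIMIT = 32_000  # Phind's stated context limit, in tokens

def budget_report(token_count: int, limit: int = LIMIT) -> str:
    """Report headroom, or how many tokens would be discarded."""
    if token_count <= limit:
        return f"OK: {limit - token_count} tokens of headroom"
    return f"Warning: {token_count - limit} tokens will be discarded"

print(budget_report(30_000))  # → OK: 2000 tokens of headroom
print(budget_report(33_500))  # → Warning: 1500 tokens will be discarded
```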

For Cursor, it’s on these forums.

For Cody, they state that “secret” in their docs: Usage and Limits - Sourcegraph docs (7k, not 8k).

I wasn’t saying this was a fact, but FYI there are multiple people in the thread here saying they’re seeing 30k limits. Not random guys: users of Perplexity who are clearly running tests against it to try to understand its context limitations.

Thanks for sharing those details. Interesting to see. Phind is looking like a great option these days. They cover a lot of ground. Going to check them out.

@claidler Context window is not everything. There are other strategies: I saw somewhere that Cursor will do a recap of the conversation before truncating it, to keep the most important info in the context. So far, for my use case, Cursor is the one that gives me the best results. It’s also the only one that supports @Docs, which is better than web search if you want something precise and relevant to a specific version. Perplexity is the one that gives me the poorest results of those four. Codeium is not bad either (I don’t know its context size). But so far Cursor is my favorite by far, with Codeium for autocomplete suggestions. Copilot++ is really bad in Elixir.
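The recap-before-truncating strategy mentioned above can be sketched roughly. This is a guess at the general idea, not Cursor’s actual implementation; `summarize()` is a hypothetical stand-in for an LLM call.

```python
# Sketch of "recap, then truncate" context management. Assumption: this
# reconstructs the strategy described above, not any tool's real code.

def summarize(messages: list[str]) -> str:
    """Hypothetical stand-in: a real system would ask an LLM for a recap."""
    return "RECAP: " + " | ".join(m[:20] for m in messages)

def fit_context(messages: list[str], budget_chars: int) -> list[str]:
    """Keep recent messages verbatim; replace older ones with one recap."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):          # walk newest-first
        if used + len(msg) > budget_chars:
            break
        kept.append(msg)
        used += len(msg)
    kept.reverse()
    dropped = messages[: len(messages) - len(kept)]
    if dropped:
        kept.insert(0, summarize(dropped))  # recap replaces old turns
    return kept
```

With a 120-character budget and three 50-character messages, the oldest message is dropped and replaced by the recap, while the two most recent are kept verbatim, so the most important recent info stays in context.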

Yeah, I agree; for the most part I think it’s best to reduce the context being sent to get a focused answer.

I don’t think any option is good at selecting context automatically, so I prefer to select what code is being sent manually.

Perplexity, I think, is pretty poor at coding - it’s not bad for some things, but threads don’t continue well. I like Cursor, but I’m really enjoying Opus at the moment, so I’m getting the best results using Cody with manual selection of files.

@Docs never worked well for me in practice with Cursor.

Some of the agent-type stuff in Cursor is magical at times. At the end of the day, though, I think anything more than “here are the files, here is the question, give me an answer” is just noise.

Until we get something like Devin working, that is (I still think that’s got some time), but I’m happy with where we are. I don’t want to work at such a high level that I’m basically not doing anything.

@debian3 The point above being: I think none of them are great at context selection, so as long as I can select context, send messages, and get an answer, then context window and model are, for me, the next logical things to look at in terms of where to go.

I love Opus. So much so that Cursor is not even a consideration at this point.

If it’s still relevant: Cody autocomplete still seems like talking to a not-so-smart ape compared to Copilot++.

But it can do some nice things when large contexts are involved, with the right model.