Hey there!
Are we capped at 10k tokens no matter what we do, whether with Cursor Pro or with our own provided API key? Nothing in this table goes above roughly 11k.
This is my usage from OpenRouter.
Maybe they should make it Cursor Pro exclusive if you want to use more than 10k of context with your own key.
The default context limit for new custom models is around there. I assume you’d want complete control over the context limit?
It would be great to have an option not to limit it.
We sometimes have special cases in our Svelte project. My use case today was generating snippets based on the code style and structure of my codebase (10k+ files) to keep as a reference for later.
I know 10k is a “safe” limit so people don’t spend a couple of dollars at once, but it would be nice to be able to turn it off in cases where you want to make use of the full 200k context, like on Claude 3 Opus.
Aha, found this thread! I had a feeling I’d seen something about it. In my case it’s absolutely critical to feed roughly 30-40k of context, because the model needs the full language reference for Verse. The language is unknown to any current model, so with a short context it will guess at a solution/suggestion instead of actually analyzing the reference and examples and suggesting correct solutions. At the same time, how would I tell Cursor exactly what to attach? Would it be able to “find all relevant chapters in the folder and include them in the context”? Probably not. But at least a larger configurable limit, say 40k, would let more relevant information slip in, and the results would be better.
Yes, we need complete control over the context limit!!!
Quoting only a few files already exceeds the 10k token limit; this limit should be configurable in user settings.
It would be perfect if users could see how many tokens the context uses with Pro subscription models, and could disable the context limit when using their own API keys.
Look at aider or Cline. When working with them, I always know how much context the current dialogue is using, and that makes work much easier.
In Cursor, I constantly have to wonder “will I exceed the context limit?” if I add too much documentation at the initial design stage.
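Until Cursor surfaces this, a rough workaround I use is to count the tokens myself before attaching docs. A minimal sketch with tiktoken (OpenAI’s tokenizer, so it only approximates Claude’s counts, and the file paths are placeholders for whatever you plan to attach):

```python
# Rough estimate of how many tokens a set of docs would add to the context.
# Uses tiktoken's cl100k_base encoding; Claude's tokenizer differs slightly,
# so treat the numbers as a ballpark, not an exact figure.
import pathlib
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(paths):
    total = 0
    for p in paths:
        text = pathlib.Path(p).read_text(encoding="utf-8", errors="ignore")
        n = len(enc.encode(text))
        print(f"{p}: {n} tokens")
        total += n
    return total

# Placeholder paths - replace with the files you intend to include as context.
docs = ["docs/design.md", "docs/api-reference.md"]
print("total:", count_tokens(docs), "tokens")
```

Running this over the documentation before a design session at least tells me whether I’m anywhere near the 10k limit.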