I believe Sonnet 3.5 Context is maxed to 250K, does this mean that it may not be using our entire code as a context?
Curious how does Cursor work on the context cap
Hi @0xgokuz ,
Does this assist?
https://docs.cursor.com/advanced/models#what-context-window-is-used-for-model-x
What context window is used for model X?
In chat, we limit to around 20,000 tokens at the moment (or less if the model does not support that much context). For cmd-K, we limit to around 10,000 tokens, to balance TTFT and quality. Long-context chat uses the model’s maximum context window.
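To put those limits in perspective, here is a minimal sketch of estimating whether a file fits in those windows. It uses the common rough heuristic of ~4 characters per token for English text and code; the actual tokenizer Cursor and Claude use will differ somewhat, so treat the numbers as ballpark only.

```python
# Rough token estimate using the ~4 characters-per-token heuristic.
# The real tokenizer will differ; this is only for a ballpark check.

CHAT_TOKEN_LIMIT = 20_000   # approximate chat limit quoted above
CMDK_TOKEN_LIMIT = 10_000   # approximate cmd-K limit quoted above

def estimate_tokens(text: str) -> int:
    """Crude estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, limit: int = CHAT_TOKEN_LIMIT) -> bool:
    """True if the text's estimated token count fits under the limit."""
    return estimate_tokens(text) <= limit

# Example: a 100 KB source file is ~25,000 estimated tokens,
# which would exceed the ~20,000-token chat window.
sample = "x" * 100_000
print(estimate_tokens(sample))   # 25000
print(fits_in_context(sample))   # False
```

By this rough measure, anything much beyond ~80 KB of source would not fit in a single chat context, which is why long-context chat (using the model's full window) exists as a separate mode.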
Is there a way to increase this context window? That’s so small no?
Hi @0xgokuz ,
The only information I have on this topic is within that quote above.
There is no way, that I know of, to increase the context window to more than that specified above.
Hmm, is this context window sufficient for most of you guys?
I do notice the models seem “more stupid” here than when I use Claude.ai directly
Hi @0xgokuz ,
The Cursor devs are naturally driven to optimize where possible and get the best possible outcomes.
They have also acknowledged users’ desire for larger context windows:
I’m sure when they can make it happen they will make it happen.
I personally find it interesting to read or listen to their thinking processes on all aspects of their approach to developing Cursor.
As Lex Fridman stated before their recent interview, it is ‘bigger than just about Cursor’.
Here is one of the clips from the interview, where they discuss different aspects and considerations of performance, including context length: “Scaling laws for AI: Bigger is better” (Lex Clips, 9 min).
Context and token utilisation are also discussed here and here.
If you haven’t seen it, the full interview is here:
I highly recommend it to get a sense of the team’s character, capabilities, ambitions and approach.