Hi @0xgokuz,
Does this help?
https://docs.cursor.com/advanced/models#what-context-window-is-used-for-model-x
> **What context window is used for model X?**
> In chat, we limit to around 20,000 tokens at the moment (or less if the model does not support that much context). For cmd-K, we limit to around 10,000 tokens, to balance TTFT (time to first token) and quality. Long-context chat uses the model's maximum context window.
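If you want a rough sense of whether a prompt will fit under those limits before pasting it in, here's a minimal sketch using the tiktoken package with the cl100k_base encoding as a stand-in tokenizer. The exact tokenizer Cursor uses per model isn't documented here, so treat the counts as approximate, and the file name is just an example:

```python
# Rough estimate of whether text fits the limits mentioned above.
# Assumes tiktoken's cl100k_base encoding as a stand-in; the actual
# per-model tokenizer isn't specified in the docs.
import tiktoken

CHAT_LIMIT = 20_000   # approximate chat context limit
CMD_K_LIMIT = 10_000  # approximate cmd-K limit

def token_count(text: str) -> int:
    enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))

def fits(text: str, limit: int = CHAT_LIMIT) -> bool:
    return token_count(text) <= limit

if __name__ == "__main__":
    # "my_prompt.txt" is a hypothetical file holding the text you plan to paste.
    sample = open("my_prompt.txt").read()
    print(f"{token_count(sample)} tokens; fits chat limit: {fits(sample)}")
```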