Please allow the maximum number of tokens supported by the models

Hi,
Apologies in advance if I'm wrong, but I'm under the impression that regardless of the model we use, Cursor limits the number of tokens (i.e. the context window) to 10k. This is based on various messages I came across in the forum. My understanding is that the limit applies even when we use our own subscriptions. This takes away the benefit of improvements to the models, and in some cases it makes it impossible to use the API for a task even though the model itself would support it.
I'd gladly pay not to have a limit while using my own subscription.


Can the devs be more transparent about the token limits for Pro across the different models? In many cases it would be a time saver: you would at least know that the current context is too big for the current LLM and could save your time (and requests) by not attempting it. As long as it isn't obvious and the devs don't talk about limits much (I can't find anything on the site, in the docs, in the FAQ, etc.), you start to assume it's pretty small. I've heard something about 8k/10k, but that may have changed: there are new models, Cody offers 30k contexts, so is 10k still the case for Cursor? Isn't that too small to discuss a codebase, or even a few large scripts? Modern models have also become much cheaper, so maybe the context size is bigger now, but I simply don't know.


I'm not sure if it's possible, but I added a feature request to display the number of tokens used in a chat as an indicator of whether the chat has become too long:
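
For anyone curious what such an indicator could look like, here is a minimal sketch (not Cursor's actual implementation) of estimating a chat's token usage with OpenAI's tiktoken tokenizer; the 10k budget is only an assumption based on the figures mentioned in this thread:

```python
# Rough illustration only: estimate how many tokens a chat's context consumes
# and whether it fits under an assumed per-request budget.
import tiktoken

ASSUMED_CONTEXT_LIMIT = 10_000  # hypothetical budget, based on numbers quoted in this thread


def count_tokens(messages: list[str], model: str = "gpt-4") -> int:
    """Return an approximate token count for a list of chat messages."""
    enc = tiktoken.encoding_for_model(model)
    return sum(len(enc.encode(m)) for m in messages)


def fits_in_context(messages: list[str]) -> bool:
    """Report usage so the user can see when the chat has grown too long."""
    used = count_tokens(messages)
    print(f"~{used} tokens used of {ASSUMED_CONTEXT_LIMIT}")
    return used <= ASSUMED_CONTEXT_LIMIT
```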