Please allow the max number of tokens supported by the models

Could the devs be more transparent about the token limits on Pro for the different models? Knowing the limit up front would be a real time saver: you could tell that your current context is too big for the model before you burn requests on it. As it stands, the limits aren't documented anywhere I can find (not on the site, the docs, or the FAQ), which makes me suspect they're pretty small.

I've heard figures around 8k/10k, but that may be out of date: there are newer models now, and Cody advertises a 30k context. Is 10k still the case for Cursor? That seems too small to discuss a codebase, or even a few large scripts. Modern models have also gotten much cheaper, so maybe the context is bigger now, but there's no way to know.
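In the meantime, a rough way to check this yourself is to count tokens locally before pasting files into the chat. Here's a minimal sketch using the tiktoken library; the `cl100k_base` encoding and the 10k limit are both assumptions on my part, since Cursor doesn't publish which tokenizer or limits it actually uses:

```python
# Estimate whether a set of source files fits in an assumed context window.
# Both the limit and the encoding below are guesses, not Cursor's real values.
import pathlib
import tiktoken

ASSUMED_CONTEXT_LIMIT = 10_000  # assumption, based on the 8k/10k figures I've heard
enc = tiktoken.get_encoding("cl100k_base")  # assumption: an OpenAI-style tokenizer

def count_tokens(paths):
    """Sum token counts across the given files."""
    total = 0
    for p in paths:
        text = pathlib.Path(p).read_text(encoding="utf-8", errors="ignore")
        total += len(enc.encode(text))
    return total

files = list(pathlib.Path(".").glob("**/*.py"))
n = count_tokens(files)
verdict = "fits in" if n <= ASSUMED_CONTEXT_LIMIT else "exceeds"
print(f"{n} tokens across {len(files)} files "
      f"({verdict} an assumed {ASSUMED_CONTEXT_LIMIT}-token window)")
```

Even a rough count like this would tell you whether a request is doomed before you spend it, which is exactly why having the real numbers documented would help.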
