Hey guys!
We know that different LLMs have different context windows within Cursor.
However, right now it's very tricky to see how many tokens the model is using and when you need to start a fresh chat. It would be great if there were some sort of indicator of the context window size somewhere near the bottom of the Cursor interface (maybe something similar to what Google does in Google AI Studio).
Just created an account to say this would be the perfect feature to implement. Context is king: it's foundational to working with LLMs, and knowing you're within bounds can save a lot of headache. You can proactively start a new chat, but then you're giving up your conversational context and the model's understanding of how and what it was trying to solve. Summarizing it into extra recap files seems paradoxical, since you're now spending tokens on recapping rather than on the project itself. Even a simple count of tokens in the context window vs. the current context window size (based on the currently selected model plus whether Max mode is on or off) would suffice.
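To make the ask concrete, here's a rough sketch of what such an indicator could compute. Everything here is an assumption for illustration: the model names, the window sizes, the "Max mode" distinction, and the ~4-characters-per-token heuristic (a real implementation would use the model's actual tokenizer and Cursor's actual window limits).

```python
# Hypothetical context-usage indicator. All numbers below are
# illustrative assumptions, not Cursor's real limits.

CONTEXT_WINDOWS = {
    # (model, max_mode) -> assumed context window in tokens
    ("example-model", False): 128_000,
    ("example-model", True): 200_000,  # hypothetical "Max mode" size
}

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text.
    A real indicator would run the model's own tokenizer instead."""
    return max(1, len(text) // 4)

def context_indicator(conversation: str, model: str, max_mode: bool) -> str:
    """Format 'used / window (percent)' for display in a status bar."""
    window = CONTEXT_WINDOWS[(model, max_mode)]
    used = estimate_tokens(conversation)
    pct = 100 * used / window
    return f"{used:,} / {window:,} tokens ({pct:.1f}%)"

print(context_indicator("hello " * 2000, "example-model", False))
```

Something like this string, updated per message and recomputed when the model or Max mode changes, is all the UI would need to show.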