Hey guys!
We know that different LLMs have different context windows within Cursor.
However, right now it’s very tricky to see how many tokens the model has used and when you need to start a fresh chat. It would be great if there were some sort of indicator of the context window usage somewhere near the bottom of the Cursor interface (maybe something similar to what Google uses in Google AI Studio).