hi @umerqr
Cursor does not track requests; it tracks token and model usage, which depends on the details reported by the AI providers through their APIs.
Your screenshot shows high token consumption.
This may mean:
- Very heavy coding usage
- A large context being used
Recommendations:
- Use thinking models only when there is a clear need for them; their output quality is often not much higher than that of non-thinking models, but they consume far more tokens.
- Check your attached context and avoid including too much, such as files, rules, MCPs, …
- Keep chats short and focused on a single task.
For more, including how to optimize your usage, see the following: