Hi @FlamesONE, the screenshot shows a lot of tokens being used: an average of 500k tokens, up to 2M, which does suggest heavier usage even if smaller files are being handled.
The Tokens feature in the top right breaks usage down into 4 types of tokens, which can be matched against the AI providers' API pricing.
While I have no insight into your other usage details or yesterday's usage, I suggest the following:
Have a look at what can be done with less intensive models or Auto, as this greatly reduces your plan consumption.
Switch to Sonnet 4 when more complex tasks are needed.
Check how much context is really necessary, as a high token count consumes your usage fast.
Could you share a bit about what you are working on (what kind of files, and any other details that could shed light on why so many tokens are needed)?
The AI uses tool calls to find files, so for me the easier way is to just attach the needed files.
I don't quite understand. Which error? This is an error on Cursor's servers, not mine.
A few days ago I used Sonnet 4 Thinking to edit a file with more than 6000 lines and spent around 20 requests; everything was fine. Now something has changed, and I'm not alone; a lot of people are already facing the same issue.
Please ask the Cursor team what they changed and why the models are eating so many tokens and hitting the limiter much earlier than before.
I'm talking about the error you asked the AI to fix. Saving yourself the effort of providing bug details to the AI costs you a lot of tokens. This is not an AI issue.
Was anything different between 3/4 July? I don't have insight into your chats. Depending on whether you have privacy mode on, Cursor may also not be able to see what happened in those chats.
From what I can see, there was no such heavy change before/after in my usage. If this is a persistent issue, I would suggest making one request with privacy mode off and posting the Request ID here so the Cursor team can check whether there are technical issues that could be improved.
I think the problem is in how the new Cursor context handling works.
Because my project is not small, it takes a lot more tokens than before, but in my opinion it works the same as before.
Hey @condor
I have a question. I just created a blank project and ran Taskmaster AI to generate rules automatically. But I don’t understand what “cache read” means. Also, I’ve deleted all my old projects from the workspace — but is the cache still there?
It seems like it evaporates after just 5 questions — that’s crazy. I only asked 5 times today using Claude-4-thinking.
Cache Read is the part of a chat session that is already cached at the API provider; this avoids repeated processing, as that content has already been ingested for the session. The provider reads from the cache and uses it as context for the response. Cache Read is the cheapest of the 4 columns.
I don't think this is from previous sessions. Could you confirm whether the new project is related to the old projects, and whether any other project in Cursor might have used similar code or still has files indexed?
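To illustrate how the 4 token columns map to cost, here is a minimal sketch that estimates a request's price from per-million-token rates. The rates below are hypothetical placeholders for illustration only, not real Cursor or provider pricing; check your AI provider's pricing page for actual numbers.

```python
# Estimate request cost from the four token columns shown in Cursor's Tokens view.
# NOTE: these rates are HYPOTHETICAL per-million-token prices, used only to
# illustrate the calculation; real prices vary by model and provider.
RATES_PER_MILLION = {
    "input": 3.00,        # fresh prompt tokens sent to the model
    "output": 15.00,      # tokens generated by the model
    "cache_write": 3.75,  # tokens written into the provider's prompt cache
    "cache_read": 0.30,   # tokens re-read from the cache (cheapest column)
}

def estimate_cost(tokens: dict) -> float:
    """Sum the cost of a request across the four token types."""
    return sum(tokens[k] / 1_000_000 * RATES_PER_MILLION[k] for k in RATES_PER_MILLION)

# Example: a 500k-token request where most context is served from cache,
# so the large cache_read count contributes relatively little to the cost.
usage = {"input": 20_000, "output": 5_000, "cache_write": 25_000, "cache_read": 450_000}
print(f"estimated cost: ${estimate_cost(usage):.4f}")
```

The point of the sketch is that a large Cache Read count is far cheaper than the same number of fresh input tokens, which is why repeated context in one session costs less than it looks.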