Is the context window Cursor provides too small? (Non-MAX mode)

According to many users online, Cursor's context window is only about 100,000 tokens. Is it really that small? 100,000 tokens is only enough for very small tasks. Claude 4 Sonnet itself supports a 200K context, and up to 1 million tokens in beta, so if Cursor only gives it 100K, the model is seeing just 10–20% of the context it could otherwise handle.

But in practice, there seems to be no way to see how many tokens Cursor actually uses in each conversation. I have a 3,500-line HTML file, and I asked Cursor (with Claude 4 Sonnet) to understand it and perform several different tasks, but it never succeeded even once.
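Since Cursor doesn't expose its token usage, one way to sanity-check whether a file could even fit in a 100K budget is a rough character-based estimate. The sketch below uses the common ~4 characters-per-token heuristic; it is an assumption for illustration, not Cursor's or Anthropic's actual tokenizer, and the real count can differ noticeably for HTML markup:

```python
# Rough context-size check. Most modern tokenizers average roughly
# 3-4 characters per token for English text and code, so chars // 4
# gives a ballpark figure (a heuristic assumption, not an exact count).

def estimate_tokens(text: str) -> int:
    """Ballpark token count using the ~4 chars/token heuristic."""
    return len(text) // 4

def fits_in_budget(text: str, budget: int = 100_000) -> bool:
    """Check whether the estimated token count fits a context budget."""
    return estimate_tokens(text) <= budget

# A 3,500-line HTML file at ~80 characters per line is ~280,000 chars,
# i.e. roughly 70,000 estimated tokens. That nominally fits a 100K
# budget, but leaves little room for the system prompt, tool
# definitions, and conversation history that also consume context.
sample = "x" * 280_000
print(estimate_tokens(sample))        # rough estimate for such a file
print(fits_in_budget(sample))
```

If the estimate lands anywhere near the budget, failures like the one above become plausible: the file alone nearly exhausts the window before the conversation even starts.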

A colleague ran the same tests with Augment Code, Rovo Dev, and Claude Code. All of them approached the task the same way, but only Cursor failed partway through. I suspect the token budget is too small for it to understand the full context. Does anyone else run into this often?