Quick question regarding Max Mode: I understand that the context window size is ultimately limited by the underlying model? How does Cursor help optimize or extend the context window in this mode?
Also, does Normal Mode have the same maximum token limit as specified by the model (e.g., 120k for some models)?
The Cursor docs show the token limits in regular and Max mode.
The limits are different for regular and Max mode, as Max goes up to the model's actual token limit.
The actual limit depends on several factors (a rough estimation sketch follows the list):
complexity of your prompt/task
how many rules are available and which ones are added to context
how many files you add to context directly
how many requests you have used in a chat (longer chats mean more context used)
how many tool calls are needed to perform the task
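To put the list above into rough numbers, here is a minimal back-of-envelope sketch. It assumes a ~4-characters-per-token heuristic and the 120k limit mentioned in the question; the file paths are purely illustrative, and this is not Cursor's actual accounting:

```python
from pathlib import Path

CHARS_PER_TOKEN = 4      # crude heuristic; real tokenizers vary per model
MODEL_LIMIT = 120_000    # illustrative limit from the question above

def estimate_tokens(text: str) -> int:
    """Very rough token estimate based on character count."""
    return len(text) // CHARS_PER_TOKEN

def estimate_request_tokens(prompt: str,
                            rule_files: list[Path],
                            context_files: list[Path],
                            prior_messages: list[str]) -> int:
    """Sum the main contributors from the list above: the prompt itself,
    rules added to context, directly attached files, and earlier chat turns."""
    total = estimate_tokens(prompt)
    total += sum(estimate_tokens(p.read_text()) for p in rule_files if p.is_file())
    total += sum(estimate_tokens(p.read_text()) for p in context_files if p.is_file())
    total += sum(estimate_tokens(m) for m in prior_messages)
    return total

if __name__ == "__main__":
    used = estimate_request_tokens(
        prompt="Refactor the auth module to use async handlers",
        rule_files=[Path(".cursor/rules/style.mdc")],
        context_files=[Path("src/auth.py"), Path("src/session.py")],
        prior_messages=["...earlier chat turns..."],
    )
    print(f"~{used} of {MODEL_LIMIT} tokens used "
          f"({used / MODEL_LIMIT:.0%} of the window)")
```

Tool-call output is left out of the sketch because it only becomes known at runtime, which is exactly why longer agent runs consume context faster than you would estimate up front.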
It's not advisable to push toward the context limit in the first request, as each subsequent step adds more context and some of it will be summarized or omitted. While Cursor makes some attempts at managing the context as you get close to the limit, it's best to start a new chat when you reach the context limit and the chat shows at the bottom that you should start a new one.
The closer you get to any model's token limit, the more mistakes you will see. While some models boast a 1M token limit, that's the technical limit, not the 'effective' one; you may get hallucinations much earlier depending on the context, especially if parts of the context contradict each other.