Your post from a year ago says the limit was 10K at that time:
So in the time since, you've raised the limit to ~64K, and then to 120K for Claude 3.7 and Gemini 2.5.
Model context lengths are rapidly increasing while per-token costs are plummeting, especially on cached input. I think most users expect Cursor to continue passing some of those improvements through to regular Pro use, as you did last year, regardless of whether a MAX-style option to pay for the model's full capabilities is also offered.
If you retain a fixed maximum context length, the effect is similar to frozen tax brackets under high inflation: over time, more and more usage gets pushed into MAX as expectations for "normal" usage shift, since models keep getting better at productively using more context.
Is that the plan, or will you continue to raise context lengths roughly in line with capability improvements and falling per-token costs from providers, as you have previously?