Cursor's GPT-4o is rate/context limited; the web version of GPT-4o is not?

Hello. Cursor seems to hit rate/context limits (with both Sonnet's 200k and GPT-4o) when analyzing medium-to-large classes.

In contrast, the web OpenAI version of GPT-4o has no such limits. The web version will happily output whatever you specify, and it won't break classes into chunks (sometimes cut off mid-character with run-ons).

For me, it's gotten to the point where I don't use Cursor for class design because of these limits. I'll integrate the class into Cursor only after it's finished in the web version of ChatGPT-4o.

Is this a known issue? And is there a strategy for getting Cursor to perform like the web version of ChatGPT-4o?
The promise of Cursor, especially integrating other classes from the codebase into class updates, is compelling. Reality check: I'm still new to Cursor and may be using it incorrectly.


Yes, you're right; for some reason, we're limited to a 20k context window, even with GPT-4o.

That's unfortunate, because it puts a stake into medium-to-large class development projects. This affects Cursor, or probably any IDE. Not sure why OpenAI would want to hobble a service they charge for, unless they're releasing their own IDE. Depressing.
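If you're bumping into the 20k window on large classes, a rough pre-check can tell you whether a file will even fit before you paste it in. This is only a sketch under a common assumption (~4 characters per token; an exact count would need the model's actual tokenizer), and the `reserve` parameter for the reply budget is a made-up illustration, not anything Cursor exposes:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token (heuristic, not exact)."""
    return max(1, len(text) // 4)

def fits_context(text: str, context_window: int = 20_000, reserve: int = 4_000) -> bool:
    """Check whether `text`, plus a reserve left for the model's reply,
    fits inside the assumed context window."""
    return estimate_tokens(text) + reserve <= context_window

# Example: a repetitive 10,000-character "class" file.
source = "class Foo:\n    pass\n" * 500
print(estimate_tokens(source), fits_context(source))  # roughly 2500 tokens, fits
```

If the check fails, that's a signal to split the class across prompts yourself rather than letting the model truncate mid-output.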

Where did you read “20k”?
They wrote that it’s limited to “10k” some time ago…

For chat, not sure for inline edits: GPT-4o in Long Context Mode 😉 - #9 by rishabhy

I am facing similar issues, mainly with GPT-4o; it seems the output token limit for GPT-4o is very low on Cursor. I often get responses in the chat window that output about 70 lines of code and then stop, after which I have to click Continue. It's quite difficult to work like this when the code is long. It should be an easy fix, as this does not happen with the other models.


Hi Vik,

Could you send me your email here (or email me at [email protected]).

I will look into the GPT-4o continue issues.