Hello. Cursor seems to have rate/context limits (Sonnet 200k, or GPT-4o) when analyzing medium-to-large classes.
In contrast, the web OpenAI version of GPT-4o has no such limits. The web version will happily output whatever you specify, without breaking classes up into chunks (chunks that sometimes even split mid-line).
For me, it's gotten to the point where I don't use Cursor for class design because of these limits. I'll integrate the final class into Cursor only after the class is finished in the web version of ChatGPT-4o.
Is this a known issue? And is there a strategy for getting Cursor to perform like the web version of ChatGPT-4o?
The promise of Cursor, especially its ability to integrate other classes from the codebase into class updates, is compelling. Reality check: I'm still new to Cursor and may be using it incorrectly.
That's unfortunate, because it puts a stake into medium-to-large class development projects. This affects Cursor, or probably any IDE. Not sure why OpenAI would want to hobble a service they charge for, unless they're releasing their own IDE. Depressing.
I am facing similar issues, mainly with GPT-4o; it seems the output token limit for GPT-4o is very low on Cursor. I often get responses in the chat window that output about 70 lines of code and then stop, after which I have to click Continue. It's quite difficult to work like this when the code is long. This should be an easy fix, as it does not happen with the other models.