Sonnet 4 / GPT-5 usage period is too short, Auto model is highly problematic

Hello,

I actively use Cursor, but I’m facing a serious issue. The powerful models, Sonnet 4 and GPT-5, are only available for about 4–5 days of the billing cycle. After that, for the remaining ~25 days, we are forced to use the Auto model.

Unfortunately, the Auto model causes major problems:

  • The code quality is very poor; it fails even on simple tasks.

  • It sometimes deletes or breaks project files.

  • Outputs are inconsistent and disrupt the entire workflow.

As a result, we are not getting full value for what we pay: once the usage for the models we actually want runs out, the fallback model is almost unusable.

Instead of leaving users with such a restricted and unstable model, either the Auto model needs to be stabilized or the Sonnet 4 / GPT-5 usage allowance should be extended.

On top of this, I also don’t understand why increasing context length is so difficult and expensive. On the ChatGPT app we get “unlimited” use without APIs, but once we move to the API everything suddenly becomes costly. Why is there such a big gap?

If anyone else is experiencing the same problem, please share your support so the Cursor team might take action to improve this situation.


Hi @matrix_code, thank you for the detailed post. Here are a few remarks from my experience:

  • Please check if you can stretch your Sonnet 4 and GPT-5 usage by reducing context size and optimizing Agent runs, as this will give you more usage. Check out Understanding LLM Token Usage for best practices.
  • We are improving Auto mode by adding better models and improving handling. Feel free to file bug reports with issues you find with Auto.
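
The context-reduction advice above can be made concrete with a quick back-of-the-envelope check. This is a minimal sketch using the common ~4 characters per token heuristic (an assumption, not an exact tokenizer), to compare sending a whole file versus just the relevant snippet:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Approximate token count using the rough ~4 chars/token heuristic."""
    return max(1, round(len(text) / chars_per_token))

# Hypothetical stand-ins: a whole file vs. just the relevant function.
full_file = "x = 1\n" * 2000   # ~12,000 characters
snippet = "x = 1\n" * 100      # ~600 characters

print(estimate_tokens(full_file))  # ~3000 tokens
print(estimate_tokens(snippet))    # ~150 tokens
```

Attaching only the snippet instead of the whole file spends roughly 20× fewer tokens on that piece of context, which directly stretches your usage allowance.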

As for why usage differs from apps like ChatGPT:

  • Context matters a lot, and increasing context means that more unrelated code, comments, etc. is presented to the model for processing. This confuses models and increases mistakes.
  • AI providers do have usage optimizations in their own apps like Claude chat and ChatGPT, but they also have hourly limits after which they downgrade to smaller models; in Cursor, that downgrade is the user’s choice.

Additionally, note that the cost is essentially set by the AI providers, and since powerful LLMs are very large and must run on large machines with many high-end GPUs, they cost much more to serve than smaller models. See the price difference between Sonnet and Opus, where the gap is roughly 5× in cost.
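
To put that gap in numbers, here is a small illustrative calculation. The per-million-token prices below are placeholder assumptions chosen only to reflect the ~5× gap mentioned above, not quoted vendor pricing:

```python
# Assumed USD prices per 1M input tokens, illustrating a 5x gap (not real quotes).
PRICE_PER_MTOK = {"sonnet": 3.00, "opus": 15.00}

def cost_usd(model: str, tokens: int) -> float:
    """Cost of a request that sends `tokens` input tokens to `model`."""
    return PRICE_PER_MTOK[model] * tokens / 1_000_000

tokens = 200_000  # e.g. one request with a very large context
print(f"sonnet: ${cost_usd('sonnet', tokens):.2f}")  # $0.60
print(f"opus:   ${cost_usd('opus', tokens):.2f}")    # $3.00
```

The same large-context request costs 5× more on the bigger model, which is why both context size and model choice matter for how far your plan's usage goes.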