Hi, when will the GPT-4 Turbo model be added?
Enough time has passed since its release
Hi there!
As of today, there is no GPT-4.5 model available. You might be thinking of GPT-4 Turbo. For more information, you can visit the following link: GPT-4 Turbo for Pro Users.
Hope this clarifies things!
Thanks for correcting me
They turned off Turbo after a few days, a few weeks ago
And it was only for fast requests
They just updated GPT-4 Turbo to reduce laziness! Hopefully this will fix the issues and allow Cursor to use GPT-4 Turbo now
Yes! We’re testing the new model right now for pro users to see if it performs better than all older versions of GPT-4. If so, we’ll roll it out.
Do we know approximately when these tests will be concluded?
Don’t want to put my foot in my mouth, so unfortunately I’m not sure at the moment. We’re pretty paranoid about model regressions and definitely don’t want to roll out a worse model to pro users (even if it’s cheaper!).
We also haven’t confirmed whether OpenAI can give us high enough rate limits / dedicated-capacity access to the model yet.
Should hopefully have a better idea soon.
Is this different than adding support for gpt-4-0125-preview to the list of models supported if you’re sending calls through your own OpenAI API key? Will that be rolling out at the same time?
Good approach on model regression. My Cursor experience has been quite consistent. Only once did I have to type “continue” in chat, and it worked (a few days ago).
As an intermediate solution, for those who are antsy to give the new model a try, you can now use gpt-4-0125-preview
as a pro or API key user if you configure the model in settings. Let me know if you run into any issues.
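For API-key users, configuring the model in settings amounts to sending your own requests to OpenAI under that model name. A minimal sketch of what such a request looks like (illustrative only, not Cursor’s internal mechanism; the helper name is hypothetical):

```python
# Hypothetical helper: builds the request body that OpenAI's chat
# completions endpoint expects, pinned to the preview model.
def build_chat_request(prompt: str) -> dict:
    return {
        "model": "gpt-4-0125-preview",
        "messages": [{"role": "user", "content": prompt}],
    }

# Sending it needs the official client and your own key, e.g.:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(**build_chat_request("Hello"))
```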
Should this work for pro users as of now? If so, I’ve gone ahead and configured it as per your screenshot above to test it, but I’m still being presented with:
(On a pro plan)
Thanks!
For the time being, this will only work if you still have fast requests left. Does that explain the error in your case?
That absolutely does, thanks!
Promising results so far. Previously on GPT-4, when I pasted documentation of only about 2k tokens, it would not follow the examples in the documentation for how to correctly use the newer methods.
Now it is correctly doing that on the new gpt-4-0125-preview
I have the same issue
When you say documentation, are you using internal work docs, or are you taking it from the internet, like PyPI, etc.?
Achieved excellent results with GPT-4-0125-preview on Cursor so far!
The expanded context window of GPT-4-0125-preview is a significant advantage when incorporating multiple files or the entire codebase in context. The model’s 128k-token context is sufficient to handle questions that would otherwise require RAG with GPT-4 (8,192 tokens).
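As a rough way to see whether a paste or a codebase fits in either window, here is a sketch using the common ~4 characters-per-token heuristic for English text (an approximation; a real tokenizer like tiktoken would be more accurate):

```python
# Context window sizes from the discussion above.
GPT4_CONTEXT = 8_192          # classic GPT-4
GPT4_TURBO_CONTEXT = 128_000  # gpt-4-0125-preview

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

def fits_in_context(text: str, context_window: int,
                    reserved_for_output: int = 1_000) -> bool:
    # Leave some room in the window for the model's reply.
    return approx_tokens(text) + reserved_for_output <= context_window
```

So a ~40k-character paste (~10k tokens) overflows classic GPT-4 but fits comfortably in the Turbo window.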
I don’t think this will be financially feasible for them. At the base price, a single full-context request would cost them $1.28 ($10 per million input tokens). Even with a huge discount, 500 such requests for only $20 would not be financially feasible.
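The arithmetic behind that worry, as a back-of-the-envelope sketch (assuming GPT-4 Turbo’s list price of $10 per million input tokens and ignoring output tokens for simplicity):

```python
# Assumed list price: $10 per million input tokens (output tokens ignored).
INPUT_PRICE_PER_M = 10.0

def request_cost(input_tokens: int) -> float:
    """Input-token cost of one request in dollars."""
    return input_tokens / 1_000_000 * INPUT_PRICE_PER_M

full_context_cost = request_cost(128_000)  # one maxed-out 128k request: $1.28
monthly_cost = 500 * full_context_cost     # 500 such requests: $640.00
```

Of course, most requests use far less than the full window, so the real average cost per request would be much lower.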
Docs from the internet. I was simply copying and pasting some information that was only ~2k tokens of context, and before, the model would fail to follow the instructions and still use the old formatting for the library I was using.