Congratulations everyone, just saw this during OpenAI’s dev day
Hey, good eye!
Lots of stuff during that keynote has the potential to make Cursor even better. Looking forward to seeing what the Cursor team does with it.
Yep, waiting for GPT-4 Turbo to be the default
For the curious, the 4th icon shown there is Warp, an AI-powered terminal for Mac. It’s like Cursor for your command line. Pretty nice!
i noticed that too! so cool. congrats cursor team!
i’m also curious if there is an ETA on giving cursor access to the new model.
we will have the new model in soon. hopefully today or tomorrow.
megaballer!
Perhaps too early to know, but will this obviate the fast/slow GPT-4 request distinction? Or change the ‘fast tokens per month’ economics?
Super excited for this!
They invested in Cursor, so of course they’re going to showcase their investments
I’m SO excited to see this in Cursor. The 4x context limit increase is going to be a game changer; the context window is the major blocker I hit when GPT-4 can’t help because files are too long.
Where is the best place to be notified when this goes into Cursor?
Source? Interested to learn more about this, that’s awesome…didn’t know that
Thank you!
+1 - I’m ready to rock. How do I know when to update Cursor?
Cursor is using GPT-4 8K today, not 32K. It’s been officially stated in this forum.
This was regarding GPT-4, not GPT-4 Turbo. Apples and oranges. GPT-4 Turbo supports a 128,000-token context, so we’re not talking about the difference between 8K and 32K: if utilised, that’s 4x the 32K limit and 16x the 8K context Cursor uses today.
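For anyone who wants to sanity-check how far those windows go, here’s a rough sketch that counts a file’s tokens with OpenAI’s tiktoken library and compares the total against the published context limits (the file path is just an example, not anything from this thread):

```python
# Rough sketch: count a file's tokens with tiktoken and compare the total
# against published GPT-4 context windows. The path below is an example.
import tiktoken

CONTEXT_WINDOWS = {
    "gpt-4 (8K)": 8_192,
    "gpt-4-32k": 32_768,
    "gpt-4-turbo (128K)": 128_000,
}

def count_tokens(text: str, model: str = "gpt-4") -> int:
    """Number of tokens `text` occupies under the model's encoding."""
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

with open("src/big_module.py") as f:  # example path, not from the thread
    n_tokens = count_tokens(f.read())

for name, limit in CONTEXT_WINDOWS.items():
    verdict = "fits" if n_tokens <= limit else "too long"
    print(f"{name}: {n_tokens:,} / {limit:,} tokens -> {verdict}")
```

In practice you’d want to leave headroom for the prompt and the model’s reply, so the usable window is smaller than the raw limit.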
And stated in this forum by @sualeh above: “we will have the new model in soon. hopefully today or tomorrow.”
So unless Cursor actively decides not to use any of that extra context (unlikely), I would say we’re going to hear about this in the coming days.
But maybe someone from Cursor could put us out of our misery and shed some light on what to expect in terms of context capacity?
a somewhat sad update. cursor needs pretty high rate limits and capacity for the new model. we are actively working to get that capacity but it seems like that will take a bit of time as openai ramps up.
also we are working on benchmarking the new model to make sure instruction following and general “code smarts” work as expected.
but to be clear, this is a high priority for us, and we really care about making sure we always have the best available model for our users.