New model! (gpt-4o)

You can now try out gpt-4o in Cursor! Just reload the editor and select the model in the dropdown by chat/cmdk :slight_smile:

(If you don’t see the model immediately, you can plug in the name in Settings > Models. For the time being, uses of the model will just count as normal gpt-4 requests.)

Should work for API key users too!


Wow, that’s basically as fast as 3.5, and roughly +100 ELO on complex questions (smarter than Opus and GPT-4 Turbo). That makes Opus, GPT-4 Turbo, and GPT-3.5 obsolete all at once.

@truell20 Since it’s faster and cheaper to run, does that mean it will shorten the wait time on slow requests during peak usage? (Did it basically double your inference capacity?)

I’m curious about how the inference capacity works - is there physical/virtual hardware set aside for Cursor, or priority allocation of requests within the much larger GPU pool that OAI presumably has?

It looks a bit like that. You can search for “OpenAI dedicated instance.”

Will gpt-4o in Cursor support images in Jupyter Notebooks? If it does, I think it would be very interesting.
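For context, the model itself takes image inputs over the plain chat completions API, so this is mostly a question of whether Cursor forwards notebook cell outputs. A rough sketch of the model side, assuming the openai Python SDK (v1+), `OPENAI_API_KEY` set, and a hypothetical `plot.png` exported from a cell:

```python
# Rough sketch: send a notebook-exported image to gpt-4o via the chat
# completions API. Whether Cursor passes cell outputs along like this
# is a separate product question.
import base64
from openai import OpenAI

client = OpenAI()

# "plot.png" is a placeholder for an image saved from a notebook cell.
with open("plot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this plot show?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```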

Would we be able to access gpt-4o using an API key?

yep
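If you want to double-check outside Cursor, a quick sanity call against the API looks something like this (a minimal sketch, assuming the openai Python SDK v1+ and `OPENAI_API_KEY` in your environment):

```python
# Quick sanity check that your own key can reach gpt-4o, outside Cursor.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Reply with 'ok' if you can hear me."}],
)
print(response.choices[0].message.content)
```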

Anyone else not getting great results from 4o just yet? I know the last time a new model was pushed out it was buggy for about a week or so, so maybe the same thing is happening here.
I’ve had 4o act a bit too overconfident, really jumping to conclusions it shouldn’t have. gpt-4-turbo has been better in my last 24 hours of testing.


I’ve reverted back to opus and gpt-4

4o is clearly faster, which is cool, but I’ve found it made some mistakes. Wish I remembered which ones lol, but it definitely did. One thing I’m struggling with is that the models seem to have really internalized deprecated API usage, and I can’t seem to use the system prompt to get them to stop. Anyhow, have fun coding everyone.

It looks like both models are on the same level.