I feel like I have to post this type of question every few months.
Don't get me wrong, I really love Cursor, but I would love it that little bit more if there were some kind of transparency about what a selected model is actually using. There's so much conflicting information in the forums, and I feel it just wouldn't be that hard to keep a doc/page updated with a bit of clarity (or a sticky in the forums); it would surely save a lot of support time on Cursor's side. Or a one-line explanation in the actual dropdown?
Am I alone here?
Anyhow, my question:
I was told before, when I posted this, that using “gpt-4” in Cursor will use the latest “best” version of Turbo, e.g. as of today that would be gpt-4-turbo-2024-04-09.
But I now see that gpt-4-turbo-2024-04-09 has also been added as a separate model.
What is the difference, then, between selecting “gpt-4-turbo-2024-04-09” and “gpt-4”?
It has been reported to be lazier, so ‘Rules for AI’ etc. have to be adjusted to compensate if you expect full outputs of code blocks or files. It also isn't reported to be better on every benchmark, such as HumanEval, which is a Python coding benchmark. In the spirit of the ‘two steps forward, one step back’ saying, it's smart to still keep older models available.
I agree with renaming the new model to ‘GPT-4’ and giving the model currently marked as that the ‘01-25’ naming scheme (assuming that's the version currently being used).
Thanks, but that doesn't really answer my question. I understand that different models have different quirks, but what does “gpt-4” actually use if I select it today in Cursor? This is very unclear. My understanding is that if it follows OpenAI, then it should actually be gpt-4-turbo-2024-04-09.
I really don't get why customers have to jump through hoops to find this out every time a new model comes out. It's not anything confidential/proprietary; it's just bad UX!
I guess the [gpt-4] option is based on the gpt-4-1106-preview model. The [gpt-4-turbo-2024-04-09] model is smarter and faster.
For programming, the latest model may not always be the best, so I think the policy is to show it as an option but not update automatically.
Yes, I agree that sounds logical, but equally, in programming there is normally a document of some kind explaining what a given version is.
Why does this have to be constant guesswork? We can't assume that the term “gpt-4” in Cursor will always match what OpenAI points to, and no doubt there will be a lag when new models come out, since Cursor will sprinkle their magic dust on top of these models (I assume).
Can’t it just be made explicit? I find it very frustrating as a paying customer just not being told what I’m using.
Do you have the Pro subscription? I just read your last reply, so you do. So I guess the latest version update only lets Pro users use “gpt-4-turbo-2024-04-09”…?
I ask because I don't; I just use my own OpenAI API key. Yet I did not get the new “gpt-4-turbo-2024-04-09” model option; I had to add it manually. Then, when prompting it, it says its knowledge cutoff is September 2021.
So I don't even know if it is using “gpt-4-turbo-2024-04-09” (according to OpenAI's website, you can also just use “gpt-4-turbo” and it will point to “gpt-4-turbo-2024-04-09”).
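If it helps, one way to check what the alias resolves to outside of Cursor is to call the API directly and look at the `model` field in the response, which reports the snapshot that actually served the request. A minimal sketch, assuming the official `openai` Python package and an `OPENAI_API_KEY` in your environment:

```python
# Minimal sketch: ask the OpenAI API which snapshot the "gpt-4-turbo" alias
# resolves to. The response's `model` field reports the model that actually
# handled the request, which is more reliable than asking the model about
# its own knowledge cutoff.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # alias; resolved server-side by OpenAI
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)

print(response.model)  # e.g. "gpt-4-turbo-2024-04-09"
```

Of course, this only tells you what the alias means on OpenAI's side; it doesn't show what Cursor does internally when you pick “gpt-4” from the dropdown.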
I just don't see how models being self-aware or not has anything to do with my original question.
And @tito, I use paid Cursor without the API key and I see gpt-4-turbo-2024-04-09 in the latest version; you probably need to upgrade/restart. There's a toggle in the settings to enable access to it.
I was replying to @tito's comment. It seems that's not obvious on this forum by the look of it; it just adds the reply to the current thread without any mention of who you are replying to. I've added a quote to my previous answer.
As for your original question, I don't have the answer. It's mysterious. From what I understood, they use two different models depending on whether it's Cmd+K or Chat when you select it (GPT-4 for Cmd+K as it's less lazy, and GPT-4 Turbo in the chat). But I'm not aware of exactly which versions or the details.
Can someone from the Cursor team clarify this? And perhaps explain why it's such a mystery? If something can be done about it, I would be mucho grateful. A simple pinned post kept updated would totally do the job!
hi hi hi! sorry for the super slow replies. all gpt-4 on our side points to the latest model.
the custom model was added for api-key users who may be pointing to older versions. if you are a paying customer, you are definitely on the newest model.
fwiw, our policy is to upgrade our model to the newest gpt-4 version whenever it is released and we can migrate our dedicated capacity to it. there might be a delay of a week or so (sometimes more if openai is slower) between when the model is announced and when all capacity is migrated to the newest gpt-4 model.