GPT-4 Turbo for Pro users

When will the GPT-4 Turbo model be available for Pro users? Any details on the timeline and new features would be appreciated.

4 Likes

I’d also like to know if OpenAI has contacted Cursor with an approximate timeline for when Turbo will be out of preview and production-ready.

1 Like

I’m wondering the same. GPT-4’s knowledge is really out of date at this point. Cody is using the GPT-4 Turbo preview, and so far it seems great.

I’d like to know this as well!

Cursor’s GPT-4 is now GPT-4 Turbo for Pro users!

Update: We reverted because we were worried that the new model was a regression.

3 Likes

To clear things up, it would be fantastic if we could also see which model is being used.

1 Like

GPT-4 Turbo is really bad; I advise you all to stick with the traditional GPT-4. Super glad Cursor is keeping the old model for Pro users, because GPT-4 Turbo is awful (including the new version).

2 Likes

Mmm. It hallucinates often and has a context window of only a few lines of code. Given a simple git diff, it can’t write a commit message that’s even in the ballpark. Cody is almost trash right now, to be honest.

I personally have not had any issues with the new model.

1 Like

How do you know? I’m interested in trying out the GPT-4 Turbo model; how can you do it?

It’s explained in the last changelog, @Illiyyin.robot123:
Try out gpt-4-0125-preview by configuring the model under Settings > OpenAI API > Configure Models

BTW, it seems that you may also be able to configure a private model in Configure Models. Anyone tried?
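For anyone who wants to sanity-check the model outside of Cursor first, here’s a minimal sketch (my own, not from the changelog) of calling gpt-4-0125-preview directly with your own OpenAI key, assuming the openai Python package (v1+) and OPENAI_API_KEY set in your environment:

```python
# Minimal sketch, not Cursor's internals: verify that the gpt-4-0125-preview
# model id works with your own OpenAI API key before configuring it in Cursor.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-0125-preview",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a one-line Python hello world."},
    ],
)
print(response.choices[0].message.content)
```

If that call succeeds, the same model id should work once you add your key under Settings > OpenAI API > Configure Models.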

I’m using gpt-4-0125-preview with Cursor and it’s working great!

Update here: the standard gpt-4 is now always gpt-4-0125-preview

2 Likes

Thanks for the update, but is it true that the context is 8k tokens, not 128k?

Context is 10k regardless of model. See: Why are the models on using your own keys so much better? - #6 by arvid220u
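In case it helps picture what a ~10k token budget means in practice, here’s a purely illustrative sketch of trimming older context to a fixed limit. This is not Cursor’s actual implementation; it assumes the tiktoken package, and the 10,000 figure is just the number mentioned above.

```python
# Purely illustrative sketch of a fixed context budget (NOT Cursor's implementation):
# keep only the most recent messages that fit under a ~10k token limit.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-family models
BUDGET = 10_000  # hypothetical per-request token limit

def trim_to_budget(messages: list[str], budget: int = BUDGET) -> list[str]:
    """Drop the oldest messages until the total token count fits the budget."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk newest-first
        n = len(enc.encode(msg))
        if total + n > budget:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))  # restore chronological order
```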

There are a few times when we adaptively increase the context limit, but often the trade-off with time to first token makes the smaller context limit a much better experience.

Would be neat if we could have some control over when that happens. There are plenty of times when time to first token doesn’t matter to me, and I would much rather trade it for a potentially better response.

1 Like

Definitely hear you here. We want to improve context visibility across the board.

4 Likes