Is there a practical difference between GPT-5 -low, -std, and -high?

I tested just a couple of different prompts and didn’t see much difference, so I decided to use only GPT-5-High.

Have you noticed any difference in result quality or in cost between the GPT-5 options?

@condor, maybe you can answer?

High and low refer to reasoning effort. The model may use low reasoning effort (fewer tokens), standard reasoning effort, or high reasoning effort (more tokens).

Note that high reasoning effort will consume more tokens. Use it only if a task cannot be solved by the standard model.
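To make the tiers concrete, here is a minimal sketch of how a client might select a reasoning-effort tier per request. The parameter name `reasoning_effort` follows the OpenAI Chat Completions convention; treat the exact field names as assumptions if your provider differs.

```python
# Illustrative sketch: choosing a reasoning-effort tier per request.
# "reasoning_effort" is assumed to follow the OpenAI Chat Completions
# convention; no network call is made here, we only build the payload.

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a request payload for a given reasoning-effort tier."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown reasoning effort: {effort}")
    return {
        "model": "gpt-5",
        "reasoning_effort": effort,  # low = fewer reasoning tokens, high = more
        "messages": [{"role": "user", "content": prompt}],
    }

# Simple tasks can stay on low; escalate to high only when standard fails.
low_req = build_request("Rename this variable.", effort="low")
high_req = build_request("Redesign this module's architecture.", effort="high")
```

The idea is to default to the cheaper tier and escalate only when the task genuinely needs deeper reasoning.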


I’d recommend adding model descriptions to avoid confusion. It would help users identify the right model for them and contribute to better LLM education within the Cursor app.

It’s just a few lines of text, like we already have on some models but not all of them. Can we please add them? Do I need to create a feature request for it?

Hi @valentinoPereira, a feature request would be great!


There is already an existing feature request: Show Model Description in Settings -> Models I’ve linked my comment there.

Thanks in advance!


-low, -med, and -high have the same per-token price, but since high reasoning generates many more tokens and can take much longer, a high-effort request can end up costing more overall.
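The arithmetic behind that is simple. A sketch with hypothetical numbers (the price and token counts below are made up for illustration, not official pricing):

```python
# Illustrative cost arithmetic: every tier shares one per-token price,
# but higher effort emits more reasoning tokens, so the bill grows.
# The price and token counts are hypothetical, not official figures.

PRICE_PER_1M_OUTPUT_TOKENS = 10.00  # hypothetical USD, same for every tier

def request_cost(output_tokens: int) -> float:
    """Cost of one request, given its total output (reasoning + answer) tokens."""
    return output_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT_TOKENS

cost_low = request_cost(2_000)    # e.g. low effort: ~2k tokens for a prompt
cost_high = request_cost(12_000)  # e.g. high effort: ~12k tokens, same prompt
```

Same price per token, yet the high-effort request costs several times more because it simply emits more tokens.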

What I’m wondering is how much success people have had with -std or even -low compared to -high. It seems to me they get noticeably worse at most coding tasks except the simplest low-context refactors and the like.

Also, in Cursor, what is the difference between GPT-5 and GPT-5-Medium? I’d think they are the same thing?

I do sometimes use -low when there is not enough information available for the AI to reason with, as gpt-5 or gpt-5-high would just start making too many (wrong) assumptions.

Yes, medium is gpt-5.


When I recalculate the expense per useful output token, high works out cheaper than the rest. But I have a strong feeling that high suffers from overthinking.