Why the overly aggressive model pruning?

Does anyone know why Cursor constantly seems to prune the model options? I always seem to get punished for updating the app, since the update typically removes my favorite model.

I always had a better experience with 5.1 than with 5.2, which seems like a degraded model with a relatively significant price hike, but 5.1 was recently removed. The only 5.1 option left is Codex, which is so overly concise that it's unusable for non-code-edit tasks like discussions or git commits. I had the same issue when GPT-5 was removed in favor of GPT-5.1.

They also never add cheaper open-source models like GLM 4.7 or MiniMax M2, so the only options left are Auto (which is massively inconsistent and has been "patched" so that asking what model it is now only returns "auto") and composer-1, which tends to underperform. Composer-1 is fast, which is good for rapid prototyping, but the cost is excessive given how many times I need it to redo even relatively simple tasks to get high-quality output. If it could one-shot properly it'd be fine, but as is, I've found it's cheaper to pay far more for Opus 4.5, since its single shot costs less than 10 composer-1 runs.

My guess is that they want to push users toward composer-1, so they decided to aggressively prune any similarly priced model that still performs well, removing the direct competition.

This is extremely disappointing, and the attempt to force me onto their own model is making the entire app unusable for anything beyond using Opus 4.5 or repeatedly spamming Auto requests until it finally produces a decent response, which is an awful user experience.

I was already considering moving off Cursor as soon as my annual subscription ended because of the continued refusal to add open-source models, but now there is really very little reason to keep it as my primary editor beyond the $20 of included usage that comes with a subscription. Going into on-demand usage used to be fine, but with the currently provided model selection it's no longer worth it for the output compared to, say, Claude Code or pay-per-token competitors.

I love the Cursor tooling (and would prefer to stay rather than learn a new editor and port my setup elsewhere), but it all seems to be getting thrown out the window to force anti-competitive model options, and it just sucks.

I understand you want people to use your model to lower costs, but if people aren't using it, maybe there's a good reason for that? I wish it were usable given how fast it is, and as soon as it gets there it'll be my primary, but for now you're just nuking your own app.

I know you don't care, as I've seen countless other posts about this. But PLEASE ADD BETTER CHEAP MODEL OPTIONS, and stop so aggressively pruning the model list to force everyone onto composer-1.

At first Cursor seemed mostly like a marketing gimmick for getting tons of subsidized AI usage (back in the flat 500-requests-per-month days), but over time it evolved into an amazing tool that I was very excited about and figured I'd stick with forever. Unfortunately, it now seems that essentially anti-competitive practices in something as simple as model selection are already destroying what made the product great for me.

In an ideal world this post changes someone's mind at Cursor and they bring back some better cheap model options (I wouldn't even mind a slight markup on cheaper/open-source models; I just want to be able to use them). But realistically, I figure it probably isn't going to change anything.

I'm interested to hear if anyone else has had a similar experience. Sorry for the rant, but even losing GPT-5.1 now is driving me nuts. Happy coding all.


Yep, I use GPT-5.1 for cost reasons as well.

Is there a way to add it back in via a custom model perhaps?


I would hope it's possible, but if they removed it, my guess is they don't want us adding it back. I'd also be interested to hear if someone finds a way, though.


I've never tried it, but can't you put an OpenRouter key in and use those models? There are usually free ones on there, and all the GPTs.

I heard that essentially breaks normal Cursor usage, so I've always avoided it. Have you tried it out?

In Settings you can choose models (I think 5.1 is still available there).

You can also add custom models.
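For context on the custom-model route: a custom base-URL override generally speaks the OpenAI chat-completions format, and OpenRouter exposes a compatible endpoint. A minimal sketch of the kind of payload involved (not Cursor's actual internals; the model slug is illustrative, so check OpenRouter's model list for real identifiers):

```python
# Sketch of an OpenAI-compatible chat request, as used by OpenRouter-style
# endpoints. Assumptions: the endpoint URL and the model slug below are
# illustrative, not taken from Cursor's settings.
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("z-ai/glm-4.6", "Draft a git commit message.")
print(json.dumps(payload))
```

The request would then be POSTed to the endpoint with an `Authorization: Bearer <key>` header; whether Cursor's other features (tab completion, Auto, etc.) keep working with an override is exactly the question raised above.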

I suspect the Cursor team has to do some kind of work for each model to make sure it works properly with all of Cursor's features, so that might be why they have been pruning the model list. Sucks to lose a model that works well for you, though :upside_down_face:

Add something to your user rules telling it to be more detailed and verbose in its responses. 5.1 Codex Mini is there to save some costs too; I think it still works pretty well if the task isn't too complex. I'm hopeful the Codex models will be around for a long time since they're made for coding.
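For example, a user rule along these lines (the exact wording is just an illustration, not an official Cursor recommendation) can counteract the terseness:

```
When answering discussion questions or writing git commit messages,
respond in complete sentences and explain your reasoning. Do not
limit yourself to terse, code-only replies unless asked to.
```

Rules like this get prepended to the model's context, so keeping them short and specific tends to work better than long style guides.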

I've been using Codex pretty heavily for the past couple of months and overall I'm really impressed with it. Starting out, though, I did notice the same feature (or issue, depending on your perspective) where its responses are very concise. I typically like this, since it conserves tokens and minimizes the amount of junk I have to scroll through in the chat window. But sometimes I do have to tell it to elaborate, please.
