GLM 4.5 for default Auto model (stop editing my text, Cursor team, not cool)

The title is self-explanatory, so I'm editing this section…

I'm nearly positive Cursor is shadowbanning my posts, so I'm going to use this one since everyone can see it.

First of all, the devs said specifically, "if this gets enough engagement, we will consider this." I'm not sure what constitutes enough engagement, but I wanted to point out a few things on my mind that I can't figure out about Cursor:

  1. I used OpenRouter to test new models in Cursor and in VS Code extensions such as Roo Code. The day new models become available on OpenRouter, I tried both extensions, neither of which had had time to update: Roo Code works flawlessly every time, while Cursor's agent mode is extremely broken with the same models that work perfectly in other extensions' agent modes.

Even worse, using the same API key, Cursor says I am rate limited with my own API key and model… which makes no sense when I can still use it perfectly fine in the other extension. This tells me they are doing something really funky on purpose to any third-party model, but the question is "why?"
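One way to check whether a rate limit like that is coming from OpenRouter or from the editor is to build the same OpenAI-compatible request both tools ultimately send and fire it yourself. This is a minimal sketch, assuming OpenRouter's standard chat-completions endpoint; the model slug `z-ai/glm-4.5` and the placeholder key are assumptions, so check OpenRouter's model list for the exact id.

```python
import json

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str):
    """Return (url, headers, body) for a one-shot chat completion.

    The model slug passed in (e.g. "z-ai/glm-4.5") is an assumption here,
    not a verified id -- look it up on OpenRouter before using it.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return OPENROUTER_URL, headers, body

url, headers, body = build_chat_request("sk-or-...", "z-ai/glm-4.5", "hello")
# Sending this (via curl or urllib.request) and getting a 429 would mean
# OpenRouter itself is rate limiting the key; any other status would point
# at the editor integration instead.
```

If the raw request succeeds while Cursor still reports a rate limit with the same key, that narrows the problem to the editor side.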

I know why. I'm almost positive Cursor is getting special treatment from Anthropic. This is not necessarily a bad thing: when 3.7 came out, Cursor already worked with it perfectly on day one, and they said they'd had the model early. Anthropic also pulled their models from Windsurf for a time, which also plays into this theory.

This makes me believe Cursor runs their business to prioritize Anthropic models, to lower their costs and keep their investors happy, is my guess. And even though they say they don't, I simply don't believe them: they are nerfing other models on purpose.

I have copied and pasted code from outside Cursor, then used identical prompts inside Cursor, and gotten worse results. In my testing this is direct proof that Cursor is doing this. The only models this doesn't happen with are (shocker) Anthropic's.

Now, I have absolutely no issue with partnerships. In fact, I think it made Cursor amazing when we had 500 requests and unlimited slow mode; that was peak Cursor in my mind, and in some ways I want to go back to that. But as soon as 3.7 came out at the SAME API cost, all of a sudden everything got changed for no reason.

There is no reason in my mind that they couldn't have kept Cursor identical to how it was with the new models: charge more requests per use for Claude thinking, and, THIS IS KEY, just charge more requests the longer the conversation gets. That keeps the pricing the same, pushes users toward shorter conversations, and lets all users have full access to the full capabilities of the new models.

This is my fix: go back to request-based pricing with a context-based request multiplier, revert the pricing changes, bring back unlimited slow mode, make Auto mode better with new, much cheaper models, and remove Max mode entirely; that is the biggest scam ever.
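The context-based request multiplier proposed above could be as simple as charging one request per started slice of conversation context. A minimal sketch, where the 60k-token slice size is a made-up illustrative number, not Cursor's actual pricing:

```python
import math

def requests_charged(context_tokens: int, tokens_per_request: int = 60_000) -> int:
    """Charge one request per started slice of conversation context.

    The 60k-token slice size is an illustrative assumption, not anything
    Cursor actually uses. A short chat costs 1 request; a very long
    conversation costs proportionally more, which nudges users toward
    shorter conversations while keeping flat request-based pricing.
    """
    return max(1, math.ceil(context_tokens / tokens_per_request))
```

Under these assumed numbers, a 10k-token chat costs 1 request while a 150k-token chat costs 3, so heavy context pays for itself without per-token billing.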

Now, the reason they won't do this despite it being upvoted so much is simple: they can't. Their business partnerships and their need to make more money keep them from using a simpler, cheaper solution that would absolutely work.

Why let users use good models for free instead of having your rich users pay tons of usage-based pricing and just rack up the cash that way? You make no money giving good models away for free, but you would make an amazing product, which is what made Cursor blow up in the first place.

I have more ideas, but frankly I'm sick of posting things and then seeing that my posts are the only ones the devs ignore. It's actually hilarious: I can see all the latest posts, and mine are the only ones skipped over. I don't care. I want to make Cursor better, but if you want to make your product ■■■■ and ignore me and everyone else, just go ahead; I'm done trying to change your mind.

94 Likes

Would be nice if it was added - GLM-4.5: Reasoning, Coding, and Agentic Abilities

22 Likes

Hey, thanks for the feature request. If it gets enough votes, we’ll consider it.

16 Likes

premium auto

4 Likes

What would be nice is if Kimi-K2, Qwen3-Coder, and GLM 4.5 quickly got good integration in Cursor.

And let us select them specifically instead of blocking every model on rate limit.

These models are pretty cheap. I think this would give Cursor that "unlimited usage" look again.

26 Likes

I would greatly appreciate it if you could add this as soon as possible. Thank you very much.

3 Likes

Absolutely, just tried adding it as a custom model via OpenRouter. Brilliant at a fraction of Sonnet 4's price, and pretty much just as good.

2 Likes

Is everything working as it should? It seems that the latest update may have caused some issues. Whether this was intentional or not, this is a significant concern.

1 Like

Thank you all for the model suggestions. Please share more of your experience, and share this post with others so we can see whether this is widely requested, as each model requires preparation on our side.

If you notice issues with those models, we would appreciate hearing about that as well.

2 Likes

Hello,

Some people who've tried it say that GLM 4.5 is much, much better than Claude Sonnet 4 and Opus. I wonder when GLM 4.5 will be added? Because you can get better AI than Claude at coding for 1/10 the price. Isn't that amazing?

2 Likes

The GLM 4.5 model should definitely be integrated into Cursor, as it delivers performance nearly on par with Sonnet 4 and is significantly more affordable. Providing generous usage limits for this model in Cursor's subscription would offer substantial value to users. When the Cursor team introduces this model, I kindly ask that its performance not be throttled just because it's inexpensive; it's already efficient and budget-friendly.

Additionally, I haven't been getting the expected performance when using the Kimi K2 model. There are also issues with message continuity: after typing three characters, the cursor jumps to the next line, and outputs can reach up to 500 lines. This negatively impacts both readability and overall user experience.

7 Likes

I've added the feature request; it would be good to see who has tried it via API in Cursor and how it performed. Both positive and negative reports about your experiences help.

1 Like

The model does a great job of analyzing the code and making accurate conclusions, but there is an ongoing issue with the editing process. After each edit, you need to restart the chat in order to continue. The code itself is correct.

1 Like

I want to be sincere with you guys at Cursor: time is money. The faster you implement these models, the better.

Just run an internal benchmark against the current Auto mode with the proposed new model. If GLM 4.5 or 4.5 Air, whichever you think is best, performs better than the current Auto, IMMEDIATELY update it.
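The benchmark gate suggested above can be sketched very simply: score both models on the same task set and only swap if the challenger wins by a margin. Everything here is illustrative, not Cursor's actual process, and the 5% margin is an arbitrary buffer against benchmark noise.

```python
def should_swap(current_scores, challenger_scores, margin=0.05):
    """Swap the Auto model only if the challenger's mean score beats the
    incumbent's mean by at least `margin` on the same task set.

    Scores are per-task pass rates in [0, 1]. Both the function name and
    the 5% margin are made-up illustrations of the idea in this post.
    """
    if len(current_scores) != len(challenger_scores):
        raise ValueError("both models must run the same task set")
    cur = sum(current_scores) / len(current_scores)
    new = sum(challenger_scores) / len(challenger_scores)
    return new >= cur + margin
```

With a margin like this, a challenger averaging 0.78 against an incumbent's 0.70 gets swapped in, while a statistical tie leaves the current Auto model alone.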

I think the Cursor team is heavily underestimating the importance of small incremental improvements. You opt for these big updates, but when those big updates fall flat, or you make too many changes at once, it causes more problems than it solves.

Why not just release a 1.4 update saying "we made Auto mode 2x better"? You don't even need to say you made GLM the Auto model. Just say you improved it, and come out saying you listened to the complaints and are trying to make Cursor better since the pricing changes.

Listen… I saw you deleted my post where I gave you guys many suggestions, and then saw my suggestions speedily implemented within the next week. I know you're reading my comments, because this happens with every idea I've posted here.

Here's a thought: if you like my ideas so much, and they resonate with everyone else here, just hire me and I'll make Cursor way better. Seriously. You don't even need to pay me. I literally just want to join the team and improve your service and marketing, that's it. My email is [email protected]. I'll wait for your response, thank you.

Thank you for your feedback, we are definitely listening. There are incremental updates almost daily in several areas, not just the app; not all are equally visible or talked about :sweat_smile:

Have a look at our website for the roles we are hiring for right now.

3 Likes

Thanks, but I'm actually not looking to get hired. I do way better in my current job than anything Cursor can offer me.

I just want to get on a call with you guys to vastly improve your product. I use Cursor and I want to make it way better, for free, and I want to understand why you can't simply implement a lot of my ideas overnight to make Cursor way better.

If you could reach out to my email and set up a call with some of the Cursor engineers, that would be great. Even in a short 15-minute call I could shoot ideas back and forth with you, and those 15 minutes could be worth millions of dollars if my ideas work. Just consider it. Thanks.

2 Likes

Yes, I ran into that. I'm not sure if this is an issue with custom models missing some system prompt/integration that Cursor does for the natively provided models, or an issue with GLM 4.5 itself. I hope it's the former, as the model's performance is amazing from what I've seen so far, like another DeepSeek moment but competing with Claude's pricing. Feels like 95% of Sonnet for ~20% of the price.

1 Like

I tested and tried it. GLM is much, much better than Sonnet 4 and Opus, and at 1/10 the price. Why hasn't it been added to Cursor yet? Normally a model would be added the day it was released; GLM has been out for days and still hasn't been added.

Wow, what a nice gesture of you to charitably give those 15 minutes of your time to save the Cursor team MILLIONS of dollars. I'm sure they will be thrilled to have you on the phone; you seem like such a relatable and humble person!

2 Likes

For those who have tried both, how does GLM 4.5 fare against Qwen3-Coder? These two are the new talk of the town.