GLM 4.7 seems like a practical and cost-effective model—please add it to Cursor’s officially supported model list.
Benchmark Performance. A more detailed comparison of GLM-4.7 with other models (GPT-5, GPT-5.1-High, Claude Sonnet 4.5, Gemini 3.0 Pro, DeepSeek-V3.2, Kimi K2 Thinking) across 17 benchmarks (including 8 reasoning, 5 coding, and 3 agentic benchmarks) can be seen in the table below.
Actually, GLM's model didn't work well for me in reality. I'm Chinese and I bought GLM 4.5 last month. The capabilities of this model have been exaggerated.
Thanks for sharing your real-world experience! GLM 4.5 seems like a pretty mediocre model: it's cheap and just okay performance-wise. But after two versions, GLM 4.7 now costs $0.40/M input tokens and $1.50/M output tokens, which isn't really more expensive than 4.5 ($0.35/M input tokens, $1.55/M output tokens), and it performs way better. I'd say it's worth using.
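For a rough sense of what that pricing difference means in practice, here's a back-of-the-envelope sketch based only on the per-million-token prices above; the 100K-input / 10K-output request size is just an illustrative assumption, not a real workload.

```python
# Back-of-the-envelope cost comparison using the per-million-token prices above.
# The 100K input / 10K output request size is an illustrative assumption.
PRICES = {
    "GLM-4.5": {"input": 0.35, "output": 1.55},  # $ per 1M tokens
    "GLM-4.7": {"input": 0.40, "output": 1.50},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]

for model in PRICES:
    print(model, f"${request_cost(model, 100_000, 10_000):.4f}")
# GLM-4.5 -> $0.0505, GLM-4.7 -> $0.0550
```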
Their model is based on the MoE architecture, which is why it is so cheap. What I mean is, their model may look very impressive on those programming leaderboards, but in actual use it has its flaws. In contrast, Claude Sonnet or GPT-5 are much more versatile.
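To illustrate why MoE keeps inference cheap: each token is routed to only a small top-k subset of experts, so the active parameter count per token is a fraction of the total. Here's a toy sketch of that routing idea; the layer sizes and expert counts are made up for illustration and are not GLM's actual configuration.

```python
import torch
import torch.nn as nn

# Toy top-k MoE layer: each token runs through only k of n_experts,
# so per-token compute scales with k/n_experts of the total expert parameters.
class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.k = k

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x)                     # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # pick k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e            # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

# With k=2 of 8 experts active, each token touches ~25% of the expert weights,
# which is roughly why a huge MoE model can be priced like a much smaller dense one.
```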
Yes, both GLM and Minimax seem pretty decent for the price. I think it depends on how well they behave within Cursor's ecosystem, and that determines whether they get added; staff mentioned this a lot in the Kimi threads. I hope they are added, but honestly I would appreciate bug fixes and stability improvements first.
Lately I think the Cursor team is ignoring the open-source models; no news on GLM or Minimax. Meanwhile they support 20+ OpenAI models, which are confusing and too slow to be usable.
Would be great to get some cost-effective open-source models back in Cursor.
GLM 4.6V does accept images and is able to analyze them (kinda slow though)
4.5 Air doesn't; it returns:
{"error":{"type":"provider","reason":"provider_error","message":"Provider returned 400","retryable":false,"provider":{"status":400,"body":"{\"error\":{\"code\":\"1210\",\"message\":\"Invalid API parameter, please check the documentation.\"}}"}}}
So you can use the vision models without MCPs right there.
So it's not using some internal Cursor tool but correctly its own vision capabilities (or lack thereof, depending on the model).
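For anyone who wants to probe a model's vision support directly (outside Cursor), here's a rough sketch against an OpenAI-compatible chat endpoint; the base URL, model id, and image URL are placeholder assumptions, not confirmed values, and a model without vision support will typically reject the request with a 400 like the one pasted above.

```python
# Hedged sketch: probe whether a model accepts image input on an
# OpenAI-compatible endpoint. BASE URL and MODEL id are placeholder assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_KEY")  # placeholder endpoint

try:
    resp = client.chat.completions.create(
        model="glm-4.6v",  # assumed model id; adjust to the provider's actual naming
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)
except Exception as e:
    # Models without vision support tend to fail here, e.g. a 400 "Invalid API parameter".
    print("Request failed:", e)
```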