Hi!
I’m using the GLM-4.7 model with the OpenAI endpoint and API key replaced. I understand this is a bit of a hack, but you promised support for custom models.
First, why can’t I use other models (Claude, Gemini, Composer, Grok, Kimi, etc.) if I’ve only changed the OpenAI endpoint? It’s fine that ChatGPT isn’t available in this case, but the other models should be. This is just as strange as not being able to add any model I want: OpenRouter, Qwen, or anything else.
But today the situation got much worse: I can’t use a custom model at all anymore. I get the error “Invalid model. The model GLM-4.7 does not work with your current plan or api key.”
I have a Pro Plus subscription, and now I’m seriously considering switching IDEs because of these issues. I have no idea why you limit paying customers to the built-in models only, just to resell tokens?
Steps to Reproduce
Use a custom model with the API key and endpoint entered in the OpenAI settings
Operating System
Windows 10/11
Current Cursor Version (Menu → About Cursor → Copy)
This is a known issue. When “Override OpenAI Base URL” is enabled, it affects all API keys and models, including Cursor’s built-in models (Claude, Gemini, etc.). The team is working on a fix, but for now here’s a workaround:
Turn off “Override OpenAI Base URL” when you want to use Cursor’s standard models
Turn it back on only when you need GLM-4.7
Switch it manually depending on which model you’re using
Hi! Thank you for the answer.
But we have two issues here. The first is that the other models get overridden; the second is “Invalid model. The model GLM-4.7 does not work with your current plan or api key”. The second one makes Cursor totally unusable with an external model.
Also, if I follow the workaround for the first issue, each time I need to enable the custom model I have to enter the endpoint manually again; it doesn’t get saved.
Yes, the endpoint not being saved when you toggle the switch is also a known bug. The team is aware and is working on fixing the whole base URL override system.
Unfortunately, the current workaround is to manually enter the endpoint every time. The only alternative is to save the endpoint in a note or text file and copy it in when you turn the toggle on.
I know that’s inconvenient, which is why we’re planning to add the option to set a separate base URL for each custom model. That should fix both issues.
I have the same issue. GLM-4.7 works with my Ultra subscription, but flipping the switch every time is a pain. Cursor should use GLM-4.7 to replace the lousy Composer1, by the way…
This is an issue on your side; GLM should work unless you’re on the free plan (BYOK doesn’t work on the free plan) or you pasted the key/endpoint incorrectly.
As for re-entering the endpoint each time: disable the “OpenAI API key” toggle to use Cursor’s models, not the “Override…” toggle. That way your endpoint stays saved.
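If you want to rule out a bad key or endpoint before blaming Cursor, you can reproduce the request it sends yourself. This is a minimal sketch, assuming your provider exposes a standard OpenAI-compatible /v1/chat/completions endpoint; the BASE_URL and API_KEY values are placeholders, not real credentials:

```python
import json

# Placeholders -- substitute your provider's real values.
BASE_URL = "https://example.com/v1"  # hypothetical OpenAI-compatible endpoint
API_KEY = "sk-placeholder"           # your provider API key

# Minimal OpenAI-compatible chat payload; the model name must match
# exactly what your provider expects (here "GLM-4.7", as in this thread).
payload = {
    "model": "GLM-4.7",
    "messages": [{"role": "user", "content": "ping"}],
}

# To test the endpoint outside Cursor, run the equivalent request by hand:
#   curl -s "$BASE_URL/chat/completions" \
#        -H "Authorization: Bearer $API_KEY" \
#        -H "Content-Type: application/json" \
#        -d '{"model": "GLM-4.7", "messages": [{"role": "user", "content": "ping"}]}'
print(json.dumps(payload))
```

If the curl call succeeds but Cursor still reports “Invalid model”, the problem is in Cursor’s model validation rather than in your key or endpoint.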
Also facing the same issue today. Oddly enough, GLM-4.7 was working just fine until this morning; then I restarted Cursor for an update and now it doesn’t work. I’m on a Pro plan. Cursor Version: 2.4.22 (Universal)
Request ID: 5600d4cf-f01b-4d5b-8861-5eefe3107f76
AI Model Not Found Model name is not valid: “GLM-4.7”
F4t: AI Model Not Found Model name is not valid: “GLM-4.7”
at Gmf (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:9095:38263)
at Hmf (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:9095:37251)
at rpf (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:9096:4395)
at fva.run (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:9096:8170)
at async Hyt.runAgentLoop (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:34196:57047)
at async Zpc.streamFromAgentBackend (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:34245:7695)
at async Zpc.getAgentStreamResponse (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:34245:8436)
at async FTe.submitChatMaybeAbortCurrent (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:9170:14575)
at async Ei (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:32994:3808)
I had the same problem on an older version, around 2.4.28. Now I’ve upgraded to 2.4.31 and get a new error message: “Free plans can only use Auto. Switch to Auto or upgrade plans to continue.” Is this still a bug, or do I need to upgrade my plan to use custom models?
Don’t bother, I have a subscription and my OpenRouter API key is still broken in Version 2.4.31.
I ended up installing the Roo Code extension to use my OpenRouter key. I’ve also started using Antigravity; it’s still in preview, so it has higher limits on the free plan.