I configured a custom OpenAI endpoint with an API key and added custom models hosted by my organisation. Adding “kimi-k2.5” is not possible: Cursor says the model is already available and forces its own deployment. I didn’t even notice I was consuming my API limit until today… Adding “glm-5-fp8” and “minimax-m2.5” worked, but then they fail with a model name validation error when I send a prompt. Why does it even validate the name?
Steps to Reproduce
I think I described it above.
Expected Behavior
I would expect it to be possible to configure custom models; right now it isn’t.
Hey, thanks for the report. There are two separate issues here, and we’re aware of both:
kimi-k2.5 gets blocked as already available
Cursor compares your custom model name against the built-in model catalog, and kimi-k2.5 matches our built-in version. As a workaround, try adding the model with a slightly different name, like kimi-k2.5-custom or my-kimi-k2.5. If your endpoint handles that name correctly, or ignores it and uses the default model, this should work.
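If it helps, here’s a minimal way to sanity-check the renamed model outside Cursor before wiring it in, assuming your endpoint speaks the OpenAI-compatible API (the URL, key, and alias below are placeholders for whatever your org uses):

```python
# Sketch: verify the endpoint accepts the aliased model name.
# base_url, api_key, and the alias "kimi-k2.5-custom" are
# placeholders -- substitute your organisation's values.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.example.org/v1",  # your org's endpoint (placeholder)
    api_key="sk-...",                       # your org's API key (placeholder)
)

resp = client.chat.completions.create(
    model="kimi-k2.5-custom",  # renamed alias your endpoint maps to kimi-k2.5
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=8,
)
print(resp.choices[0].message.content)
```

If that call succeeds, the same alias should work from Cursor’s custom model settings.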
glm-5-fp8 and minimax-m2.5 show “Model name is not valid”
This is server-side validation rejecting model names that aren’t in Cursor’s catalog. The same issue is discussed here:
Unfortunately, there isn’t a working workaround for the second issue right now. I’ll pass this to the team. Since the earlier reports, it’s still happening, and your report helps raise the priority.
Let me know if the kimi-k2.5 workaround works for you.
+1, having this issue now with error message:
Model name is not valid: “claude-opus-4-6”
This model name was working a few days ago. I didn’t change anything on my end, so it’s probably an issue on the Cursor side. And it’s blocking any work.
Hey. Unfortunately, there’s no working workaround for server-side validation right now. The backend rejects model names that aren’t in the Cursor catalog, even if you’ve set a custom endpoint and API key.
I also can’t share anything specific about the fix timeline. The issue is being tracked, but I don’t have an ETA. This is clearly not a one-off problem; see “BYOK, can’t add custom model” and “OpenRouter models error”.
As soon as there’s an update on the fix, I’ll post it here.
I have the same issue. I can set up pretty much any arbitrary name, “qwen-3.6 27B” or “local-model”, and it will work for a few hours and then start getting “Model name is not valid”. The thing I wish I could communicate to the Cursor dev team is that whatever default model harness is applied to a new, unrecognized name, the one that uses plain OpenAI-compatible calls, works pretty well. In my case I have qwen-3.6 27B running on llama.cpp behind LiteLLM and was getting outstanding results with Cursor… until it breaks. So just let us use the defaults in a “generic model” mode rather than using heuristics or whatever you’re doing server-side to try to detect models and adjust around them.
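For reference, by “plain OpenAI-compatible calls” I mean nothing fancier than this. A rough sketch of how I verify my local stack outside Cursor (the port is LiteLLM’s default and the model name is just my setup; adjust both for yours):

```python
# Sketch: confirm the llama.cpp + LiteLLM stack answers plain
# OpenAI-compatible requests. Port 4000 is LiteLLM's default;
# the api_key is unused by a local proxy but required by the client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="dummy")

# List the model names the proxy exposes -- these are the exact
# strings Cursor would need to accept as-is.
for m in client.models.list():
    print(m.id)

resp = client.chat.completions.create(
    model="qwen-3.6 27B",  # my local model name; any string LiteLLM routes
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```

This works perfectly against the local stack, which is why the server-side name check feels unnecessary for custom endpoints.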