Cursor is blocking all custom models

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I configured a custom OpenAI endpoint with an API key and added custom models hosted by my organisation. Adding “kimi-k2.5” is not possible: Cursor says the model is already available and forces its own deployment. I didn’t even notice I was consuming my API limit until today… Adding “glm-5-fp8” and “minimax-m2.5” worked, but any prompt sent to them fails with a model name validation error. Why does Cursor validate the name at all when I’ve supplied my own endpoint?

Steps to Reproduce

See the description above: configure a custom OpenAI endpoint with an API key, try to add “kimi-k2.5” (blocked as already available), then add “glm-5-fp8” or “minimax-m2.5” and send a prompt (model name validation error).

Expected Behavior

I would expect configuring custom models to be possible; right now it isn’t.

Screenshots / Screen Recordings

Operating System

MacOS

Version Information

Version: 2.6.21 (Universal)
VSCode Version: 1.105.1
Commit: fea2f546c979a0a4ad1deab23552a43568807590
Date: 2026-03-21T22:09:10.098Z
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Darwin arm64 25.3.0

Does this stop you from using Cursor

Yes - Cursor is unusable

Hey, thanks for the report. There are two separate issues here, and we’re aware of both:

  1. kimi-k2.5 gets blocked as already available

Cursor compares your custom model name against the built-in model catalog, and kimi-k2.5 matches our built-in version. As a workaround, try adding the model under a slightly different name, like kimi-k2.5-custom or my-kimi-k2.5. If your endpoint routes that name correctly, or ignores it and falls back to its default model, this should work (there’s a quick way to verify this outside Cursor sketched further down in this reply).

  2. glm-5-fp8 and minimax-m2.5 show “Model name is not valid”

This is server-side validation rejecting model names that aren’t in Cursor’s catalog. The same issue is discussed here:

Unfortunately, there isn’t a working workaround for the second issue right now. I’ll pass this to the team; it has persisted since the earlier reports, and your report helps raise its priority.
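
If you want to sanity-check the rename workaround outside of Cursor first, here’s a minimal sketch using the openai Python client, assuming your endpoint speaks the OpenAI-compatible chat completions API. The base URL, API key, and model alias below are placeholders for your own values:

```python
# Minimal sketch: verify a renamed custom model against your own
# OpenAI-compatible endpoint, independent of Cursor.
# The base_url, api_key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.example-org.internal/v1",  # your custom endpoint
    api_key="sk-your-org-key",                       # your API key
)

# If the endpoint routes (or ignores) the aliased name, this succeeds,
# and Cursor should be able to use the same alias.
response = client.chat.completions.create(
    model="kimi-k2.5-custom",  # the renamed model from the workaround
    messages=[{"role": "user", "content": "Reply with OK."}],
    max_tokens=8,
)
print(response.choices[0].message.content)
```

If this call succeeds with the aliased name, the same alias should work when added as a custom model in Cursor.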

Let me know if the kimi-k2.5 workaround works for you.

+1, having this issue now with error message:
Model name is not valid: “claude-opus-4-6”

This model name was working a few days ago. I didn’t change anything on my end, so it’s probably an issue on the Cursor side. And it’s blocking all work.

I echo the same issue! I’m seeing the same error message: Model name is not valid: “claude-opus-4-6”

I tried the suggested workaround for the first issue, but I’m now hitting the second issue: server-side validation.

This completely breaks support for BYOK providers, and there are no other workarounds available. Do you have an estimated timeline for a fix?

@deanrie Are there any other workarounds? Do you have an estimated timeline for a fix?

Hey. Unfortunately, there’s no working workaround for server-side validation right now. The backend rejects model names that aren’t in the Cursor catalog, even if you’ve set a custom endpoint and API key.

I also can’t share anything specific about the fix timeline. The issue is being tracked, but I don’t have an ETA. This is clearly not a one-off problem; see the related threads “BYOK, can’t add custom model” and “OpenRouter models error”.

As soon as there’s an update on the fix, I’ll post it here.

I echo the same issue! I’m seeing the same error message: Model name is not valid: “gemma4:latest”

I have the same issue. I can set up pretty much any arbitrary name, “qwen-3.6 27B” or “local-model”, and it will work for a few hours and then start returning “Model name is not valid”.

The thing I wish I could communicate to the Cursor dev team is that the default harness applied to a new, unrecognized name, the one that just makes plain OpenAI-compatible calls, works pretty well. In my case I have qwen-3.6 27B running on llama.cpp behind LiteLLM and was getting outstanding results with Cursor… until it breaks. So just let us use the defaults in a “generic model” mode rather than applying heuristics, or whatever you are doing server-side, to detect models and adjust around them. For reference, the kind of generic call that works against my setup is sketched below.
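
Roughly, this is the shape of request that works fine against my LiteLLM proxy until the name gets blocked; the proxy URL, key, and model name are placeholders for my local values:

```python
# Rough sketch of the generic OpenAI-compatible request that works
# against my llama.cpp + LiteLLM setup until the name gets blocked.
# The proxy URL, API key, and model name are placeholders.
import requests

resp = requests.post(
    "http://localhost:4000/v1/chat/completions",  # LiteLLM proxy (placeholder)
    headers={"Authorization": "Bearer sk-local-key"},
    json={
        "model": "local-model",  # any arbitrary name the proxy maps or ignores
        "messages": [{"role": "user", "content": "Reply with OK."}],
        "max_tokens": 8,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Nothing model-specific is needed on the client side, which is why a plain “generic model” mode that just sends this shape of request would be enough.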