Cursor Cloud Agents API "Auto" model not working

I am trying to use the Cursor Cloud Agents API (Cloud Agents API | Cursor Docs). When I set the model to "auto" or "Auto", I get: Cursor API error (400): {"error":"Model 'Auto' is not available or invalid."}

I am not sure why this model is listed in the model selection on the web page when it does not work through the API. Is anyone else having this issue?
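For reference, a minimal sketch of the failing request. The base URL and the payload shape are assumptions based on the docs; only the model value and the error message are from this thread:

```python
import json

API_URL = "https://api.cursor.com/v0/agents"  # assumed base URL from the docs


def build_agent_request(prompt_text, model="auto"):
    """Build a JSON payload for POST /v0/agents (payload shape is an assumption)."""
    return {
        "prompt": {"text": prompt_text},
        "model": model,  # "auto" and "Auto" both produce the 400 quoted above
    }


payload = build_agent_request("Fix the failing tests", model="auto")
print(json.dumps(payload))
# POSTing this payload returns:
# 400 {"error":"Model 'Auto' is not available or invalid."}
```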


Hey @Baris!

Where do you see Auto listed on https://cursor.com/agents?

Would be great if you could share a screenshot. :slight_smile:

Hey @Colin,

I am not using the desktop app; I am working directly with the API. To reproduce it, call the endpoint with the optional parameter model: 'auto'. I would also add that the /models endpoint, which returns the available models, gives:

(Screenshot: /models endpoint response, 2026-02-19)

As you can see, the models endpoint returns those models. In addition, the /v0/agents endpoint can also be called with gemini-3-flash, even though that model is not in the list. So why can we not call it with 'Auto', which is listed next to Gemini-3 Flash among the available models on the Cursor web page?

You can always use /v0/models to get the list of models that you can use with Cloud Agents.

The API docs are a bit confusing referencing “Auto” – what they mean is that you can call POST /v0/agents without a model, and we’ll route it to the most appropriate model. It does not mean that auto can be used as a model key.

Hello Colin,

I don’t think this is accurate, because /v0/models returns:

{"models":["composer-1.5","claude-4.6-opus-high-thinking","gpt-5.3-codex-high","gpt-5.2-high"]}

However, Gemini-3 Flash can also be used as a model parameter inside the payload. This highlights the unreliability of the /models endpoint, which fails to return all models.
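To illustrate the point being made: membership in the /v0/models response is not a reliable availability check. The JSON below is copied from the response above; treating it as the full set would wrongly reject gemini-3-flash, which the API accepts:

```python
import json

# Response body copied from the /v0/models call earlier in the thread
models_response = (
    '{"models":["composer-1.5","claude-4.6-opus-high-thinking",'
    '"gpt-5.3-codex-high","gpt-5.2-high"]}'
)
listed = set(json.loads(models_response)["models"])

for candidate in ("gpt-5.2-high", "gemini-3-flash"):
    if candidate in listed:
        print(f"{candidate}: listed")
    else:
        # Not in the list, but /v0/agents may still accept it
        print(f"{candidate}: not listed")
```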

Hey again!

Looks like the docs have drifted from the implementation here, and I need to correct my last answer!

  • To use “auto,” you should pass model: "default" in your API request.

  • Omitting the model parameter doesn’t actually use Auto. If you leave model out of the request, the API resolves from your user or team settings, eventually falling back to a fixed (but subject to change) default model.

  • As you noticed, you can use models beyond what GET /v0/models returns. That endpoint returns a curated, recommended subset, but the API accepts other model keys too.
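Putting the corrected behaviour together, a hedged sketch. The base URL, header names, and payload shape are assumptions; the model: "default" key and the settings-based fallback when model is omitted are from the answer above:

```python
import json
import urllib.request

API_URL = "https://api.cursor.com/v0/agents"  # assumed base URL


def launch_agent(api_key, prompt_text, model="default"):
    """Build a POST /v0/agents request.

    model="default" requests Auto routing (per the correction above).
    Pass model=None to omit the key and fall back to user/team settings.
    """
    payload = {"prompt": {"text": prompt_text}}
    if model is not None:
        payload["model"] = model
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Caller would send with: urllib.request.urlopen(launch_agent(key, "..."))
```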

We’ll get the docs updated to cover all of this. Thanks for flagging it.

On the web, how can I use Auto?