Agent mode with OpenAI Base URL override sends request without model (model is required)

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I've deployed Ollama on my LAN server and used Nginx Proxy Manager to point my domain at it. But when I try to use a local model, I get an error.

Steps to Reproduce

Settings → Models → API Keys → OpenAI API key:

Models → add a custom model, trying several names suggested by Cursor Agent:

  • qwen3-8b
  • qwen3-8b:latest
  • openai/qwen3-8b:latest

None of them work.

My Ollama model list (`ollama list`):

```
NAME                                       ID              SIZE      MODIFIED
openai/qwen3-8b:latest                     500a1f067a9f    5.2 GB    46 minutes ago
qwen3-8b:latest                            500a1f067a9f    5.2 GB    51 minutes ago
deepseek-coder:6.7b                        ce298d984115    3.8 GB    About an hour ago
deepseek-v4-flash:cloud                    ea027821675c    -         2 hours ago
qwen3.6:27b                                a50eda8ed977    17 GB     35 hours ago
gemma4:latest                              c6eb396dbd59    9.6 GB    36 hours ago
qwen3:8b                                   500a1f067a9f    5.2 GB    36 hours ago
qwen2.5-coder:7b                           dae161e27b0e    4.7 GB    39 hours ago
qwen2.5-coder:14b                          9ec8897f747e    9.0 GB    39 hours ago
jobautomation/OpenEuroLLM-Polish:latest    c318cc440788    8.1 GB    39 hours ago
gemma4:e2b                                 7fbdbf8f5e45    7.2 GB    39 hours ago
gemma4:31b                                 6316f0629137    19 GB     39 hours ago
```

Expected Behavior

With the OpenAI API key and base URL override set, I expect Cursor to use my local model. My Azure OpenAI setup works fine, but I want this to work with my local deployment.

Screenshots / Screen Recordings

Operating System

Linux

Version Information

Version: 3.3.12
VSCode Version: 1.105.1
Commit: 75c0dfd29aecf2cc208dbaf761d5cc459c601aa0
Date: 2026-05-06T03:47:52.249Z
Layout: editor
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Linux x64 6.17.0-23-generic

For AI issues: which model did you use?

Every model from my local Ollama deployment behaves the same. For this bug I used qwen3-8b:latest.

For AI issues: add Request ID with privacy disabled

4df39f6f-c1f9-4569-a310-57280e4457d2
1ceacd9b-4d92-4d9c-a717-540fdca7b366
375b9e26-3066-4aec-a6b4-cb443e7f4276
5a172980-370e-4b0a-99f6-b2b91cf0e08f

Additional Information

Created by Cursor Agent after a debugging session.

Cursor version: 3.3.12
OS: Linux Ubuntu Desktop 24.04
Config:

  • OpenAI Base URL override: https://api-ollama.studio-colorbox.com/v1
  • API key: non-empty dummy
  • Selected model: openai/qwen3-8b:latest (also tested qwen3:8b, qwen3-8b:latest)

Error:

```
ERROR_OPENAI
Request failed with status code 400: {"error":{"type":"client","reason":"invalid_input","message":"model is required","retryable":false}}
```

Request IDs:

  • 4df39f6f-c1f9-4569-a310-57280e4457d2
  • 1ceacd9b-4d92-4d9c-a717-540fdca7b366
  • 375b9e26-3066-4aec-a6b4-cb443e7f4276
  • 5a172980-370e-4b0a-99f6-b2b91cf0e08f

Evidence:

  • The backend endpoint works with a manual curl: GET /v1/models => 200, and POST /v1/chat/completions with an explicit "model" => 200.
  • The problem appears only in Agent mode (runAgentLoop stack trace), despite a model being selected.
  • This suggests Cursor's request builder is dropping/omitting the model field for this flow.
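The failing and working requests differ only in whether the body carries the `model` field. A minimal Python sketch of what a correct OpenAI-compatible request body must contain (`build_chat_request` is a hypothetical helper written for illustration, not Cursor's actual code):

```python
import json

# Base URL from this report; only used here for documentation.
BASE_URL = "https://api-ollama.studio-colorbox.com/v1"


def build_chat_request(model: str, messages: list) -> dict:
    """Build the JSON body for POST {BASE_URL}/chat/completions.

    Ollama's OpenAI-compatible endpoint answers 400
    '{"error":{...,"message":"model is required",...}}' when the body
    lacks "model", so a request builder must refuse to emit one.
    """
    if not model:
        raise ValueError("model is required")  # mirror the provider's 400
    return {"model": model, "messages": messages}


body = build_chat_request(
    "qwen3-8b:latest",
    [{"role": "user", "content": "hello"}],
)
print(json.dumps(body))
```

A body built this way matches the manual curl that returns 200; omitting `model` reproduces the 400 seen in Agent mode.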

Does this stop you from using Cursor

Yes - Cursor is unusable

Hey, thanks for the detailed report with request IDs and logs. That really helped.

This is a known bug on our side. With BYOK via Override OpenAI Base URL, the model field gets lost on the way to the provider, which is why you see 400 "model is required". The fact that your manual curl to the same Ollama endpoint returns 200 when you explicitly include model confirms that Cursor isn’t adding the model field to the request body for this flow. We’re tracking it, but I can’t share an ETA for a fix yet.

There isn’t a clean workaround for Agent mode with local Ollama via Override URL right now. As you noticed, Azure BYOK works for you since it uses a different code path, so you can stick with that for now while we fix the OpenAI-compatible path.

I’ll post an update here once there’s progress on the fix.

That's good news :slight_smile: I'll be waiting for updates from your side. BTW, Cursor Agent itself debugged all of these problems :wink: