Failed verification of OpenAI API key on custom API endpoint

I’m working on a project where privacy is the top priority, so I’m trying to use a private LLM service instead of OpenAI or Claude.

I tried to override the OpenAI base URL and added my API key there.

But I keep getting a "(status code 0) TypeError: Failed to fetch" error when I verify the API key.

Strangely, when I run the same request via curl, I get the results back just fine. It’s only Cursor that can’t get a response from the API for some reason.

Can this be fixed?

Which service are you trying to connect to?

It’s a public endpoint created with ngrok, pointing to Ollama.
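For reference, my setup is roughly this (11434 is Ollama’s default port; adjust if yours differs):

# Ollama serving locally (defaults to http://127.0.0.1:11434)
ollama serve

# Public tunnel; the generated https://xxxx.ngrok-free.app URL, with /v1
# appended, is what goes into Cursor's OpenAI base URL override
ngrok http 11434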

Hi,

Did you find a solution to this problem?

Thanks!

Hey, what error are you getting?

Hi Dan, thanks for replying.

These are the values I set in the OpenAI API key configuration:

Model: ollama

URL: https://-----.ngrok-free.app/v1

However, I get the following error:

(status code 0)
TypeError: Failed to fetch

curl https://--------.ngrok-free.app/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer ollama" -d '{
  "messages": [
    {
      "role": "system",
      "content": "You are a test assistant."
    },
    {
      "role": "user",
      "content": "Testing. Just say hi and nothing else."
    }
  ],
  "model": "deepseek-r1:1.5b"
}'

When I check the ngrok logs, I see this response:

HTTP Requests

16:45:20.677 -04 OPTIONS /v1/chat/completions 403 Forbidden
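So the request that fails is the CORS preflight, which curl never sends. To reproduce what Cursor’s in-app fetch does, the preflight can be simulated by hand (the Origin value below is just a placeholder I made up):

curl -i -X OPTIONS https://--------.ngrok-free.app/v1/chat/completions \
  -H "Origin: https://example.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: authorization,content-type"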

However, when I run the request via cURL in the CLI, it works perfectly:

{"id":"chatcmpl-91","object":"chat.completion","created":1738266602,"model":"deepseek-r1:1.5b","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"\u003cthink\u003e\nAlright, the user wants me to test how they're interacting with my chat interface.\n\nI should wait for their input before I respond.\n\nMaybe after a few sentences, they can just confirm whether I'm ready or not.\n\nThis way, it keeps the conversation flowing smoothly without any abrupt stops.\n\u003c/think\u003e\n\nGreat! Just tell me anything. I'll be happy to assist you or talk about something else with you."},"finish_reason":"stop"}],"usage":{"prompt_tokens":18,"completion_tokens":85,"total_tokens":103}}

It seems like the issue is in how that OPTIONS preflight is being handled: curl sends the POST directly and succeeds, while Cursor’s fetch triggers a preflight that gets rejected before the real request is ever made.

Any ideas on what could be causing this?
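One thing worth trying, assuming the 403 on OPTIONS is coming from Ollama’s CORS allow-list rather than from ngrok itself: Ollama only accepts cross-origin requests from origins listed in its OLLAMA_ORIGINS environment variable, so relaxing it might let the preflight through. A sketch:

# Restart Ollama allowing any origin (tighten this once it works;
# "*" lets any page send requests to the endpoint)
OLLAMA_ORIGINS="*" ollama serve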

Good news! After today’s update, I still see the error when trying to verify it, but I can now use the model locally!

Thanks anyway for your help!


Exactly the same issue here. Which version of Cursor are you on now?

Hey, same issue here. I’m seeing an error in the LM Studio server log (LM Studio uses llama.cpp to serve my local model) when I try to verify the model:

2025-02-04 12:29:41 [DEBUG]
Received request: OPTIONS to /v1/chat/completions
2025-02-04 12:29:41 [ERROR]
'messages' field is required

And likewise, it works perfectly when I run curl from the CLI.
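That log suggests the server is parsing the OPTIONS preflight as if it were a chat completion request, instead of answering it with CORS headers. It should be reproducible without Cursor by sending the preflight yourself (localhost:1234 is LM Studio’s default server address, which I’m assuming here):

curl -i -X OPTIONS http://localhost:1234/v1/chat/completions \
  -H "Origin: https://example.com" \
  -H "Access-Control-Request-Method: POST"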


I have the same issue here.

Same issue

Same issue here