"Urgent issue with the private API."

Describe the Bug

Hello, good morning. I simply can’t use my API key within Cursor, even though it’s a compatible one. When I click on “Verify API,” nothing gets verified. I’ve also noticed that several users can’t use their own OpenAI API key with Cursor, let alone one from OpenRouter. I’d like to know the estimated time to resolve this issue, as I want to take some load off my usage by serving responses through a custom API key.

Steps to Reproduce

Trying to use the Chutes AI API key.

Expected Behavior

Verification should succeed; it worked before and then suddenly stopped out of nowhere.

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.4.2 (user setup)
VSCode Version: 1.99.3
Commit: 07aa3b4519da4feab4761c58da3eeedd253a1670
Date: 2025-08-06T19:23:39.081Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26200

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

Hi @decsters01, and thank you for the bug report.

Could you clarify which error message you receive in Cursor?

This has been happening for months. I thought the problem was the provider, but it really isn’t; it’s Cursor itself. See the links below, which were never resolved.

And it’s the same error: you click to verify the API and it doesn’t switch from off to on.

Thank you for the links, but the error message is still not clear from them.

Could you clarify which error message you receive so we can check the issue?

Good morning, my friend. It simply doesn’t show any error message. It’s very simple: when I paste my base URL from Chutes AI or from OpenRouter, I also enter my API key and try to verify it so I can turn it on and start using other models, but the system just runs for a moment and then stops, you know? It’s the same problem as in the other links. Please give it some attention.

Thank you, I have notified the team about this issue.

Added it to the bug report.

@decsters01 could you please go to Help → Toggle Developer Tools and check the failing request in the Network tab? It will be a call to a /chat/completions endpoint. Please let me know the error you receive there.

Response:

  {"detail": "model not found: gpt-5"}

Request payload (to https://llm.chutes.ai/v1/chat/completions):

  {
    "max_tokens": 10,
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Test prompt using gpt-3.5-turbo"}
    ],
    "model": "gpt-5",
    "stream": false,
    "temperature": 1
  }
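(For anyone debugging the same thing: the failing call can be replayed outside Cursor to confirm that the key and endpoint work and that only the model name is the problem. A minimal Python sketch, assuming the endpoint shown above, an environment variable named CHUTES_API_KEY holding the key, and a placeholder model name; the variable and model names are assumptions, not taken from this thread.)

  import json
  import os
  import urllib.error
  import urllib.request

  # Replay the request Cursor sends on "Verify API", but with a model name
  # the provider actually serves (placeholder below; check your provider).
  BASE_URL = "https://llm.chutes.ai/v1"
  API_KEY = os.environ["CHUTES_API_KEY"]  # assumed env var name

  payload = {
      "model": "example-model-id",  # hypothetical placeholder
      "max_tokens": 10,
      "stream": False,
      "temperature": 1,
      "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Test prompt"},
      ],
  }

  req = urllib.request.Request(
      f"{BASE_URL}/chat/completions",
      data=json.dumps(payload).encode("utf-8"),
      headers={
          "Authorization": f"Bearer {API_KEY}",
          "Content-Type": "application/json",
      },
      method="POST",
  )

  try:
      with urllib.request.urlopen(req) as resp:
          print(resp.status, resp.read().decode("utf-8"))
  except urllib.error.HTTPError as err:
      # An error body like {"detail": "model not found: ..."} here points at
      # the model name, not at the key or at Cursor.
      print(err.code, err.read().decode("utf-8"))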

Now I finally understand exactly how to solve the problem. This issue had been bothering me for months, honestly. It comes down to the following:
First, you deactivate all active models and leave only the single model that actually exists on the API you’re trying to access.
After that, you make the call, and a normal response comes back.
I can’t believe that was the solution all along. I thought the problem was on your end by default, but in reality it was between the chair and the computer; in other words, me.
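(A related check, in case it helps anyone else: the models a provider actually serves can be listed from the OpenAI-compatible /models endpoint before deciding which single model to leave enabled in Cursor. A small sketch under the same assumptions, with CHUTES_API_KEY as a placeholder environment variable name.)

  import json
  import os
  import urllib.request

  # List the models the provider exposes, so only one of those stays
  # enabled in Cursor before clicking "Verify API".
  BASE_URL = "https://llm.chutes.ai/v1"
  API_KEY = os.environ["CHUTES_API_KEY"]  # assumed env var name

  req = urllib.request.Request(
      f"{BASE_URL}/models",
      headers={"Authorization": f"Bearer {API_KEY}"},
  )

  with urllib.request.urlopen(req) as resp:
      data = json.loads(resp.read().decode("utf-8"))

  # The OpenAI-compatible schema returns {"data": [{"id": ...}, ...]}.
  for model in data.get("data", []):
      print(model.get("id"))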


I think there are two parts here. It’s good to know that enabling the right model helps validation succeed; that makes sense, since we validate access to a specific model.

On the other hand, I have passed your error along to the team, which should help them figure out what to improve in the app.

Thank you kindly for testing this further.

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.