Set Custom OpenAI API Key

Hi team,

Our organisation has hosted OpenAI models behind an endpoint. I have entered the OpenAI API key along with the OpenAI base URL.

When I hit the "Verify" button, I get an error.

The test curl command that was generated is shared below:

curl -H "Content-Type: application/json" -H "Authorization: Bearer " -d '{
  "messages": [
    {
      "role": "system",
      "content": "You are a test assistant."
    },
    {
      "role": "user",
      "content": "Testing. Just say hi and nothing else."
    }
  ],
  "model": "default"
}'

I think I understand why this is happening: the "default" value in the model field needs to be changed to something else. How do I change "default" to something else?

Do you have any OpenAI model enabled in the settings?

I had unchecked all OpenAI models in settings.

You must have at least one OpenAI model checked in order for the verification to work.
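
For what it's worth, my understanding (an assumption on my part, not documented Cursor behaviour) is that once an OpenAI model such as gpt-4o is checked, the verification request fills the model field with that model's name instead of "default". Roughly, using the official openai Python client with a placeholder key:

from openai import OpenAI

# Assumption, not documented Cursor behaviour: after enabling a model such
# as gpt-4o in settings, the verify request uses that name instead of "default".
client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # placeholder key

completion = client.chat.completions.create(
  model="gpt-4o",  # an enabled model name, not "default"
  messages=[{"role": "user", "content": "Testing. Just say hi and nothing else."}]
)
print(completion.choices[0].message.content)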

Hi,

Let me explain a bit more.

Here is the code I am using, shared below for illustration.

from openai import OpenAI

# Standard OpenAI client pointed at the NVIDIA-hosted endpoint
client = OpenAI(
  base_url="https://integrate.api.nvidia.com/v1",
  api_key="$API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC"
)

# Streamed chat completion against the hosted Llama model
completion = client.chat.completions.create(
  model="meta/llama-3.3-70b-instruct",
  messages=[{"role": "user", "content": ""}],
  temperature=0.2,
  top_p=0.7,
  max_tokens=1024,
  stream=True
)
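
# Not in the original snippet: consuming the streamed response chunk by chunk
for chunk in completion:
  if chunk.choices[0].delta.content is not None:
    print(chunk.choices[0].delta.content, end="")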

How do I add this model into Cursor AI?

AFAIK you can do the following:

  1. Click + Add model and add meta/llama-3.3-70b-instruct.
  2. Enable Override OpenAI Base URL (when using key), enter your base URL: https://integrate.api.nvidia.com/v1, and click Save.
  3. Paste your API key and click Verify →.

Note: As of Sat, Mar 8, adding new models is not possible.
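
Before clicking Verify →, it may also be worth sanity-checking the key, base URL, and model name outside Cursor. Here is a minimal non-streaming sketch (the key value is a placeholder), reusing the same test message as the generated curl command:

from openai import OpenAI

# Sanity check of the custom base URL + key + model name, mirroring
# the test message Cursor sends on verification.
client = OpenAI(
  base_url="https://integrate.api.nvidia.com/v1",
  api_key="YOUR_NVIDIA_API_KEY"  # placeholder
)

completion = client.chat.completions.create(
  model="meta/llama-3.3-70b-instruct",
  messages=[
    {"role": "system", "content": "You are a test assistant."},
    {"role": "user", "content": "Testing. Just say hi and nothing else."}
  ]
)
print(completion.choices[0].message.content)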

Hi,

I tried doing the above, but now when the sample curl command is issued it uses gpt-4o instead of the meta/llama model.

It looks like this issue is related to the default model error that you shared above.