Support for OpenRouter.ai

Is it possible to add an OpenRouter.ai key by changing a config file somewhere?

From: OpenRouter

Looks like only one change needs to be made:
openai.api_base = "https://openrouter.ai/api/v1"
Possible?

If not, I think this would be a great feature to add. I was trying to set up an OpenAI API key and got the dreaded “Card declined” mystery error… after looking around, OpenRouter looks like one of the better alternatives.

In the Cursor settings, I see ‘OpenRouter’ mentioned, but there is no field to set an OpenRouter API key.
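For reference, here's roughly what that one-line change looks like with the current openai Python SDK (on the pre-1.0 SDK it's the quoted openai.api_base assignment instead). The key and model id below are placeholders, not values from this thread; OpenRouter uses provider/model ids:

```python
from openai import OpenAI

# Point the standard OpenAI client at OpenRouter instead of api.openai.com.
# The API key and model id are placeholders.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # OpenRouter-style provider/model id
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```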

Hi @0xSparked

You can point these models at OpenRouter by overriding the OpenAI base URL; just make sure there's no slash at the end. Then click the “Add model” button to add the model you need, as shown in the screenshot.
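If you want to sanity-check the override URL outside of Cursor first, here's a minimal sketch, assuming the endpoint is OpenAI-compatible so that GET {base}/models returns a model list (the key is a placeholder):

```python
import requests

BASE = "https://openrouter.ai/api/v1"  # note: no trailing slash
API_KEY = "OPENROUTER_API_KEY"  # placeholder

# An OpenAI-compatible endpoint should answer GET {base}/models
# with a JSON body containing a "data" list of models.
resp = requests.get(f"{BASE}/models",
                    headers={"Authorization": f"Bearer {API_KEY}"},
                    timeout=30)
resp.raise_for_status()
for model in resp.json()["data"][:5]:
    print(model["id"])
```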

One more thing: it seems there’s an issue with the Claude 3.5 Sonnet model. It wasn’t working via OpenRouter, but it might be fixed now—I haven’t checked yet.


Can we assume that “Override OpenAI URL” works for all the models in the picker? What about Anthropic and others?

Not for all of them. Providers with OpenAI-compatible APIs, like OpenRouter, are on the compatibility list. Others, like DeepSeek, Groq, or Mistral, may not work in certain situations. For Anthropic you don't need to override the URL; it has its own settings in Cursor.

But how do you actually configure what goes through OpenRouter? And how do you configure the Anthropic URL? I can't see that option.

All steps are shown in the screenshot.


@deanrie Have you been able to get Cursor to work with Groq at all?

When I try to use their OpenAI-compatible base URL, Cursor always errors out. (The same URL and key work in a direct curl call.)

I've tried the base URL as well as the ‘chat/completions’ URL they list in their docs (both without a trailing /), and both hit the same error. (Oddly, the first one mentions an error about trying to call ‘claude-3.5-sonnet’ as the model.)

Error with https://api.groq.com/openai/v1 set as the base URL: (error screenshot)

Error with https://api.groq.com/openai/v1/chat/completions set as the base URL: (error screenshot)

Has anyone been able to get Groq connected successfully?
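For anyone comparing, here's a rough Python equivalent of the direct curl call that works; the key and model id are placeholders, so check Groq's docs for current model names:

```python
import requests

# POST straight to Groq's OpenAI-compatible chat completions endpoint,
# mirroring the curl call that succeeds outside Cursor.
url = "https://api.groq.com/openai/v1/chat/completions"
headers = {
    "Authorization": "Bearer GROQ_API_KEY",  # placeholder
    "Content-Type": "application/json",
}
payload = {
    "model": "llama-3.1-8b-instant",  # example id; check Groq's docs
    "messages": [{"role": "user", "content": "ping"}],
}

resp = requests.post(url, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```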

And now some extra weirdness…

I was able to successfully connect OpenRouter before trying this, and now I'm getting the same errors when trying to connect it.

I can't tell yet what action or pattern is causing the errors, since this worked earlier.

Hi @clayton

Unfortunately, not yet.


Try disabling all models that aren’t related to OpenRouter.


Ok, I think this is starting to make more sense.

It looks like even if the ‘verify’ step errors out, the connection can still work, for both OpenRouter (when it fails) and Groq (using just https://api.groq.com/openai/v1 as the base URL).
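A minimal sketch of testing that yourself, outside Cursor's verify step (the keys and model ids are placeholders):

```python
from openai import OpenAI

def probe(base_url: str, api_key: str, model: str) -> bool:
    """Attempt a tiny completion against an OpenAI-compatible base URL.

    Returns True if the endpoint answers, regardless of what Cursor's
    own 'verify' step reports.
    """
    client = OpenAI(base_url=base_url, api_key=api_key)
    try:
        r = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "ping"}],
            max_tokens=8,
        )
        return r.choices[0].message.content is not None
    except Exception as exc:
        print(f"{base_url}: {exc}")
        return False

print(probe("https://openrouter.ai/api/v1", "OPENROUTER_API_KEY",
            "anthropic/claude-3.5-sonnet"))
print(probe("https://api.groq.com/openai/v1", "GROQ_API_KEY",
            "llama-3.1-8b-instant"))
```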

So I guess the errors can be ignored in the short term?

At the moment, if anything is using OpenAI's connection, I'm turning off all models that aren't part of the custom connection; it's hard to tell whether that's always required. It's a bit inconsistent and fussy.

@deanrie if/when there's a chance to update the model settings so that models can be grouped by provider, it'd be a huge win for cases like this.

An interesting side note: it looks like Llama 3.2 always includes chain-of-thought and source references in its responses (via Groq or OpenRouter).


Have you tried this:


I was working on my own chat response system for Cursor that uses the OpenAI protocol, and my guess as to why some things aren't working with some providers is that Cursor demands a chunked streaming response rather than a basic one: chat.completion.chunk vs. chat.completion.
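To make the distinction concrete, here's a sketch of the server-sent-events shape a streaming ("chunked") endpoint emits, following OpenAI's published streaming format; the id and model values are placeholders. A non-streaming endpoint would instead return a single JSON body with "object": "chat.completion".

```python
import json
import time

def sse_chunks(text: str, model: str = "my-model"):
    """Yield OpenAI-style SSE lines for a streamed chat completion.

    Each event carries a 'chat.completion.chunk' object whose 'delta'
    holds an incremental piece of the message, ending with [DONE].
    """
    base = {"id": "chatcmpl-demo", "object": "chat.completion.chunk",
            "created": int(time.time()), "model": model}

    # First chunk announces the assistant role.
    first = dict(base, choices=[{"index": 0,
                                 "delta": {"role": "assistant"},
                                 "finish_reason": None}])
    yield f"data: {json.dumps(first)}\n\n"

    # Stream the content piece by piece (word by word here).
    for word in text.split():
        chunk = dict(base, choices=[{"index": 0,
                                     "delta": {"content": word + " "},
                                     "finish_reason": None}])
        yield f"data: {json.dumps(chunk)}\n\n"

    # Final chunk signals completion, then the SSE terminator.
    last = dict(base, choices=[{"index": 0, "delta": {},
                                "finish_reason": "stop"}])
    yield f"data: {json.dumps(last)}\n\n"
    yield "data: [DONE]\n\n"

if __name__ == "__main__":
    for line in sse_chunks("Hello from a chunked response"):
        print(line, end="")
```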