If not, I think this would be a great feature to add. I was trying to set up an OpenAI API key and got the dreaded "Card declined" mystery error… after looking around, this looks like one of the better alternatives.
You can add these settings under Cursor's OpenAI section by overriding the base URL; just make sure there's no trailing slash. Then click the "Add model" button to add the model you need, as shown in the screenshot.
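For reference, the override works the same way as pointing an OpenAI-style client at a different host. A minimal sketch using the OpenAI Python SDK, assuming OpenRouter's documented base URL; the API key and model id are placeholders:

```python
from openai import OpenAI

# Same idea as Cursor's base-URL override: an OpenAI-style client
# pointed at OpenRouter instead of api.openai.com.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # note: no trailing slash
    api_key="sk-or-...",                      # your OpenRouter key (placeholder)
)

resp = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # example model id on OpenRouter
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```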
One more thing: there seems to be an issue with the Claude 3.5 Sonnet model. It wasn't working via OpenRouter; it might be fixed by now, but I haven't checked yet.
Not for everyone. Providers with OpenAI-compatible APIs, like OpenRouter, are on the compatibility list, but others, like DeepSeek, Groq, or Mistral, may not work in certain situations. For Anthropic you don't need to override the URL; it has its own settings in Cursor.
@deanrie Have you been able to get Cursor to work with Groq at all?
When I try to use their OpenAI-compatible base URL, Cursor always errors out. (The same URL and key work in a direct curl call.)
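For comparison, a direct call like the one below succeeds. This is a sketch of the equivalent request in Python; the model id is just an example, and the key comes from an environment variable:

```python
import os
import requests

# Direct call to Groq's OpenAI-compatible endpoint, using the same
# URL and key that fail inside Cursor.
resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
    json={
        "model": "llama-3.1-8b-instant",  # example model id
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```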
I've tried the base URL as well as the 'chat/completions' URL they list in their docs (both without a trailing /), and both hit the same error. (Oddly, the first one mentions an error about trying to call 'claude-3.5-sonnet' as the model.)
Error with https://api.groq.com/openai/v1 set as the base URL:
It looks like even if the 'verify' step errors out, the connection can still work, both for OpenRouter (when verify fails) and for Groq (using just https://api.groq.com/openai/v1 as the base URL).
So I guess errors can be ignored in the short term?
At the moment, if anything is using the OpenAI connection, I'm turning off all models that aren't part of the custom connection; it's hard to tell whether that's always required. It's a bit inconsistent and fussy.
@deanrie if/when there's a chance to update the model settings so models can be grouped by provider, it'd be a huge win for cases like this.
Another interesting side note: it looks like Llama 3.2 likes to always include chain-of-thought and source references in its responses (via Groq or OpenRouter).
I was working on my own chat response system for Cursor that uses the OpenAI protocol, and my guess as to why some providers aren't working is that Cursor demands a chunked streaming response rather than a basic one: `chat.completion.chunk` vs `chat.completion`.
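To illustrate the difference, here's a minimal sketch of the streaming shape, assuming Cursor consumes standard OpenAI-style Server-Sent Events; the `id` and `model` values are placeholders:

```python
import json
import time

def sse_chunks(text: str, model: str = "my-model"):
    """Yield the streaming variant: SSE events whose payloads are
    `chat.completion.chunk` objects, ending with a [DONE] terminator."""
    base = {"id": "chatcmpl-demo", "created": int(time.time()), "model": model}
    for token in text.split():
        chunk = {
            **base,
            "object": "chat.completion.chunk",  # streaming, not "chat.completion"
            "choices": [{"index": 0,
                         "delta": {"content": token + " "},
                         "finish_reason": None}],
        }
        yield f"data: {json.dumps(chunk)}\n\n"
    final = {
        **base,
        "object": "chat.completion.chunk",
        "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}],
    }
    yield f"data: {json.dumps(final)}\n\n"
    yield "data: [DONE]\n\n"  # end-of-stream marker

# A basic (non-streaming) response is instead a single JSON body:
# {"object": "chat.completion", "choices": [{"message": {...}, ...}], ...}
```

A provider that only returns the single `chat.completion` body would explain the errors, even when the same endpoint works fine from curl.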