Check it out: it's a template I got working with Groq in Cursor. It lets you run any LLM or provider you want. You have to do the hard work of integrating it yourself, but there are two working examples in the codebase.
In Cursor Settings, under Models, you have the option to change the base URL for OpenAI; most other inference providers expose an OpenAI-compatible API, so their endpoints work with this override.
For instance, if you wanted to use OpenRouter.ai, you would set https://openrouter.ai/api/v1 as the base URL.
Once you’ve overridden the OpenAI base URL, click + Add model and enter the model card name you want to use from the API you’re now querying. For instance, to use DeepSeek V3 Chat (free) from OpenRouter, you would enter deepseek/deepseek-chat:free and click + Add model again to add it.
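If you want to sanity-check the base URL and model ID outside Cursor, the same OpenAI-style chat completions call can be sketched with the standard library. This is a minimal sketch, assuming OpenRouter's OpenAI-compatible endpoint at https://openrouter.ai/api/v1 and an API key in the OPENROUTER_API_KEY environment variable (the helper names here are my own, not anything from Cursor):

```python
import json
import os
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    # OpenAI-style chat completion payload; any OpenAI-compatible
    # provider accepts this shape at <base_url>/chat/completions.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def send_chat_request(base_url: str, api_key: str, payload: dict) -> dict:
    # POST the payload with a Bearer token, exactly what Cursor does
    # under the hood once the base URL is overridden.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Assumption: key stored in OPENROUTER_API_KEY; only send if it's set.
    key = os.environ.get("OPENROUTER_API_KEY")
    payload = build_chat_request("deepseek/deepseek-chat:free", "Say hello")
    if key:
        reply = send_chat_request("https://openrouter.ai/api/v1", key, payload)
        print(reply["choices"][0]["message"]["content"])
    else:
        print(json.dumps(payload, indent=2))
```

If this call works but Cursor's Verify doesn't, the problem is on the Cursor side rather than with your key or model ID.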
The UI will not refresh on its own; close Cursor Settings and reopen it to see the new model. Then, in the agent chat, switch to the model you’d like to use, and you’re good to go.
No matter what I try, it doesn’t work. When I press Verify after pasting my Groq token, it loads for a second and then shows nothing, and afterwards I can’t use Kimi K2 or any other model, even with the URL override.