Use custom API alongside Cursor subscription API

I’d like to use my own custom API, since a lot of models are fine-tuned for a specific use case, but I also want to keep using Cursor’s subscription-based models. Switching between them right now is quite troublesome because I need to go into Settings and toggle the OpenAI API key setting on and off. It would be great if there were an extra option, not the OpenAI API key but a dedicated custom API, so that when I choose a model natively supported by the Cursor subscription it doesn’t use the custom API, and otherwise it does.


The only quick fix I can offer you is installing a VS Code extension that lets you use a local LLM.

Cody allows that, though its completions are poor; Cursor AI has much better completions, so you can keep using Cursor for those.

The performance with the big models like Sonnet and Opus is also much better on Cursor than on Cody, so I’d suggest using Cody’s feature to hook up Ollama behind the scenes for your specific use cases, and Cursor AI for everything else (especially chatting with the codebase).

If you need help setting it up, send me a message, because posting links might be forbidden ^^

Nah, it’s not just about completions. I want the model to be available through chat and Cmd+K. I’m thinking about somehow intercepting the API call, checking the model, and rerouting it (changing the request address), but that’s a very hacky approach. So currently I just toggle the custom OpenAI API on and off whenever I want to switch to/from the custom API.
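For what it’s worth, here is a minimal sketch of that intercept-and-reroute idea, assuming you point Cursor’s custom OpenAI base URL at a small local proxy. The endpoint URLs, model name, and port are placeholders, and streaming isn’t handled, so treat it as a starting point rather than a finished tool:

```python
# Hypothetical routing proxy: run it locally (e.g. `uvicorn proxy:app --port 4000`)
# and set Cursor's custom OpenAI base URL to http://localhost:4000/v1.
# Requests for models in CUSTOM_MODELS go to your own backend; everything else
# is forwarded to api.openai.com. Streaming responses are not handled here.
import os

import httpx
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

CUSTOM_MODELS = {"my-finetuned-model"}  # models served by your own API (example name)
CUSTOM_BASE = os.environ.get("CUSTOM_API_BASE", "http://localhost:8080/v1")
OPENAI_BASE = "https://api.openai.com/v1"


@app.post("/v1/chat/completions")
async def chat_completions(request: Request) -> JSONResponse:
    body = await request.json()
    base = CUSTOM_BASE if body.get("model") in CUSTOM_MODELS else OPENAI_BASE
    headers = {"Authorization": request.headers.get("authorization", "")}

    async with httpx.AsyncClient(timeout=120) as client:
        upstream = await client.post(
            f"{base}/chat/completions", json=body, headers=headers
        )
    return JSONResponse(status_code=upstream.status_code, content=upstream.json())
```

Even with something like this, the custom OpenAI API toggle still has to be on, so it doesn’t solve the toggling problem for the subscription-only features.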


I’m wondering if this can help somehow:


Almost. It’s very similar to LiteLLM. Right now I’m using LiteLLM to even customize the Cursor prompt (I tried changing the prompt from “intelligent programmer” to “experienced story writer” and changing “code” to “story”) to turn Cursor into an IDE for story writing. Alas, it still needs the “custom OpenAI API” toggle to be on, which means turning off the Cursor subscription features like Composer that require that toggle to be off.
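In case it helps anyone, the prompt-rewriting part can be done with a small helper that edits the system message in the request body before it is forwarded upstream. The substitution strings are just the ones mentioned above, and where you plug it in (a LiteLLM hook, a standalone proxy like the sketch earlier in the thread, etc.) is up to you:

```python
# Rough sketch of the prompt rewrite described above (substitutions are illustrative).
# `body` is the OpenAI-style chat completion request that Cursor sends.
def rewrite_system_prompt(body: dict) -> dict:
    """Swap Cursor's coding persona for a story-writing one in the system message."""
    replacements = {
        "intelligent programmer": "experienced story writer",
        "code": "story",
    }
    for message in body.get("messages", []):
        if message.get("role") == "system" and isinstance(message.get("content"), str):
            for old, new in replacements.items():
                message["content"] = message["content"].replace(old, new)
    return body
```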


You can use this shortcut.
(screenshot of the shortcut)

But I agree that the model selection could be improved.


With the Cody way, it’s Alt+L for chat and Alt+K for what you’d get with Ctrl+K, but it’s not as intuitive.

That way you don’t even have to toggle the API, but again the quality suffers, so I’m not using it unless I want to test the local model, which is powered by Ollama.

The other way I can think of right now is using a custom plugin to hook up to local inference.
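If you go that route, one detail worth knowing is that Ollama also exposes an OpenAI-compatible endpoint, so any OpenAI-style client can talk to a local model directly. A quick sanity check might look like this (the model name is just whatever you have pulled locally):

```python
# Talk to a local Ollama model over its OpenAI-compatible API (default port 11434).
# The api_key is required by the client library but ignored by Ollama.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # replace with a model you've pulled, e.g. `ollama pull llama3`
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.choices[0].message.content)
```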