Title says it all. I’m just wondering what’s the point of buying more fast requests when you can use your own OpenAI API key? There is no way to toggle between using requests and using my own API key, so I can’t even test this.
In general, using fast requests with Cursor is cheaper than using your own API key.
Also, we can’t support features that require custom models when you’re on your own API key; Copilot++ is one example of such a feature.
w00t, so Copilot++ functionality would not work if I provide a localhost:1234 API URL pointing to my local fine-tuned model running in vLLM/LM Studio? I was hoping that would be possible. The generic models are not suitable for all use cases — what if someone works with a DSL that isn’t in the training data of those gpt-4/gpt-3.5 models?
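For what it’s worth, a plain OpenAI-compatible endpoint like the one LM Studio or vLLM exposes does speak the standard Chat Completions wire format, so generic chat/completion requests can be pointed at it. This is only a sketch of that request shape, not Cursor’s actual configuration — the base URL is LM Studio’s default, and the model name `my-dsl-finetune` is a placeholder:

```python
import json

# Sketch: the JSON body you'd POST to a local OpenAI-compatible server
# (LM Studio serves http://localhost:1234/v1 by default; vLLM is similar).
# "my-dsl-finetune" is a hypothetical model name for a local fine-tune.

BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str = "my-dsl-finetune") -> dict:
    """Build a Chat Completions body to POST to BASE_URL + "/chat/completions"."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

body = build_chat_request("Complete this DSL snippet:")
print(json.dumps(body, indent=2))
```

The catch the reply above describes still applies: features built on Cursor’s own custom models (like Copilot++) aren’t just standard chat completions, so swapping in a local endpoint wouldn’t make them work.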