Feb 2025 - I manage the API product for Venice. Integration with Cursor has been one of our most heavily requested features. Users want to run uncensored, private AI models in Cursor, and they want to use the Venice API to do it.
For this integration to work, we need to add two things:
- A Venice API Key section within the Cursor Settings > Models section.
Headline: Venice API Key
Text: Enter your Venice AI API key to access Llama or Qwen Coder models.
Configuration:
Base URL: https://api.venice.ai/api/v1/chat/completions
Sample Request:
curl --request POST \
  --url https://api.venice.ai/api/v1/chat/completions \
  --header 'Authorization: Bearer <Your_API_Key_Here>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "dolphin-2.9.2-qwen2-72b",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
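Before wiring a key into Cursor, it is worth confirming it works from the command line. A minimal sketch, assuming VENICE_API_KEY is exported in your shell, jq is installed, and the response follows the OpenAI-style chat completions schema; it swaps in one of the models from the list below:

# Verify the key and a specific model against the same endpoint.
curl --silent --request POST \
  --url https://api.venice.ai/api/v1/chat/completions \
  --header "Authorization: Bearer $VENICE_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "llama-3.3-70b",
    "messages": [{"role": "user", "content": "Reply with the single word: ok"}]
  }' | jq '.choices[0].message.content'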
- Add the following models to the model list for selection. These use the exact text format of the "model" field in the API call above (see the smoke-test sketch after the list).
A) qwen32b
B) llama-3.3-70b
C) llama-3.1-405b
D) deepseek-r1-llama-70b
E) deepseek-r1-671b
F) dolphin-2.9.2-qwen2-72b
G) llama-3.2-3b
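A quick smoke-test sketch that loops over the proposed model IDs against the same endpoint, again assuming VENICE_API_KEY is exported; it only reports the HTTP status code per model, so it makes no assumptions about the response body:

#!/usr/bin/env bash
# Smoke-test each proposed model ID against the chat completions endpoint.
set -euo pipefail

models=(
  qwen32b
  llama-3.3-70b
  llama-3.1-405b
  deepseek-r1-llama-70b
  deepseek-r1-671b
  dolphin-2.9.2-qwen2-72b
  llama-3.2-3b
)

for model in "${models[@]}"; do
  status=$(curl --silent --output /dev/null --write-out '%{http_code}' \
    --request POST \
    --url https://api.venice.ai/api/v1/chat/completions \
    --header "Authorization: Bearer $VENICE_API_KEY" \
    --header 'Content-Type: application/json' \
    --data "{\"model\": \"$model\", \"messages\": [{\"role\": \"user\", \"content\": \"ping\"}]}")
  echo "$model -> HTTP $status"
done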