Venice AI API Integration

Feb 2025 - I manage the API product for Venice. Integration with Cursor has been one of our most heavily requested features. Users are yearning for the ability to have uncensored, private AI models running on Cursor - and they want to utilize Venice API to achieve that.

For this integration to be successful, we need to add two things:

  1. A Venice API Key section within Cursor Settings > Models.

Headline: Venice API Key
Text: Enter your Venice AI API key to access Llama or Qwen Coder models.
Configuration:
Base URL: https://api.venice.ai/api/v1/chat/completions

Sample Request:

curl --request POST \
  --url https://api.venice.ai/api/v1/chat/completions \
  --header 'Authorization: Bearer <Your_API_Key_Here>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "dolphin-2.9.2-qwen2-72b",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
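For anyone testing the endpoint outside of curl, here is a minimal Python sketch of the same call using only the standard library. The helper name `build_chat_request` is my own; the model ID and placeholder key mirror the curl sample.

```python
import json

def build_chat_request(api_key: str, model: str, user_content: str):
    """Build the headers and JSON body for a Venice chat-completions call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    })
    return headers, body

headers, body = build_chat_request(
    "<Your_API_Key_Here>",
    "dolphin-2.9.2-qwen2-72b",
    "What is the capital of France?",
)

# To actually send it (requires network access and a valid key):
# import http.client
# conn = http.client.HTTPSConnection("api.venice.ai")
# conn.request("POST", "/api/v1/chat/completions", body=body, headers=headers)
# print(conn.getresponse().read().decode())
```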

  2. Add the following models to the model list for selection. These use the exact same text format as the "model" field in the API call above:

A) qwen32b
B) llama-3.3-70b
C) llama-3.1-405b
D) deepseek-r1-llama-70b
E) deepseek-r1-671b
F) dolphin-2.9.2-qwen2-72b
G) llama-3.2-3b
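Since these IDs drop straight into the "model" field, the dropdown could be backed by a simple list of the verbatim strings above; a sketch (the `validate_model` guard is my own suggestion, not part of either product):

```python
# Venice model IDs, copied verbatim from the list above.
VENICE_MODELS = [
    "qwen32b",
    "llama-3.3-70b",
    "llama-3.1-405b",
    "deepseek-r1-llama-70b",
    "deepseek-r1-671b",
    "dolphin-2.9.2-qwen2-72b",
    "llama-3.2-3b",
]

def validate_model(model: str) -> str:
    """Reject a selection that is not one of the supported Venice IDs."""
    if model not in VENICE_MODELS:
        raise ValueError(f"Unsupported Venice model: {model}")
    return model
```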
