Add Support for Ollama natively

Integrate native Ollama model support in Cursor to enable local AI code completion, without relying on cloud services.

Why?

Some users may need to rely on an off-cloud model because they don't always have internet access, their connection is slow, or they work in sensitive environments. Additionally, some may have privacy concerns.

What is unique?

While Cursor’s current AI completions rely on the cloud, integrating Ollama would allow users to keep their data local. It could also reduce latency, since code completion would run on the user’s machine or on a server on their own network.

What would it contain?

  • Integration with the Ollama API: embed the Ollama API within Cursor so users can choose the models they prefer (a rough sketch of the API calls involved follows this list).

  • Configuration and requirements: allow the user to choose between a local instance and a home or remote server running Ollama.
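
As a rough illustration of what the integration could build on, here is a minimal TypeScript sketch that talks to an Ollama server over its existing HTTP API (`/api/tags` to list installed models, `/api/generate` for a completion). The `OLLAMA_HOST` variable, the `completeCode` helper, and the prompt wording are illustrative choices for this sketch, not part of Cursor or Ollama; only the endpoints and the default port 11434 come from Ollama itself. Pointing `OLLAMA_HOST` at another machine would cover the home/remote server case.

```ts
// Minimal sketch of calling a local or remote Ollama server over its HTTP API.
// OLLAMA_HOST and completeCode() are illustrative names for this sketch;
// /api/tags and /api/generate are Ollama's own endpoints.

const OLLAMA_HOST = process.env.OLLAMA_HOST ?? "http://localhost:11434";

// List the models installed on the configured Ollama server (GET /api/tags).
async function listModels(): Promise<string[]> {
  const res = await fetch(`${OLLAMA_HOST}/api/tags`);
  if (!res.ok) throw new Error(`Ollama server unreachable: ${res.status}`);
  const data = (await res.json()) as { models: { name: string }[] };
  return data.models.map((m) => m.name);
}

// Ask the chosen model to complete a code snippet (POST /api/generate).
async function completeCode(model: string, codePrefix: string): Promise<string> {
  const res = await fetch(`${OLLAMA_HOST}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: `Complete the following code:\n${codePrefix}`,
      stream: false,
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

// Example: use whichever model the user has installed and request a completion.
listModels()
  .then((models) => completeCode(models[0], "function fibonacci(n: number) {"))
  .then(console.log)
  .catch(console.error);
```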

Merge with:

It could be merged with Cursor’s existing cloud-based AI features, so the user could easily switch back to the cloud service if needed. It should be seen as an alternative for people who can’t or don’t want to use cloud-based models.
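
Purely to illustrate what “switching” could mean in practice, here is a hedged sketch of a provider abstraction. `CompletionProvider`, `CloudProvider`, `OllamaProvider`, and `pickProvider` are hypothetical names invented for this sketch and do not exist in Cursor today; only the Ollama `/api/generate` call reflects a real API.

```ts
// Hypothetical provider switch; none of these names are existing Cursor APIs.

interface CompletionProvider {
  complete(codePrefix: string): Promise<string>;
}

class CloudProvider implements CompletionProvider {
  async complete(codePrefix: string): Promise<string> {
    // Stand-in for the existing cloud-based completion path.
    throw new Error("stand-in for the current cloud service");
  }
}

class OllamaProvider implements CompletionProvider {
  constructor(private host: string, private model: string) {}

  async complete(codePrefix: string): Promise<string> {
    // Same Ollama /api/generate call as in the earlier sketch.
    const res = await fetch(`${this.host}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: this.model, prompt: codePrefix, stream: false }),
    });
    return ((await res.json()) as { response: string }).response;
  }
}

// A single setting could decide which provider serves completions.
function pickProvider(useLocal: boolean): CompletionProvider {
  return useLocal
    ? new OllamaProvider("http://localhost:11434", "codellama")
    : new CloudProvider();
}
```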
