Add an option to add a local model on the same machine or LAN with just the IP and HTTP

Feature request for product/service

AI Models

Describe the request

Add the ability to add a local model on the same machine or LAN with just the IP and HTTP.

The current workaround is to use ngrok to expose the model to the internet, which is not ideal.

Operating System (if it applies)

Windows 10/11
macOS
Linux

Hey, thanks for the feature request!

Right now, Cursor supports the “Override OpenAI Base URL” option in the model settings, but it has limitations. It requires a publicly accessible HTTPS endpoint. Direct connections to localhost or a LAN IP aren’t supported yet.
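For context, this is a minimal sketch of what direct LAN access would look like against an OpenAI-compatible local server (Ollama exposes one under `/v1` on port 11434 by default; the LAN IP and model name below are made-up placeholders). The request itself is plain HTTP, which is exactly what the base-URL override currently rejects:

```python
import json
from urllib.request import Request

# Made-up LAN address for illustration; Ollama's OpenAI-compatible API
# defaults to port 11434, LM Studio's to 1234.
BASE_URL = "http://192.168.1.50:11434/v1"

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(BASE_URL, "llama3", "Hello")
# Sending this with urllib.request.urlopen(req) works fine on a LAN,
# but Cursor's override requires a publicly reachable HTTPS endpoint,
# so a plain-HTTP LAN URL like this cannot be configured today.
```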

Known issues with the current setup:

  • All requests go through Cursor servers to build prompts

Your request makes sense for using local models like LM Studio, Ollama, etc. without needing tunneling. The team is aware of requests like this.

For now, the only option is to use ngrok or a similar tunnel, or set up a publicly accessible endpoint on your network.

This is indeed a problem. We also need the ability to connect directly to the local model without going through a relay, so that requests to the local model would be much faster.