As far as I understand it, local models must be reachable from the web in order to be usable in Cursor, meaning that everything is routed through Cursor’s servers in any case. Why is this necessary? And what are the server IPs of Cursor that send the requests, so that I can forward a port exclusively for Cursor from a public server to my local Ollama instance?
Hey, thanks for the question.
Yep, that’s correct. BYOK models need to be reachable via a public URL. This comes down to Cursor’s architecture: all requests go through our servers for final prompt construction, which means our backend is what calls your API endpoint, so it has to be reachable from our side.
As for Cursor’s server IPs: we don’t publish them for this use case, since BYOK isn’t designed for port forwarding to local models. The architecture doesn’t support direct connections to localhost or LAN addresses.
Workarounds:
- An ngrok (or similar) tunnel to expose Ollama publicly
- A public endpoint with authentication (API key)
- A reverse proxy with public DNS
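For the reverse-proxy option, a minimal nginx sketch could look like the following. The domain, certificate paths, and key value are placeholders, and it assumes the client sends the key as an OpenAI-style `Authorization: Bearer` header; 11434 is Ollama's default listening port.

```nginx
server {
    listen 443 ssl;
    server_name ollama.example.com;              # placeholder domain

    ssl_certificate     /etc/ssl/ollama.pem;     # placeholder cert paths
    ssl_certificate_key /etc/ssl/ollama.key;

    location / {
        # Reject requests that don't carry the expected bearer token
        if ($http_authorization != "Bearer CHANGE_ME") {
            return 401;
        }
        proxy_pass http://127.0.0.1:11434;       # local Ollama instance
        proxy_set_header Host $host;
    }
}
```

You'd then point the BYOK base URL at the public hostname and use the same key on the Cursor side.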
I get that this isn’t ideal for local models.
Thanks, I just set up a port forward on our VPN server for this port and it works, but I don’t understand why you can’t provide an IP list for the requesting servers (I’ve seen 2 different AWS servers so far). I just want to whitelist them for requests to this port forwarding, for better security.
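On the VPN server, an allowlist for the forwarded port is just a couple of firewall rules, e.g. with ufw. The source addresses below are placeholders from the documentation range (RFC 5737), since Cursor doesn't publish its egress IPs; you'd substitute the AWS addresses you've observed.

```shell
# Allow only the observed source IPs (placeholders) to reach the
# forwarded port, then deny everyone else.
# ufw evaluates rules in the order they were added, so the allow
# rules must come before the blanket deny.
sudo ufw allow from 203.0.113.10 to any port 11434 proto tcp
sudo ufw allow from 203.0.113.11 to any port 11434 proto tcp
sudo ufw deny 11434/tcp
```

One caveat: since the upstream IPs aren't published, they can change without notice (AWS instances get new addresses), so a static allowlist like this may silently start dropping legitimate requests.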