Has anyone successfully configured Cursor to access the OpenAI LLM using their own API key, while routing the requests through a custom-built proxy?
I think this would be very useful, especially to inspect the full prompt that Cursor sends to the LLM. I managed to get the API call working, but Cursor gets stuck in a loop afterward.
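For anyone trying the same setup, here is a minimal logging-proxy sketch in Python. It listens locally, prints the `messages` array from each chat-completions request so you can see the full prompt, and forwards the call upstream. The port, the base-URL override, and the assumption that Cursor sends standard chat-completions bodies are all guesses on my part, not documented Cursor behavior:

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumption: forward to the official OpenAI API endpoint.
UPSTREAM = "https://api.openai.com"

def extract_prompt(body: bytes) -> list:
    """Pull the messages array out of a chat-completions request body
    so the full prompt can be inspected. Returns [] for non-JSON bodies."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return []
    return payload.get("messages", [])

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Log the prompt before forwarding (truncated for readability).
        for msg in extract_prompt(body):
            print(f"[{msg.get('role')}] {str(msg.get('content'))[:200]}")
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": self.headers.get("Authorization", ""),
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            self.send_response(resp.status)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(resp.read())

def run(port: int = 8000):
    # Point Cursor's OpenAI base-URL override at http://localhost:8000/v1
    # (exact setting name may differ), then call run() to start the proxy.
    HTTPServer(("localhost", port), LoggingProxy).serve_forever()
```

Note this sketch doesn't handle streaming (SSE) responses, which Cursor may rely on, so that could be one source of the loop you're seeing.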
Cursor isn’t just a client-side wrapper. It’s a managed inference layer (composer, chat, etc.) that all requests are routed through. Think of it as a hosted agent runtime, not a local LLM client. Using your own API key would bypass this layer, and you’d lose the benefits of all its features.
I’m talking about Cursor’s built-in option to use your own API key. It still uses the Cursor agent and routes through the Cursor servers, so why would that mean I’m losing anything?
Apologies, I misunderstood! I haven’t run into this myself, but it could happen because Cursor’s agent or composer expects a specific tool-call schema that can break when using a personal API key.
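To illustrate what a tool-call schema mismatch can look like: the standard OpenAI chat-completions response carries tool calls roughly in the shape below, and an agent loop typically matches each call’s `id` against the tool result it sends back. If a proxy or model drops or renames any of these fields, the loop can stall waiting for a match. The field names are the standard OpenAI ones, but the claim that this is Cursor’s exact failure mode is a guess:

```python
import json

# Assumed shape of a chat-completions response containing a tool call.
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_abc123",
                "type": "function",
                "function": {
                    "name": "read_file",
                    "arguments": json.dumps({"path": "src/main.py"}),
                },
            }],
        },
        "finish_reason": "tool_calls",
    }],
}

def has_valid_tool_calls(resp: dict) -> bool:
    """Check the fields an agent loop relies on to continue its turn."""
    msg = resp["choices"][0]["message"]
    calls = msg.get("tool_calls") or []
    return bool(calls) and all(
        "id" in c
        and c.get("type") == "function"
        and "name" in c.get("function", {})
        for c in calls
    )

print(has_valid_tool_calls(response))  # True for the sample above
```

A quick check like this against your proxy’s logged responses would at least confirm whether the tool-call fields are surviving the round trip.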
Question: does this happen if you switch to ask mode?