After setting a custom OpenAI API and running some tests, none of the default models respond; no matter which model I choose, it shows "planning the next step" and does nothing. However, Auto mode works correctly, and the custom model works fine. I tried disabling the custom API and API key, removing the config (located at ~/Library/Application\ Support/Cursor), and reinstalling Cursor; nothing helps.
Steps to Reproduce
Add a custom API (via a LiteLLM proxy) and an API key
Add a custom model and disable all other models
Test the custom model (it works fine at this point)
Enable some of the default models and run the same test; all of them get stuck at "planning the next step"
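As a side note, the LiteLLM proxy itself can be sanity-checked outside Cursor. A minimal sketch, assuming the proxy is running locally on LiteLLM's default port 4000, with `LITELLM_API_KEY` and the model name `my-custom-model` as placeholders for your own setup:

```shell
# Hypothetical local proxy address and placeholder key; adjust to your setup.
# List the models the proxy exposes:
curl -s http://localhost:4000/v1/models \
  -H "Authorization: Bearer $LITELLM_API_KEY"

# Send a minimal chat completion to confirm the proxy answers requests:
curl -s http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "my-custom-model", "messages": [{"role": "user", "content": "ping"}]}'
```

If both calls return normal JSON responses, the proxy side is healthy and the problem is on Cursor's side of the routing.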
Expected Behavior
The default model I choose should work properly.
Operating System
macOS
Current Cursor Version (Menu → About Cursor → Copy)
This looks like a conflict between your custom API setup and the default models. Since Auto mode and custom models work fine, there may be leftover settings affecting default model routing.
Could you please try:
Go to Cursor Settings > Models and make sure the Custom OpenAI Base URL and API key fields are completely empty (not just disabled).
Run network diagnostics: Cursor Settings > Network > Run Diagnostics and share the results.
After removing the custom API settings, start a new chat (not just clear the context).
If the issue persists, share the console output: Help > Toggle Developer Tools > Console tab (filter for errors).
Ah, I toggled Developer Tools, and it seems I have run out of my usage (Err resource_exhausted). Strangely, no warning window popped up; I remember there used to be one. Also, before I added the custom endpoint, my usage was completely normal, but when I tested the custom endpoint in Ask mode, a warning told me I had hit my limit.
I checked the dashboard and it doesn't show how many free tokens are left. Did Cursor change its subscription policy recently?
Anyway, thanks a lot for your help; it really helped.