Is there a setting somewhere to allow slow responses? When I request a GPT-4 response, it appears to try for a few seconds, and then I get a popup about increasing my GPT-4 limits with a button that takes me to the Cursor website. I don't see a way to select slow responses from the inference model drop-down, I don't get added to a queue, and no reply ever comes in.
Sorry, this was a bug in the deployment, which is now resolved.
Thanks. After the fix, I just had to send one GPT-3.5 reply and then switch back to GPT-4 to get it working.
Please send us the failing request IDs if anything still fails.