How slow is the slow premium response?

I am currently evaluating Cursor as a senior software developer who uses GitHub Copilot.

I normally just use AI for coding to check for errors, add comments, create unit tests, and write simple code blocks from pseudocode. Basically, things I am too lazy to write. So I think I may hit the limit on the Pro plan almost immediately because of all these small requests. Does anyone know what to expect in terms of speed once I do hit the limit?

For some of these tasks I don't even need a powerful model, but I suppose you cannot assign different models to different kinds of requests. I also want to reduce the number of times I need to revise my prompt to get a correct answer.

Thank you in advance

I am interested in the answer to this question too. I would also like to be able to toggle between premium and slow mode so I could reserve expensive requests for important tasks. How hard could that be to implement, given that the AI can write the code for you?

Usually it's between 5 and 15 seconds or so. At peak hours it can go up to 30-60 seconds in my experience. In long context mode, the wait time doubles with each request. I've had it go up to 5 minutes on that one, at which point I just rage-quit and did it with normal context.

If I switch to my own API key when responses are slow, can I still use the other premium features?

Composer basically stops working with 3.5 Sonnet in slow mode.

I received a popup today saying there were too many slow requests ahead of me in the queue, so my request wasn't completed.

Thinking of going back to Cody. I think $20 is more than enough for unlimited completions.


@pantaleone You seem to be advertising Cody in almost every comment. May I ask whether you are affiliated with Cody, or do you simply love it that much?