I am currently evaluating Cursor as a senior software developer who uses GitHub Copilot.
I normally just use AI for coding to check for errors, add comments, create unit tests, and write simple code blocks from pseudocode. Basically things I am too lazy to write. So I think I may hit the limit immediately on the Pro plan because of these small requests. Does anyone know what to expect in terms of speed once I do hit the limit?
For some of these I don’t even need a powerful model, but I suppose you cannot assign different models to different kinds of requests, and I also want to reduce how many times I need to revise my prompt to get a correct answer.
I am interested in the answer to this question too. I would also like to be able to toggle between premium and slow mode so I can reserve expensive requests for important tasks. How hard would that be to implement, given that the AI can write the code for you?
Usually it’s between 5 and 15 seconds or so. At peak hours it can go up to 30–60 seconds in my experience. In long context mode the wait time doubles with each request. I’ve had it climb to 5 minutes on that one, and I just rage quit and did it with normal context.