AI models tend to respond poorly once 50% of the Base pricing plan is consumed

I have been using Cursor’s paid plan for 5 months now, and I wanted to share my experience with the AI response quality on the Basic plan. I’ve noticed a consistent pattern where the AI’s performance degrades significantly once approximately 50% of the monthly allocation is consumed.

Key observations:

  • Response accuracy and relevance decrease noticeably

  • Code suggestions become less context-aware

  • Completion times seem to increase

  • More frequent need to retry queries to get useful responses

This creates several challenges:

  • Reduced productivity in the latter half of the billing cycle

  • Inconsistent development experience throughout the month

  • Need to be overly conservative with AI usage early in the cycle

Suggestion:

Consider implementing a more gradual degradation of service or providing clearer usage metrics so users can better manage their monthly allocation. This would help maintain a more consistent user experience throughout the billing cycle.


Hey Sami,

I can assure you that there is no technical reason why AI responses would degrade at 50% usage of your plan allocation. The only change occurs when you run out of “fast” premium requests (500/month on the Pro plan): you’ll move to the “slow” request pool, which may have longer wait times.
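
To make that concrete, here is a minimal sketch of the routing behavior described above. The function name, quota constant, and return values are illustrative assumptions for this example, not Cursor’s actual code:

```python
# Hypothetical sketch of the fast/slow request-pool behavior described above.
# FAST_QUOTA and route_request are illustrative names, not Cursor internals.

FAST_QUOTA = 500  # fast premium requests per month on the Pro plan

def route_request(fast_requests_used: int) -> str:
    """Decide which pool serves the next request, based on usage so far."""
    if fast_requests_used < FAST_QUOTA:
        # Still within the monthly allocation: served from the fast pool.
        return "fast"
    # Allocation exhausted: same models, but the request is queued in the
    # slow pool, so it may take longer to start responding.
    return "slow"

# At 50% usage (250 of 500) requests still go to the fast pool, which is
# why response quality should not change at that point.
print(route_request(250))  # -> "fast"
print(route_request(500))  # -> "slow"
```

The key point is that the fallback only affects queueing time, not which model answers or how much context it gets.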

If you’re consistently seeing degraded performance that early in your cycle, that’s definitely not expected behavior and worth investigating. Could you share some specific examples of the degraded responses you’re seeing? That would help us understand what’s going on.

For reference, here’s how the request system works: Cursor – Usage
