Even 500 requests for $20 is low, and with the current pricing system we can’t even see what’s what. Bye bye Cursor, welcome Claude CLI.
Cursor acts as a middleman, and I don’t get why you’re stating the obvious that Claude will have better pricing for its own models, because then you’re stuck with Anthropic. I personally can’t count the number of times I’ve been stuck in a dead end chatting with a model, only to switch to another model from a different provider and find the solution within minutes. With new industry-wide releases almost weekly and constantly changing leaderboards, that flexibility is the beauty of Cursor. Cursor is also fantastic about updating its model list within hours of a new release (for example, Grok 4), so you can try all the cutting-edge models with zero data retention and find the one that works best for the problem at hand. Of course, this benefit comes at a premium. If Claude CLI works for you, great, but I’ll stick with the Cursor Business plan for this flexibility.
Can’t agree more. It’s incredibly powerful to say mid-chat, "Hey Gemini, can you review this and see if you agree with the conclusions?" or "Hey Opus, give this another check please." I’ve had four LLMs arguing amongst themselves. It’s super powerful. It would be even cooler if they could understand they were different LLMs instead of thinking the previous context was theirs.
"Hey Cursor, get Sonnet, Gemini, and GPT-4 to agree on the problem and tell me what it is."
How did you do that? Make different models argue in one place? Is this a hidden feature?
Do a prompt, switch models, ask the new one to review the previous one’s work - repeat.
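If anyone wants to script the same loop outside the editor, here is a rough sketch using the public OpenAI and Anthropic Python SDKs. This is not what Cursor does internally; the model IDs and the prompt are just placeholders, so swap in whatever is current for you.

```python
# Cross-review loop: draft with one model, have a different provider review it.
# Requires OPENAI_API_KEY and ANTHROPIC_API_KEY in the environment.
from openai import OpenAI
from anthropic import Anthropic

question = "Why does my async handler drop every second request?"  # example prompt

# 1) First pass: get a draft answer from one provider.
openai_client = OpenAI()
draft = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model id
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# 2) Second pass: hand the draft to another provider for review.
anthropic_client = Anthropic()
review = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model id
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Question:\n{question}\n\nAnother model answered:\n{draft}\n\n"
                   "Do you agree? Point out anything you would change.",
    }],
).content[0].text

print(review)  # swap roles and repeat until the answers converge
```

Inside Cursor the model switcher replaces these API calls, but the shape of the loop is the same.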
Claude Code at $20: how many requests do you get in 5 hours? Not enough for me. Cursor lets me make more than 40 requests in 5 hours, which is much better. To deal with the rate limit Anthropic applies to Cursor, I use a small AI-interaction trick that lets a single request carry hundreds of chat messages while the tokens consumed stay about the same as a simple request (around 17,000 tokens, roughly $0.07).