I started using Cursor recently and like the contextual integration. Way better than loads of copy paste.
However, to consider this for full-time use, I'd need way more than 500 requests a month. I spend hours in code every day. Why aren't there tiers for higher numbers of requests? Practically speaking, if I use 50 requests a day, it's only going to last 10 days. This isn't geared towards real-world use, but more towards hobbyist use.
I’ve been using Cursor for a week and thought it was great, but as a power user I blow through 500 completions in two days. I’m considering cancelling my membership, as slow completions are totally unusable in Composer. I’m happy going back to Cody for $9. They don’t mention fast/slow requests for that price; even if they do have a slow mode, it’s invisible to the user. I just don’t like how Cursor makes you see the queue, nor do I like getting a pop-up telling me my request wasn’t completed because there are a lot of slow requests already in the queue.
Cursor AI, I think you’re being too greedy, and I just don’t understand why you have to make it painful for users by showing the queue position the request is in.
Basically, I think $20 for unlimited completions is enough. Slow me down on the back end, but don’t rub my face in it, and make slow mode work properly with Composer.
Would also like to see more from the Pro tier. $20/mo for 400 more requests is pretty rough. I’m not sure if there’s a way to choose to use slow requests before you run out, which could be helpful (for me).
That said, Composer is what ended up pushing me to the limit. My prompt engineering with Claude could be much better, though. Composer makes it a bit difficult to pick and choose from the diff, as far as I can tell. So if it deletes something or makes enough bad changes that I can’t just pick out what I want, it’s easier to start over.
I’m sure there’s some workflow fix on my end, but an improved interface here could pay off quite a bit. I’m already at 200 requests for September, and not much to show for it. I’ve had some really low-quality responses lately.
I found Cody to be pretty bad for whatever that’s worth.
gpt-4o-mini is considered a non-premium model, so use that when possible to preserve your ‘fast-premium’ requests
Remember that Pro subscribers currently receive an additional 10 ‘long-chat’ requests per day for 6 different models (i.e. a total of 60 per day), as shown when logged in on the https://www.cursor.com/settings page - so make sure you are utilising those requests as well
Just be aware of the dynamics involved in incrementing and decrementing that count, as discussed in this topic:
@pantaleone, sorry to hear you haven’t had a positive experience with the way requests are managed. I also blew through my first 500 requests very quickly when I first started, but now, following the tips above, I seem to make them last the full month.
In regard to the wait indicator, I personally liked that feature when it came out. Before, there was no indicator, and the wait felt a lot longer because there was no visual cue of what was happening.
@short - we must have posted at the same time, hopefully the tips above can assist in your use cases as well.
Very interesting. Thanks for this, super informative. I always thought these long-context bits were old and the pricing page just wasn’t updated, considering Opus is outdated at this point. I guess I’m a dummy, as I thought 3.5 Sonnet was the 200k model, given that option wasn’t available in the models page of the settings. Any more details on this? I see 4 options:
I’ve been using Cursor for quite a while and never used these.
Additional info: after adding my Anthropic API key, I’m already at 120,000 tokens of usage within a few Composer requests, which means I’ll be hitting my 1,000,000-token rate limit very soon.
Look, let’s be frank: we want to use Sonnet 3.5. We don’t want to deal with these issues. When coding really large, complex projects, which I was doing last month, your head is already full of rules and issues you need to add into the code logic… I really don’t want to be switching models and messing around. I want to pay a fair fee and get unlimited chat completions for the models I need. www.notdiamond.ai gives me 100,000 chat completions daily, and that’s free!
There are some features in Cursor I really love, like how bash blocks show a Run button, or how I can add local rules in the Cursor settings, and its vision support for images. These things make it more than a hobbyist tool, but when I have to deal with the drama of swapping models and being throttled while I’m just trying to get work done, I get annoyed.
Does Cursor offer unlimited fast completions? @litecode Cody has its issues, but I just think at $9 it’s a no-brainer.
Thanks @litecode. Can you tell me the price change when incrementing the number of requests to 1,000 or 1,500? I’m still on the trial period, and this will also help me decide if I should get my full team on it.
Good questions, I think @rishabhy will be the best person to answer them.
My understanding is that for each additional increment of 500 requests, the cost will increment by $20 (the cost of the Pro subscription). So for 1,000 requests the cost would be $40 per month (2 x $20) and for 1,500 requests the cost would be $60 per month (3 x $20).
But please wait until @rishabhy or someone else from the Cursor team confirms that understanding.
I tried clicking on the buttons on the cursor.com/settings page and this is what I see:
If I click on the Upgrade to Business link, I see this:
In the meantime, for reference, here is a related post about business plans and seats:
I purchased Poe while buying Cursor and wrote a script to forward chat requests from Cursor to Poe for conversational chatting. Poe has a monthly limit of 1 million points, and each 3.5 Sonnet exchange consumes fewer than 300 points, so for 20 dollars I can ask questions over 3,000 times using 3.5 Sonnet. At the same time, I can also use Poe for other model applications, such as generating images.
Not compatible. I wrote a FastAPI program in Python that converts the requests to the OpenAI interface. I encountered quite a few issues, but in the end it does work, although there are still some minor problems.
@zz948003, are you using Composer plus your own model? My understanding is that you created a proxy, available online, that receives requests, converts them to the OpenAI interface to call Poe, and is then configured in Cursor? Where did you configure this LLM in Cursor?
Just some clarifications on @litecode’s usage suggestions.
These 10 ‘long-chat’ requests per model / per day aren’t fast requests. The first time I ever used gpt-4o-128k it was a slow request. The topic is about quickly running out of fast requests. Pro subscribers already get unlimited premium requests, so this doesn’t really address the topic at hand.
For what it’s worth, after a day of use I’ve found that these long-context models perform worse than Claude 3.5 Sonnet. Even the Sonnet 200k was worse. Perhaps the context given in their implementation is different, or it’s just user error.
I’ve been getting queued today at nearly 150 requests. This isn’t good enough, so I’ve cancelled my subscription. When Cursor does unlimited fast requests for under $40, I’ll come back. Sorry.
I don’t think they’ll lower it that much, sadly; some of us, myself included, are using more than that number of fast requests. But I guess it all depends on the user.
Essentially, it’s using a custom API key, but because the name “3.5 sonnet” is used in Cursor, it will automatically use Anthropic, while my API is actually in OpenAI format, so I made a name conversion.
I customized a model name and convert all instances of that model name to “3.5 sonnet.” I previously tested it with Composer, but I have been quite busy lately and have not conducted further tests.
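For anyone trying to picture the setup described above, here is a minimal, stdlib-only sketch of an OpenAI-compatible proxy that rewrites the model name before forwarding a request. This is an illustration of the general idea, not the actual script: the upstream URL, the alias table, and the model names are placeholder assumptions, and a real proxy would also need to forward auth headers, handle streaming responses, and relay upstream error codes.

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical upstream that accepts OpenAI-format chat requests.
UPSTREAM_URL = "https://example.invalid/v1/chat/completions"

# Map the name the editor sends to the name the upstream expects
# (both names here are made-up placeholders).
MODEL_ALIASES = {"my-custom-model": "claude-3-5-sonnet"}

def remap_model(payload: dict) -> dict:
    """Return a copy of the request body with the model name translated."""
    out = dict(payload)
    if out.get("model") in MODEL_ALIASES:
        out["model"] = MODEL_ALIASES[out["model"]]
    return out

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the incoming OpenAI-format request body.
        length = int(self.headers.get("Content-Length", 0))
        body = remap_model(json.loads(self.rfile.read(length)))
        # Forward the rewritten request upstream and relay the reply.
        req = urllib.request.Request(
            UPSTREAM_URL,
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

# To run locally, point the editor's OpenAI base URL at this address:
# HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

In Cursor this would correspond to setting a custom OpenAI base URL in the model settings so that requests hit the local proxy instead of the provider directly.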