Having looked into the usage limits, the best I can surmise is that we get fast-lane access to Claude and then are put into a slow mode after a certain amount of usage.
Right now I have a tiny Electron project that I want it to take a shot at.
I made a specification and am letting Claude knock it out for me.
It’s not going to be fast to begin with; I don’t need instant results and would be fine with yielding priority access in favor of having faster access when I need it to iterate with me immediately.
Frankly, I don’t understand the thresholds imposed per payment tier, so some of my conclusions might just be wrong.
If I am, feel free to tell me.
I’ve been trialing Cursor over the weekend, coming from VS Code Copilot, and I’m impressed with how Claude has been performing in this editor.
If you have good programming knowledge, Cursor will be very good; if you don’t, you will spend thousands of dollars and still not be sure of completing your product.
You also see that Claude is very powerful in Cursor, right? That’s because Cursor provides better context than GitHub Copilot → it costs more tokens → Claude is smarter. But because Claude is so smart, it will be quite expensive.
Hi @SamInTheShell, thank you for trying Cursor out, and welcome to the Cursor Forum.
Feel free to share your insights, pain points, and issues here.
For insights and minor pain points, post them here so we have a good reference.
Ask us any questions on usage and how you could optimize it.
For any bugs, please create a bug report with more info: Create Bug Report
Currently we do not have a slower output mode, but I can see how it would be beneficial, though an implementation would need to depend on the model provider as well.
By the way, for Claude models, use their name, number, and capability, like Sonnet 4 or Sonnet 4 Thinking, as this helps distinguish different versions.
@DemonVN I know what I’m doing. None of this is a support request, nor are there any issues with Cursor.
I’m specifically requesting a slow mode as an option to yield priority on Claude requests for tasks that are low-risk and scoped for the AI to iterate on in the background while I do something else.
As for Claude’s intelligence, it is just a better inference engine. Cursor’s control loop might add some improvement over VS Code Copilot’s implementation, but at the end of the day, much of the capability is still just the inference engine being better than GPT models.
@condor This is just a feature request for something I think would be useful for some of the bulk tasking that I do. Depending on the task at hand, I switch between pair-coding and bulk tasking after making a design/implementation-style doc. The latter is what I decided to try out tonight, and it led to this idea. When I throw such a task at the AI, I don’t really care to interact with it until it’s done iterating and I can review and make changes.
So far, Cursor has shown enough promise that I’ve already started a paid subscription. I still have to do some testing to see how GPT behaves in this editor. There are a few Cursor-specific features I saw that I plan to validate as well, features that just don’t exist in VS Code Copilot.
Yes, there are features similar to what you’re requesting. Users can launch Cursor agents in several ways; depending on your preferences, they can run in parallel:
IDE: The Cursor app with Chat as the main UI. You can also start Background Agents that run on our infrastructure and code independently of your device; later you can connect them back to your IDE and pull in changes, or merge a PR in GitHub directly.
You can also open a second tab in Chat and let the Agent there work on a different task, which does not interrupt your own. Alternatively, duplicating the project and opening it in a new window would work, as it’s even less likely to interfere with your tasks on the main version.
Web: cursor.com/web can launch Background Agents from a simple web interface; you can also give the agent follow-up requests there. (The process after completion is the same for all Background Agents.)
CLI: cursor-agent in the terminal again performs independently of your IDE on any project you have checked out.
Slack and similar integrations: Simpler tasks, like checking the cause of an issue or making text changes to a project, can be done effectively from Slack. It likewise launches a Background Agent that performs the task.
For token usage and optimization, I recommend checking the following: