I left Cursor back in April and have spent the past few months experimenting with other AI coding tools. Now I’m seriously considering coming back, but I want to make sure I’m setting myself up right this time.
My Journey with Other Tools:
Augment – Pretty efficient and smart at reading/fetching content from websites, and definitely cost-effective. However, it’s noticeably slow at times, which breaks my flow.
Cline – I loved how simple and user-friendly it is, especially the rule settings and model selection interface. But the costs spiraled out of control fast. I spent over $600 in just 5 days when refactoring a project with Opus 4.1. That was a wake-up call.
Warp AI – Surprisingly convenient! It can easily fetch internet content using curl and read local PDFs (under 10MB) without much hassle. Response times are fast too. But it lacks VS Code's editing and file-manipulation capabilities, which makes it less practical for serious development work.
What I’m Looking For:
Bottom line: I want to minimize mental overhead while coding. I don’t want to think about which model to use, worry about context limits, or constantly manage my budget. I just want to focus on building.
The Setup I’ve Been Advised:
Someone recommended I go all-in with:
Ultra plan ($200/month) for the large usage pool and priority queue
Turn on extra premium model usage (set limit to ~$100 extra)
Subscribe to Wispr for voice input
Use Agent mode with Auto + Max enabled by default
This would total around $215-315/month depending on whether I hit the extra usage cap.
My Questions:
Is this setup overkill, or is it actually the best way to achieve “zero mental overhead”?
For those using Ultra + Max + Auto daily, do you actually use the full $400 API value, or is it more than enough?
Is Wispr genuinely worth it for natural language context management (like “walk through the whole project and research X online”), or can I get by with system dictation?
Any alternative configurations that give similar results for less?
I’m willing to invest if it means I can just code without thinking about tooling, but I also don’t want to overpay for features I won’t use.
Would love to hear from anyone who’s gone through a similar decision process. Thanks in advance!
If you pick two models, use one for planning (or the new planning feature recently released by the Cursor team) and one for coding.
I'm using Sonnet 4 without MAX, and it's good to know that Sonnet 4 is limited to 250 lines of code; if you have files with 750 lines, you are free to use MAX (but most of the time you shouldn't go higher than 300 lines, from my own personal point of view).
1. For the line count, I expect to exceed 300 lines regularly, since I sometimes need the AI to output code for a brand-new project or feature based on a plan outline. That is why I was advised to use Cursor with Auto + MAX mode.
2. I've noticed the Plan mode that was just released, but I'm not sure how it differs from Cline's Plan/Act modes. What I really want is this: I just speak (via voice input), and the AI automatically decides whether it should respond in plan or act mode. Currently, when using Cline, I always stick to Act mode; if I just need it to plan without modifying anything, I add "just think, tell, and don't act".
3. I still struggle with multi-project collaboration. For example, the current project may need to refer to another local project or a GitHub repo. So far the best UX I've experienced for this is Augment, where I just paste the path of the extra local project or the GitHub repo URL. I know the @Docs feature should in theory solve this, but it seems to be buggy right now.
Can you provide more info on the bug you mention? If there is an ID + version you could share in the Bug Reports section, that would be really helpful for the team!
So when do you plan to try out your setup? I guess starting with one month (instead of a yearly payment) would make real sense for you?
I’m planning to give that expensive plan a shot once all my current subscriptions run out.
As for working across multiple projects, I’m honestly still figuring out the best approach. Like, should I just drop in local file paths or GitHub repo URLs? Not really sure if Cursor handles that well right now or if there’s a better way to do it.
Oh, and about the @Docs feature being buggy - heads up, I got this from PerplexityAI, so take it with a grain of salt. Here's what they summarized:
Bug #1: AI Unable to Read Indexed Documentation
Reported: September 17, 2025
Forum Thread: #133986
Status: Unresolved
Description: After successfully adding and indexing external documentation, the AI completely ignores the content when answering questions. AI may respond with “docs are empty” or behave as if no documentation was attached.
Affected: All platforms, Cursor 1.5.x - 1.7.x
Bug #2: Documentation Feature “Utterly Broken”
Reported: September 25, 2025
Forum Thread: #134992
Status: Unresolved
Description: Prolific docs user reports that the @Docs feature has completely stopped working across all models. AI falls back to web search instead of using pre-indexed documentation.
Impact: Critical - Core feature non-functional
Bug #3: Agent Cannot Access Indexed Docs
Reported: October 5, 2025
Forum Thread: #136389
Status: Unresolved
Description: Cursor Agent (Composer mode) is unable to read or reference documentation that has been indexed and added to the project, even though regular Chat mode may partially work.
Bug #4: @Web Fails to Reference Indexed Docs
Description: Related issue where @Web functionality attempts to reference documentation but returns no results. Quality inferior to ChatGPT or Perplexity for technical queries.
Note: This compounds the @Docs problems
Bug #5: Deleted Shared Docs Still Appear
Reported: October 8, 2025
Forum Thread: #136777
Status: Confirmed, minor impact
Description: When team members delete shared documentation, it continues to appear in the @Docs autocomplete list, creating confusion and potential for referencing outdated content.
Version: Cursor 1.7.38 (build October 6, 2025)
Platform: Windows 10/11
Workarounds Currently Used:
Manual copy-paste of documentation into chat context
Using @Web with explicit URLs instead of relying on pre-indexed docs
Avoiding @Docs feature entirely until fixes are released
If I work with larger templates, I fork those GitHub repos into my workspace – but the pain point here is that the AI is limited by context, and I have to sort the code into important and unimportant parts.
As an example, I forked a Microsoft Chat-RAG system, and letting the AI sort it out was terrible. I used it just once (sharing a screenshot) to give me a checklist, then deleted the unimportant parts; this took me 1-2 hours of work until I learned how the repo actually works.
Tbh, you could also let the LLM do this with MAX mode, but I keep an eye on costs and try to keep them low because I'm working on several customer projects at the same time.
In your case you'll have to try it out (I recommend forking the GitHub repo and sorting it out yourself), but I would love to read about your experience at a later stage.
If your other repos are very important and you have a STRONG understanding of how they have to interact with each other, then I recommend writing Cursor rules files (e.g. Frontend/Backend how-to guides), like the sketch below.
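To make that concrete, here is a minimal sketch of such a rule file, assuming Cursor's project-rules layout (`.cursor/rules/*.mdc` with a short frontmatter block); the file name, globs, and folder names are hypothetical placeholders:

```markdown
---
description: How the frontend and backend projects interact
globs: frontend/**,backend/**
alwaysApply: false
---

- The frontend calls the backend only through the client in frontend/src/api/;
  never fetch endpoints directly from UI components.
- Backend routes live in backend/routes/; shared request/response types sit in
  backend/types/ and must stay in sync with frontend/src/api/types.
```

With something like this in place, the agent picks up the interaction rules whenever it touches matching files, instead of you re-explaining the architecture in every chat.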
Thanks for the bug reports! @condor, are those bug fixes still in progress?
Try using Auto for a while and check which of your tasks don't work well with Auto. See if selecting GPT-5 or Sonnet 4.5 would be better for those. Use Thinking models or MAX mode only if your tasks require it: a larger context size, more complex reasoning, larger files, a large codebase, or complex architecture.
Usage depends a lot on project structure, programming language, framework, context management, and so on.
Not sure about voice input.
Try out Plan mode in Agent, a new feature that helps Agent plan tasks and then execute them with Build afterwards (the Build option shows after the Plan reply).
I agree with Sabri's suggestion of finding the right model for your planning and coding.
Add 1: let Agent find code; do not attach files unless that really doesn't work, as attachments eat up your context faster.
Add 2: Right, Plan mode does work well. Depending on your OS you can also trigger voice input via keystrokes, e.g. Voice Control in macOS, or third-party tools.
Add 3: use a workspace with multiple projects (see the sketch below).
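A minimal sketch of what that could look like, assuming Cursor inherits VS Code's multi-root workspace format (it is a VS Code fork); the file and folder names are hypothetical:

```json
// my-projects.code-workspace (hypothetical name; JSON with comments is
// allowed in workspace files)
{
  "folders": [
    { "path": "." },               // the main project
    { "path": "../other-project" } // the extra local project to reference
  ]
}
```

Open this workspace in Cursor and the agent can search and edit across both folders, so you don't need to paste external paths into the chat.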
Not sure @Docs is really required for that, as it references external documentation indexed in Cursor.
There is a temporary @Docs issue and the team is looking into it.
I cannot find some of the threads, as the AI gave bug names that are not based on forum post titles.
@Web search works for me. As we are not using Perplexity's or ChatGPT's search features it may not have the same performance, but results often depend on what you need and what the search actually returns.
Deleted shared docs have only a temporary impact; do you delete docs regularly?
Honestly, I still want to stick with auto mode because, like I mentioned, I’m trying to minimize mental overhead. I also agree with not explicitly attaching context to the AI - both for saving tokens and for getting smarter results from the model.
But here’s what I’m wondering about your suggestion: Is Cursor’s auto mode + max mode combo actually smart enough to handle complex workflows? For example, imagine I’m working on a project where I need the AI to first dig through a massive .log file to identify bugs, then clearly plan out how to fix the code efficiently while explaining everything to me like a senior architect would (maybe this should trigger plan mode automatically?).
To get the best overall efficiency, it seems like this shouldn’t rely on just one model - different parts of the task probably need different context inputs. Can Cursor handle this kind of orchestration intelligently and get closer to true “vibe coding”? Or do I still need to manually manage which mode does what?
(This also reminds me of Roo Code's Orchestrator mode, but honestly, the way it jumps across different chat sessions makes the whole workflow pretty hard to follow and unreadable, not to mention that it needs complicated settings for each sub-mode.)
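Back to the log example above: whichever mode ends up handling it, one cheap way to keep that step from eating context is to pre-filter the file before the agent reads it. A minimal shell sketch, where app.log and the error pattern are hypothetical placeholders:

```sh
# Keep only error-like lines from a huge log, then take the most recent 200,
# so the agent reads a small excerpt instead of the whole file.
grep -nE "ERROR|FATAL|Exception" app.log | tail -n 200 > log_excerpt.txt
```

Then point the agent at log_excerpt.txt instead of the raw log.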
Bottom line: what I’m really after is EFFICIENCY - not just quick responses, but high-quality output and fast time-to-solution. And ideally, I shouldn’t have to babysit the agent or worry about which mode to use - I just want to focus on my actual work and let the AI figure out the rest.
Oh, and one more thing: there are still tons of complaints about Cursor's pricing, like this thread pointing out that the $20 Pro plan can run out in less than a week for heavy users. Actually, one of the reasons I left Cursor back in April was that I felt uncertain about the actual costs despite wanting the best UX (I'm also not sure whether the indexing and chunking makes the AI's feedback dumber, or less robust across different models). So now I'm willing to pay more (I can budget up to $1000/month for voice-based vibe coding in Cursor, averaging 6 hours of coding and related discussion per day) to ease this concern. And sometimes I also want Cursor AI to solve a system issue or give me insights based on a PDF paper, local or online, that isn't related to a specific workspace.
Anyway, I really hope the current Cursor is efficient enough that I don’t need to worry about all these non-business details anymore - just TAKE MY MONEY AND SAVE MY TIME!
I think I would try Auto + MAX mode all the time, since I want as little mental overhead as possible. I'm frustrated by fiddling with settings here and there when using various AI tools (except for setting rules, which I know is inevitable).
Update: @condor, do the Auto and MAX modes conflict in Cursor now?
If you want high quality, go with Codex/GPT-5 high in Codex CLI. It has slipped a bit in recent days, but it's still by far the GOAT. $20 with ChatGPT Plus gets you far. Additionally, consider Claude Code Pro for $20. I wouldn't really let it code, but it can be a great companion to review and make plans for hard bugs.
More value for just $40.
Cursor is not at the same level right now, and much more expensive. It's only good for quick, dumb free models like Grok Code Fast. Maybe that web tool is useful. Plan mode is nothing special IMO; just say "write a plan to .md".
I keep it as a backup because I'm on an old plan, but I feel more and more stupid holding onto it.
I've heard of these two wonderful CLI tools, but frankly, I don't like CLI-style tools. Besides, they are sometimes slow to respond.
Also, I heard that Wispr supports file tagging within Cursor, so I want to give it a try now. That is, if I spend the maximum and Cursor still doesn't satisfy me, then I'll quit.
You welcome dissenting voices and genuinely listen to user feedback, turning criticism into meaningful improvements. I also appreciate your high-frequency creativity and the crystal-clear documentation.
Personally, I'm attracted by the dictation feature combined with Wispr, but I'm not sure about the overall quality, which I'll figure out for myself.