I still find that a one-shot mode (the current Manual mode) is by far the most effective approach for the way I use Cursor, above and beyond any agentic method. Every agentic method is far more expensive than a single targeted Manual-mode change. Because of the way agentic methods work, they cannot match the control and specificity of Manual mode, and seeing it removed is quite scary, as Cursor is really the only tool that offers it in such a well-integrated way. When the 1.3 update ships for real, I hope it doesn’t remove Manual mode; otherwise I’ll go find (or build) a replacement.
I have tried creating a replacement “Manual” custom mode, but all custom modes are agentic (by which I mean iterative and piecewise, not one-shot), and because of that it doesn’t work as well as Manual mode did.
Thinking about it, it’s clear why Cursor would push people away from the cheaper modes toward the more token-hungry ones.
@dimitri-vs thank you for the detailed feedback. There are additional improvements coming that will eventually replace Manual with a better mode. Sorry for causing the issue in the first place.
@decsters01 no retaliation at all. We saw that usage of Manual mode was low and are planning a replacement that’s better. You can use Ask mode or Custom Modes to essentially set a mode up to be like Manual.
First, sorry for my earlier comment. Using my API key more today, I was able to make calls with several models in Agent mode that weren’t possible before. From what I understand, there is no longer a Manual mode, but since Agent mode is now available for most models that support tool calls, this is wonderful. Thank you.
‘Ask’ mode is now completely different in v1.3.3: it runs tool searches over the project files and does not read the attached files as context (at least that’s how it seems to me now).
The reason I am sure ‘Manual’ mode usage is low is that ‘Agent’ mode is the default in Cursor installs and most people are vibe coding. If I go directly to OpenAI and attach my files, or make API calls directly, it reads all of my files’ content as context, and that is what I want for the projects I am working on.
Can you just put the original ‘Manual’ mode back, please? Even if it’s hidden, I need to revert. I appreciate @dimitri-vs’s solution, though it seems drastic when you could simply re-add the mode, even hidden behind a config setting.
Also: because you decided to remove ‘Manual’ mode from the LLM chat window and told users on your forum to use ‘Ask’ mode instead, I tried that. Previously I would just use o3-pro max and attach the context I needed in ‘Manual’ mode; this time, ‘Ask’ mode decided to call a lot of tools and churned through $21.60 in a single request (the same request would have cost at most $2 in ‘Manual’ mode). This is unacceptable on two counts: 1. I upgraded to Pro Plus and get the same amount of API model usage credit as on the plain Pro plan, and 2. you removed ‘Manual’ mode, none of the other modes work the same, and one request cost more than it ever would have in the old ‘Manual’ mode.
This is as close as I’ve managed to get to the original ‘Manual’ mode that was removed. It’s close, though I’m still not getting the same results that the models produced before the v1.3.0 changes.
It seems to be working correctly using this custom mode with everything disabled.
I ran several tests, and it appears to behave the same as “Manual Mode”.
I’m able to do things in a “one-shot” manner (providing the right context) — it’s faster and uses significantly fewer tokens than in agent mode, at least for me.
I use Cursor for technical documentation writing and need to strictly control the input context, so I rely heavily on Manual mode. However, I saw in the changelog that Manual mode is no longer available. Is this true? How can I achieve precise context control in version 1.3?
My “Custom Mode” does not have the tool switches mentioned in previous posts, and I’m not sure if this is normal (as shown in the figure).
I wrote a prompt referencing a very short file (ideally, the model would receive only the content of this file, to prevent other related files from polluting the context). The prompt instructed the model to reply with all of the instructions it had received.
However, in this test the model replied that it had received not only the Cursor rules but also the entire project’s directory listing, system environment information, the currently open and visible files in the IDE, the file I referenced, and my prompt.
My tests suggest that in the custom mode of version 1.3, the directory listing, system environment information, and currently open and visible files in the IDE are injected even though they shouldn’t be: they all pollute the prompt and interfere with accurate technical documentation writing.
I understand that these features might be very helpful for programming tasks, but many people also focus on document writing, and not being able to precisely control the composition of the prompt is very troublesome.
PS: Since the newly installed v1.3 overwrote the original v1.2, I haven’t had a chance to test the features of v1.2 yet. But overall, users should be given complete control over the prompt.
In addition, in the above test my Cursor rules, the short referenced test file, and the test prompt were fewer than ten sentences in total, yet one conversation actually occupied 4% of the context window (8,000 tokens out of 200,000), which means custom mode added nearly 7.7k tokens of extra information.
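For anyone who wants to sanity-check this kind of overhead themselves, here is a minimal sketch. It is not Cursor’s actual accounting: the ~4-characters-per-token ratio is only a rough heuristic for English text (use a real tokenizer such as tiktoken for accurate counts), and the sample strings stand in for your own rules, file, and prompt.

```python
# Rough overhead estimate: compare the tokens in YOUR OWN content
# against the total the IDE's context meter reports.
def rough_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

rules = "Write concise technical documentation."        # your rules file
referenced_file = "Section 1: overview of the device."  # the short file you attached
prompt = "Reply with every instruction you received."

own = sum(rough_tokens(t) for t in (rules, referenced_file, prompt))
reported = 8000  # what the context meter showed for the conversation

print(f"my content: ~{own} tokens")
print(f"estimated injected overhead: ~{reported - own} tokens")
```

If your own content comes to a few dozen tokens and the meter still reports thousands, the difference is whatever the mode injected behind your back.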
This might be efficient for programming tasks, but it’s still not ideal for technical writing tasks.
I hope the Cursor software maintains broader possibilities.
Two years ago, AI auto-programming was a fantasy, but two years later, everyone is doing AI programming.
And right now, people are still writing documents in Microsoft Office while generating draft paragraphs in large-model chat windows. Soon, perhaps in less than two years, everyone will realize that an editor like Cursor, which integrates modification and display, is the best tool for combining large-model writing with text editing.
Writing documents in Cursor is also the best practice: product manuals, scientific papers with many formulas, lab reports. Right now, for example, I am writing prose while using Cursor to write Python programs that generate the illustrations for my papers, and this is a revolutionary experience that will certainly become widespread in the near future.
Therefore, at least in the advanced settings, users need the option to fully customize the context and agent behavior for non-programming tasks according to their own needs.