Cursor v1.3 - Pre-release Discussions

Thank god, because I find most models absolutely useless aside from Claude 4 Sonnet.

When the chat proposes a command to run, typing in the terminal switches the active text input back to the agent window.

I’m using a custom mode which only has codebase search (no tools). I just updated to 1.3 and the “Apply to” button on code snippets is gone. Is that intentional? I use it frequently — I want to apply changes manually, not have an agent do it for me.

Just checked their documentation and it says “Planning and to-dos are currently not supported for auto mode.”…

So I guess they just decided to remove it for Auto mode — that was literally the ONLY thing enabling it to handle some tasks.

How is removing “manual mode” and the to-do list an improvement? This does not make any sense.


Try coding without Cursor for a day, and get back to me, LOL

Is there any indicator in the status bar, or somewhere similar, showing whether I am currently using my own API key? I often toggle it with Cmd + Shift + 0 and lose track of my current state.

Could you add a feature so that a custom API key and the existing Cursor Pro mode work together, without having to switch one on or off? Thank you.

I’m on the latest version and it’s a bit of a nightmare. “Open in Terminal” causes all actions to cease, whereas the previous “Open in background” allowed the agent to continue working. This effectively brings building to a halt. Most operations hang, and the few that don’t all happen inside this “infinity” Cursor terminal: once you push any command to “Open in Terminal”, it leaves you waiting indefinitely, until it returns an error that “the tool called did not get a response” when you prod it further. The agents also seem less capable — and not just on Auto. As a feature suggestion: if there’s going to be an update button, there should be a rollback button as well.
Version: 1.3.6 (user setup)
VSCode Version: 1.99.3
Commit: 68b8fe7396ea37d8acdaaaa08ba316ba359a4160
Date: 2025-07-30T18:17:09.810Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26100

What a joke

Yes, I ended up switching to Ask mode almost every time and denying it agent tools until it works through the problem and comes up with a solution, rather than chasing the first shiny object it imagines and starting a wild goose chase. Many hours have been lost. But this seems to be a function of the agent: Claude agents are overconfident and off to the races before the gun is fired; OpenAI’s are overbearing with their questions and hesitance; Google’s are depressed and give up at the first sign of trouble. None are governable when asked to follow procedures.


They reflect the employees at their respective companies :smiley: