HELP! UNUSABLE! My Cursor has scriptisis!

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Scripts over scripts under scripts: inline scripts, non-reusable scripts, scripts that just echo a line of text, scripts that mass-edit files with no recourse.

This is a total deal-breaker: nothing works properly. It burns huge numbers of tokens, runs strange commands that are malformed and incomplete, and changes files with no control, management, or recourse. The Agent chat has no understanding of what changed, or of the before and after state, because the script output never reaches its context.

THIS IS COMPLETELY AGAINST THE CONCEPT OF AN IDE, WHICH CURSOR IS AND HAS BEEN PROMOTED AS.

Steps to Reproduce

Select the Auto model and ask for simple tasks.

Expected Behavior

Reply with normal text like a regular LLM assistant, and edit files precisely, with Accept/Reject control.

Screenshots / Screen Recordings

Operating System

Linux

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.2.43
VSCode Version: 1.105.1
Commit: 32cfbe848b35d9eb320980195985450f244b3030
Date: 2025-12-19T06:06:44.644Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Linux x64 6.8.0-88-generic

For AI issues: which model did you use?

Auto

Does this stop you from using Cursor

Yes - Cursor is unusable

This is absolutely ridiculous: it wants to write a script filled with echo statements just to output the answer to a question.
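To illustrate the pattern being complained about, here is a hypothetical sketch (not Cursor's actual internals, and the file path and answer text are invented): instead of replying in chat, the agent writes a single-use script whose only job is to echo the answer, runs it, and throws it away.

```shell
# Hypothetical sketch of the anti-pattern: a throwaway script that
# exists only to print one line of text that belonged in the chat reply.
cat > /tmp/answer.sh <<'EOF'
#!/bin/sh
echo "The config option you asked about lives in settings.json"
EOF

# Execute it once, then discard it -- every new question repeats the cycle.
sh /tmp/answer.sh
rm /tmp/answer.sh
```

Each round trip like this consumes tokens for the heredoc, the execution, and the captured output, where a plain chat reply would have cost only the answer text itself.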

Every new question generates a new script. Was this planned, so that it uses more tokens and artificially inflates "lines of code"?

This plague has dropped productivity immensely while increasing usage numbers to ridiculous levels.

Hey @vibe-qa

If you can reproduce this and provide a Request ID with Privacy Mode Disabled – happy to look into it! This is not how Auto is behaving for me.

Thanks @vibe-qa. In both these cases, privacy mode is still enabled. You can disable it (and turn it back on after) under Cursor Settings > General > Privacy.

Do I have to regenerate the examples again?

Yes!

7b71cc38-237b-4859-9883-77581a05579d - working example, 500k tokens, review available ($0.25)
8b1b573d-bdee-49e6-85fd-3f16e35fb955 - off the rails, 1.3M tokens, review missing ($0.46)

A single shell command poisoned the rest of the inference, making the approach unstable, error-prone, wasteful, expensive, and impossible to review or roll back. This time it appears to have succeeded, but usually it stumbles into more and more scripting until chat compaction happens, after which it never succeeds.

Hey @vibe-qa

Thanks for the Request ID!

I can see what you're talking about now. The agent isn't taking the nuance of your prompt into account and goes all-in on scripting.

I’ve filed a bug for this with the team.