Excessive Prompts & Breaking Features on Simple Tasks

Where does the bug appear (feature/product)?

BugBot

Describe the Bug

I’ve been testing Cursor for over a month and consistently run into issues where even basic functionality requires excessive prompting and still fails. For example, building a simple pop-up form to add, delete, or rename folders has taken more than 8 hours and dozens of prompts across two accounts.

Steps I’ve already tried:

  • Fresh sessions to avoid context buildup
  • Short, medium, and detailed prompt styles
  • Planner mode and third-party tools (e.g., Perplexity) to refine prompts
  • Multiple models (Claude Sonnet, GPT variants, Composer)
  • Iterative breakdown of tasks into smaller steps

Despite this, the model often produces partial solutions that break other features, creating a loop of fixing one issue only to generate new ones. This makes progress on even straightforward MVP features extremely slow.

Has anyone else experienced similar problems with basic features requiring excessive prompts or causing regressions? If so, what workflows or strategies have actually resolved this in practice?

Thanks,
Michel Mitri

Steps to Reproduce

It happens with every prompt and every new plan I implement.

Expected Behavior

  1. Build an effective plan that works the first time
  2. Require only a limited number of fixes, if any
  3. When implementing a fix or a new feature, do not break other features

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.1.19 (system setup)
VSCode Version: 1.105.1
Commit: 39a966b4048ef6b8024b27d4812a50d88de29cc0
Date: 2025-11-21T22:59:02.376Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Windows_NT x64 10.0.26200

For AI issues: which model did you use?

All without exception

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey, thanks for the report. What you’re experiencing is a common challenge with AI-assisted development on complex features, and I can suggest some strategies that have been helpful for other users.

Try these workflow adjustments:

  1. Use Ask Mode first - Before implementing, use Ask Mode (not Agent Mode) to discuss the approach. This prevents premature changes by the AI.
  2. Switch to Plan Mode - For medium complexity tasks, use Plan Mode to create a plan you can review before execution.
  3. Limit scope - Agent Mode works best for simple, focused tasks. Break your pop-up form into smaller pieces (structure first, then styling, then functionality).
  4. Add context explicitly - Use @files to specify exactly which files should be modified, reducing the chance of breaking unrelated code.
  5. Create .cursorrules - Add a .cursorrules or .cursor/rules file in your project root with explicit constraints like “only modify files I explicitly mention” or “ask before making changes to existing functions.”
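For point 5, here is a minimal sketch of what such a rules file might contain. (The exact filename convention depends on your Cursor version; `.cursorrules` in the project root is the older format, and the wording of the rules below is illustrative, not an official template.)

```
# .cursorrules — project-level constraints for the agent
- Only modify files I explicitly mention with @file references.
- Before changing an existing function, list its call sites and ask for confirmation.
- After each change, state which other features could be affected and how to verify them.
- Prefer small, reviewable diffs over large rewrites.
```

Rules like these won’t eliminate regressions, but they bias the agent toward narrower, more reviewable changes, which helps with exactly the “fix one thing, break another” loop you’re describing.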

Can you share more details:

  • Which mode are you using (Agent/Plan/Ask)?
  • Do you have a .cursorrules file configured?
  • What specifically breaks when it generates code (errors in console, visual bugs)?
  • How large is your codebase (number of files)?

This thread has similar experiences with good suggestions: Is there better tool out there than cursor?

Thanks for your kind feedback. I’ll definitely take all of these notes into consideration. I’ve also provided more context in this related post: Many Prompts to solve a simple task! - Discussions - Cursor - Community Forum

To clarify, I’ve already experimented with Ask Mode, Plan Mode, and Agent Mode in different scenarios. I’ve also tested prompt variations (short, detailed, structured) and even used Planner and third-party tools like Perplexity to refine execution. Despite this, I continue to run into issues where very basic features consume hours and dozens of prompts.

Examples:

  • Soft delete/trash system: Every time I attempt to implement soft delete (move file to Trash, then allow user to choose between soft delete or full delete), the application crashes. It took over 2 hours and 12+ prompts, but each fix introduced new regressions.
  • Folder management system: Adding a simple pop-up to add/delete folders took over 9 hours and 20–30 prompts. Each iteration either repeated the same bug or created new ones.
  • Filter/sort menu: Clicking items in the menu unexpectedly changes location. Despite multiple prompts and Planner use, the issue persists.

The solution I’m building (a DAM application) is complex, but these are foundational features. If the base functionality were stable, I could reduce complexity by templating and reusing components rather than re-describing them for every new feature.

The most critical issue is that I’ve consumed my monthly usage quota just trying to fix these basic features, and I still haven’t moved on to more complex work such as AI face recognition and detection. Those features were working in a previous version, but after a cascade of errors I had to revert, losing a full week of work and rebuilding from scratch, which is devastating.

To answer your questions directly:

  • Modes used: Ask, Plan, and Agent (depending on task).
  • cursorrules file: Not yet configured, but I see how this could help constrain changes.
  • Breakage: Typically console errors or crashes, sometimes visual bugs, and often regressions in unrelated features.
  • Codebase size: Medium-sized, with multiple modules for file management, UI, and integrations.

I’d appreciate any specific guidance on how to stabilize these foundational features so I can build on them without spending disproportionate time on repetitive fixes.