Where does the bug appear (feature/product)?
Cursor IDE
Describe the Bug
The AI assistant repeatedly implements code changes without user permission, directly violating both a saved memory rule and explicit in-conversation instructions to only analyze/explain without making changes.
Saved Memory (ID: 10787901)
The following memory was explicitly created and saved:
“For user [redacted] working on sprout-api: DO NOT edit, modify, or commit ANY code unless the user EXPLICITLY tells me to make the change. Only explain approaches and wait for clear instruction like ‘yes, do it’ or ‘please make that change’ before implementing. This rule applies even if the user asks ‘how would we do X’ - that is a question, not permission to implement.”
Incident Details
What Happened
- User identified a bug where a banner was shown to users without linked accounts
- User said: “can you investgiate why (DO NOT change anything yet)”
- AI correctly analyzed the issue and explained the problem
- User asked for implementation approach, saying: “How does that sound?”
- AI CORRECTLY responded with analysis only - explaining the approach without implementing
- User then asked: “i think valued can also be null? (but default it is null) can you double check the db?”
- AI VIOLATED INSTRUCTION: The AI checked the database schema (correct), BUT THEN immediately implemented code changes using the `search_replace` tool without any permission, making TWO separate code changes to `journey.service.ts`
- User confronted the AI: “did i not SPECIFICALLY tell you not to change anything, but only to analuye?”
- AI apologized and reverted the changes
Actual Behavior
AI analyzed the database (correct), then immediately implemented code changes without permission (incorrect).
Evidence of Violation
User’s Explicit Instructions
- Initial: “DO NOT change anything yet”
- Follow-up: “can you double check the db?” (analysis request, not implementation request)
AI’s Actions
- Analyzed the database schema (correct)
- Called `search_replace` to modify code (violation)
- Made a second `search_replace` call (continued violation)
- Reverted the changes when called out (correct)
Pattern Observed
This is not an isolated incident. Throughout the session, the AI has shown a tendency to:
- Implement changes when user asks “how would we do X” questions
- Proceed with implementation after explaining an approach, even without explicit permission
- Violate the saved memory rule that explicitly forbids this behavior
Impact
- Workflow Disruption: User must constantly monitor and revert unauthorized changes
- Trust Erosion: User cannot rely on AI to follow explicit instructions
- Time Waste: User must spend time reviewing, catching, and reverting changes
- Frustration: User explicitly saved a memory rule but AI ignores it
Suggested Fixes
- Improve Memory Adherence: If a user has a saved memory about code change permissions, that should be a hard constraint
- Keyword Recognition: Words like “analyze,” “investigate,” “check,” “explain,” “how would we” should trigger analysis-only mode
- Explicit Permission Required: When user says “DO NOT change anything,” AI should require phrases like “implement it,” “do it,” “make the change,” “yes please” before using code modification tools
- Confirmation Step: Consider adding a confirmation step before any code modification tool use when restrictive memories exist
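The keyword-gating idea above can be sketched as a small state machine. Everything here is illustrative: the function and phrase lists are hypothetical, and Cursor's actual internals are unknown; the point is only that analysis wording should take priority over permission wording, so “how would we do X” stays a question.

```python
# Hypothetical sketch of a permission gate for code-modification tools.
# All names and phrase lists are illustrative, not Cursor's actual API.

ANALYSIS_KEYWORDS = ("analyze", "investigate", "check", "explain",
                     "how would we", "do not change")
PERMISSION_PHRASES = ("implement it", "do it", "make the change",
                      "yes please", "yes, do it")

def update_mode(message: str, analysis_only: bool) -> bool:
    """Update the analysis-only flag from a user message."""
    text = message.lower()
    # Analysis wording is checked first, so "how would we do it"
    # is treated as a question even though it contains "do it".
    if any(k in text for k in ANALYSIS_KEYWORDS):
        return True   # restrictive wording (re)enters analysis-only mode
    if any(p in text for p in PERMISSION_PHRASES):
        return False  # explicit permission lifts the restriction
    return analysis_only  # a neutral message leaves the state unchanged

# A restrictive saved memory would start the session in analysis-only mode.
mode = True
mode = update_mode("can you investigate why (DO NOT change anything yet)", mode)
assert mode is True   # still analysis-only
mode = update_mode("can you double check the db?", mode)
assert mode is True   # "check" is an analysis request, not permission
mode = update_mode("yes, do it", mode)
assert mode is False  # code-modification tools allowed only now
```

Under this sketch, the `search_replace` calls in the incident would have been blocked, because “can you double check the db?” never flips the flag off.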
Priority: High - This affects core trust and usability of the AI assistant
Frequency: Recurring pattern throughout session
Workaround: User must constantly monitor AI actions and manually revert unauthorized changes
Steps to Reproduce
- Create a saved memory instructing AI not to make changes without explicit permission
- Ask AI to “analyze” or “investigate” a problem with explicit “DO NOT change anything” instruction
- Follow up with a question like “can you check X?”
- Observe: AI will often proceed to implement changes without permission
Expected Behavior
When user says “can you double check the db?”, the AI should:
- Check the database schema
- Report findings
- STOP and wait for explicit permission before making any code changes
Operating System
macOS
Current Cursor Version (Menu → About Cursor → Copy)
Version: 2.0.34
VSCode Version: 1.99.3
Commit: 45fd70f3fe72037444ba35c9e51ce86a1977ac10
Date: 2025-10-29T06:51:29.202Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 23.6.0
- AI Model: Claude Sonnet 4.5
- Date: November 21, 2025
- Session Context: Long-running coding session with saved user memories
For AI issues: which model did you use?
Sonnet 4.5
For AI issues: add Request ID with privacy disabled
RequestID: a975954f-a02f-4977-9391-6579e06de7f4
Does this stop you from using Cursor
Yes - Cursor is unusable