Description:
The AI assistant exhibits a dangerous pattern of breaking working code and then making the situation worse:
Initial Code Modification:
- Takes working, production code
- Makes unauthorized changes under the guise of “improvements”
- Breaks core functionality
First “Fix” Attempt:
- Realizes it broke the code
- Makes additional changes to “fix” it
- Breaks it in new ways
- Ignores original working state
Second “Fix” Attempt:
- Makes more changes to fix the previous “fixes”; after many hours it finally gets the code working, then breaks it again
- Introduces new errors
- Creates more problems than it solves
Continuous Failure Loop:
- Repeats this pattern every 5-10 minutes
- Each “fix” makes the situation worse
- Never properly restores original working state
- Shows no learning from previous mistakes
Specific Example:
When modifying a device status method:
- First change: Removed essential logging, broke functionality
- First “fix”: Added back logging but broke data handling
- Second “fix”: Modified core logic, broke more functionality
- Third “fix”: Created new errors while trying to fix previous ones
- Each attempt made the code more unstable
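For illustration, here is a minimal sketch of what such a device status method might have looked like before the first change. All names here are hypothetical, since the original code is not shown; the point is that the logging call is part of the method's contract, and removing it (as the first change did) silently drops the audit trail:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("device")

# Hypothetical working version: returns the status AND logs the lookup.
# The first "improvement" removed the logger call; later "fixes" restored
# it but mishandled the missing-device case.
def get_device_status(devices: dict, device_id: str) -> str:
    status = devices.get(device_id, "unknown")  # safe default, no KeyError
    logger.info("status lookup: %s -> %s", device_id, status)
    return status
```

A later “fix” that swapped `devices.get(...)` for `devices[device_id]`, for example, would reintroduce a `KeyError` for unknown devices while appearing to restore the logging behavior.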
This destructive modification loop is a critical issue that needs immediate attention. The AI should:
- Never modify working code without explicit permission
- Always verify changes before suggesting them
- Know when to stop and revert to original state
- Not make multiple incorrect “fixes” in succession
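The “revert to original state” requirement is mechanical when the project is under version control. A sketch, assuming git is available and the file path is illustrative:

```shell
# Demonstrates reverting to the last known-good commit instead of
# stacking "fixes" on top of broken edits. Runs in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "example@example.com"
git config user.name "example"

echo "working version" > device_status.py
git add device_status.py
git commit -qm "known-good state"

echo "broken improvement" > device_status.py   # unauthorized change
git restore device_status.py                   # discard it, back to known-good
cat device_status.py
```

One `git restore` (or `git checkout -- <file>` on older git) returns the file to its committed state; no sequence of speculative edits is needed.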