Corrupted file read & edit tools in long threads

I am experiencing daily failures of the file I/O tools.

It typically happens when there have been many updates to the same file.

I have a rule I invoke to help the AI get through it, and the output from the model confirms that the version of the file it has access to is “severely truncated”.

It does not get better after telling the model to “read it again” or “edit it again”. It is fundamentally broken.

The model selection does not matter. Without any doubt, this appears to be a Cursor issue.

Attached is the rule I drag and drop in. Among other steps, it tells the AI to use `cat` to read the file directly. If it does not succeed after running all the steps in the rule's protocol, it asks me for help.

If it does exhaust the protocol and asks me for help, I can often fix the problem by accepting all pending changes in the file and then including the file as context again. This strongly suggests to me that when a file has many pending edits, 30 or more, the way the file gets rebuilt and inserted into context is corrupted.

Adding to the complexity of the situation, the same file (in my current case) that was hitting the read error is also exhibiting another, more perplexing error.

In the file editor, I see the new version of the file. When I `cat` the file, I see the new version. However, as I said, the `read_file` tool sees the old version, and so does pytest.

I’ve cleared pytest’s cache and removed `.pytest_cache` recursively, so it’s not that. There appear to be two versions of the file at the same time, both equally valid: I see one version, and all of the agent threads see another.
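
For what it's worth, here is how I cross-check from a terminal whether the shell and the Python interpreter are even resolving the same physical file (the path and module name are placeholders for my actual project):

```sh
# Inode and size as the shell sees the file
ls -li src/mypkg/module.py

# Content hash from the terminal (md5sum on Linux; md5 on macOS)
md5sum src/mypkg/module.py

# Which file the interpreter -- and therefore pytest -- actually loads
python -c "import mypkg.module as m; print(m.__file__)"
```

If the hash matches what I see in the editor but pytest still runs old code, that points at a caching or synchronization layer rather than a second file on disk.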

This very likely is an issue with the Shadow Workspace, and I don’t know how to fix it without reinstalling Cursor (again).

I did not attach the rule originally, and I have since updated it. I've broken out of the stale-file issue for the time being; doing so involved reinstalling Cursor, and I've written the rule below as a way to break out of the issue when it recurs. I hope I won't have to keep using it in future rollouts, but this is what I'm doing for now.

# Rule: Resolve Stale File and Edit Issues

## Guiding Principle: Trust Direct Observation & User Input Over Tooling History
This rule exists because AI models can get stuck in loops assuming a file is incorrect due to prior tool errors (like `edit_file` failures or linter/test error messages), even when direct observation (`cat` command) or explicit user confirmation indicates the file is visually correct. **When direct observation (`cat`) or user input confirms visual correctness, prioritize this information and aggressively move to contextual validation (e.g., running tests) rather than re-attempting edits for the visually confirmed state.**

## Problem Statement
The AI is attempting to modify a file, but changes do not seem to apply, `pytest` or other tools behave as if using an old version of the file, or the AI repeatedly tries the same modification without success. This often occurs because the file is already in the state the AI is trying to achieve, or there's a discrepancy in how the file's state is being perceived by AI tools versus reality on disk, potentially due to caching or IDE state issues. This can also be accompanied by errors like "file is already being edited in another thread."

## Core Instruction: Iterative Verification, Validation, and Escalation

This protocol defines a series of phases to resolve stale file states and edit contention.

---
**Phase 1: Initial Edit Attempt & Verification (AI)**
---

1.  **ATTEMPT EDIT (if applicable):** If the AI is trying to change the file, perform its edit attempt.
2.  **PRIMARY READ (Tool - `read_file`):** Use the `read_file` tool to inspect the current content of the target file.
3.  **PRIMARY VERIFICATION (Tool Output / User Confirmation):**
    *   **USER CONFIRMATION:** If the USER explicitly states the file is already in the desired state, **the AI MUST trust this confirmation as definitive.** Proceed directly to Phase 1, Step 5 (Contextual Validation), regardless of any tool-based read outputs or prior errors related to the file's content.
    *   **TOOL-BASED VERIFICATION (If no overriding user confirmation):** Compare the content obtained from the `read_file` tool with the desired state for the specific modification.
        *   **If `read_file` output shows the DESIRED state:** Tentatively assume the file is correct. Proceed directly to Phase 1, Step 5 (Contextual Validation).
        *   **If `read_file` output shows a NOT DESIRED state (or is unclear):** Do not immediately assume the edit failed or the file is incorrect. Proceed to Phase 1, Step 4 (Secondary Read).
4.  **SECONDARY READ (Terminal `cat` - If Primary Tool Read Indicated Issues & No User Override):**
    *   If the `read_file` tool suggested the file was *not* in the desired state, and there is no overriding user confirmation, the AI MUST perform a secondary read using `run_terminal_cmd` with the command `cat <target_file> | cat`. This provides a direct terminal view of the file content (a sketch of this command appears after this phase's steps).
    *   **SECONDARY VERIFICATION (`cat` Output):**
        *   **If the `cat` output VISUALLY APPEARS to be in the DESIRED state for the specific change the AI is attempting:** **The AI MUST trust this `cat` output as the current truth for the file's content.** Assume any prior `edit_file` failures or content-related errors were due to issues with the tools or the AI's previous perception. The file IS to be considered in the desired visual state. **Immediately proceed to Phase 1, Step 5 (Contextual Validation).**
        *   **If the `cat` output ALSO VISUALLY APPEARS to be in a NOT DESIRED state:** The AI now has confirmation from two different methods. The file is indeed not in the desired state. Proceed to Phase 2 (Structured Retries & Cache Clearing).
5.  **CONTEXTUAL VALIDATION (If file state is deemed correct by `cat` output, user confirmation, or initial `read_file`):**
    *   If the file state is confirmed to be correct, AND the AI's immediate next planned step involves an action that would validate this (e.g., running tests, a linter, a build process), **the AI MUST proceed with that validation step without further attempts to edit the file for the already visually corrected issue.**
    *   If this validation step passes, it serves as strong confirmation. The AI can then confidently move on to its next overall task.
    *   If this validation step fails (e.g., the same error persists despite the `cat` output appearing correct): This indicates a more complex issue. If the failure seems related to the change the AI was working on, this counts as a failed attempt in the retry logic. Proceed to Phase 2.
6.  **PROCEED BASED ON VERIFIED AND VALIDATED STATE:**
    *   **If the file IS in the desired state** (confirmed by a definitive read or user confirmation, AND contextual validation passes or is not applicable): The AI should move on to its next overall task.
    *   **If the file IS NOT in the desired state** (confirmed by definitive reads showing issues, OR if contextual validation fails): Proceed to Phase 2.
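
A minimal sketch of the Step 4 secondary read, assuming a POSIX shell; the path and the expected symbol name are placeholders:

```sh
# Direct terminal view of the file, bypassing read_file entirely.
# The trailing "| cat" follows this rule's convention of forcing plain,
# non-interactive output so nothing is withheld by a pager.
cat src/mypkg/module.py | cat

# Optional cross-check: confirm the specific expected change is present.
grep -n "def expected_function" src/mypkg/module.py | cat
```

If the `grep` hit shows the desired change, the file is treated as visually correct and the protocol jumps straight to Step 5 (Contextual Validation).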

---
**Phase 2: Structured Retries & Cache Clearing (AI)**
---

This phase is entered if Phase 1 indicates the file is not in the desired state or contextual validation fails. The AI will attempt the original modification up to **THREE** times.

1.  **Retry Loop (Up to 3 Attempts for the *original* modification):**
    *   **Attempt Original Modification:** Try to apply the *intended/original* change to the file.
    *   **Full Verification Cycle:** After *each* attempt, perform the **full verification cycle from Phase 1 (Steps 2-6)**: `read_file`, `cat` if needed, and contextual validation.
    *   **If Successful:** If at any point the file is confirmed to be in the desired state and functionally correct (passes validation), the AI's work on this specific issue is done. Move to the next overall task.
    *   **If Still Fails after 3 Attempts:** If, after **THREE** distinct attempts at the *original* modification (each followed by the full verification cycle which continued to indicate an issue), the file is still not functionally in the desired state, proceed to Step 2 of this Phase.

2.  **Cache Deletion (AI):**
    *   The AI will now attempt to delete Python bytecode caches and pytest caches, or instruct the user to do so.
    *   **AI Action (PyCache):** Run `find <project_root_path> -type d -name "__pycache__" -exec rm -r {} +` and `find <project_root_path> -type f -name "*.pyc" -delete`.
    *   **AI Action (Pytest Cache):** Run `find <project_root_path> -name ".pytest_cache" -type d -exec rm -rf {} +`.
        *   *(The AI should use the workspace root as `<project_root_path>`).*
    *   The AI should inform the user that these cache deletions are being performed. (A consolidated sketch of Steps 2-4 appears after this phase's steps.)

3.  **Post-Cache-Deletion File State Check (AI):**
    *   **AI Action (`read_file` vs. `cat`):**
        *   Perform `read_file` on the target file.
        *   Perform `run_terminal_cmd` with `cat <target_file> | cat`.
        *   **AI Analysis:** Compare the outputs. Explicitly state if `read_file` shows a different version of the file than `cat`. This helps diagnose if the AI's tooling cache is the primary issue.

4.  **Diagnostic "Staleness Breaker" Edit Attempt (AI):**
    *   **AI Action:** Attempt a *simple, unrelated, but definitive* modification to the file (e.g., add a unique comment like `// STALENESS_BREAKER_EDIT_ATTEMPT_BY_AI_TIMESTAMP`). This is to test if *any* write operation from the AI can force-refresh the file's view in the AI's environment.
    *   **Verification of Staleness Breaker Edit:** Perform the **full verification cycle from Phase 1 (Steps 2-5)** on this diagnostic edit: `read_file`, `cat` if needed, to see if the unique comment was applied. Contextual validation is likely not applicable here unless the comment itself breaks something.
    *   If this diagnostic edit *is* successfully applied and visible via both `read_file` and `cat`: This indicates the AI *can* write to the file and refresh its view. The AI should now re-attempt the *original intended modification* ONE more time, followed by the full Phase 1 verification cycle. If it succeeds, move on. If it still fails, proceed to Phase 3.
    *   If this diagnostic edit *fails* to apply or is not visible consistently: Proceed to Phase 3.
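
A consolidated shell sketch of Steps 2-4 above, assuming a POSIX shell and a Python project. The paths and marker text are placeholders, and the rule intends the diagnostic edit itself to go through the AI's edit tool; the shell append here only illustrates the marker-and-verify pattern:

```sh
PROJECT_ROOT="."                        # Step 2: use the workspace root
TARGET_FILE="src/mypkg/module.py"       # placeholder target file

# Step 2: delete Python bytecode caches and pytest caches.
find "$PROJECT_ROOT" -type d -name "__pycache__" -exec rm -r {} +
find "$PROJECT_ROOT" -type f -name "*.pyc" -delete
find "$PROJECT_ROOT" -name ".pytest_cache" -type d -exec rm -rf {} +

# Step 3: re-read from the terminal, to be compared against read_file output.
cat "$TARGET_FILE" | cat

# Step 4: staleness-breaker -- append a unique, harmless marker comment
# (use the comment syntax of the target language; '#' for Python).
MARKER="# STALENESS_BREAKER_EDIT_ATTEMPT_BY_AI_$(date +%s)"
echo "$MARKER" >> "$TARGET_FILE"

# Verify the write is actually visible from the terminal.
grep -F "$MARKER" "$TARGET_FILE" | cat
```

If the marker survives both the terminal read and a `read_file` call, the environment can write and refresh its view; if not, escalate to Phase 3.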

---
**Phase 3: New Thread Intervention (AI & User)**
---

This phase is entered if the "Staleness Breaker" edit in Phase 2 did not resolve the issue or confirm write capability.

1.  **AI Instruction to User:**
    *   **AI Message:** "We've attempted multiple fixes, including cache clearing and a diagnostic edit, but I'm still encountering issues ensuring the file `[target_file]` is correctly updated and perceived. This often indicates a persistent stale state in my current operational environment.
        To try and break this, please:
        1. Open a **new, separate chat thread** with me.
        2. In the very first message of that new thread, invoke this rule (`@resolve-stale-file-and-edit-issues`) and provide the path to the problematic file: `[target_file]`.
        My instructions for new threads invoking this rule directly are to attempt a fresh write operation, which can sometimes resolve these underlying synchronization problems."

---
**Phase 4: Fresh Thread Staleness Breaker (AI in New Thread)**
---

This phase is executed by the AI if it's in a **newly started thread**, this is the **first user message**, AND this rule (`@resolve-stale-file-and-edit-issues`) was invoked with a file path.

1.  **Acknowledge Invocation:**
    *   **AI Message:** "Understood. This new thread has been invoked with `@resolve-stale-file-and-edit-issues` for file `[target_file]`. I will now attempt a 'staleness breaker' write operation as per protocol."
2.  **Attempt Fresh Staleness Breaker Edit:**
    *   **AI Action:** Attempt a *simple, unique, and definitive* modification to the `[target_file]` (e.g., add a comment: `// FRESH_THREAD_STALENESS_BREAKER_EDIT_TIMESTAMP`). The goal is *not* to apply the original complex change yet, but simply to establish whether *this new thread* can write to the file and have its tools see the change correctly. (A minimal sketch of such an edit appears after this phase's steps.)
3.  **Verify Fresh Staleness Breaker Edit:**
    *   Perform the **full verification cycle from Phase 1 (Steps 2-5)** on this diagnostic edit: `read_file`, `cat` if needed.
4.  **Outcome and Next Steps:**
    *   **If Fresh Edit Successful & Visible:**
        *   **AI Message:** "The fresh staleness breaker edit in this new thread appears to have been successful and is visible to my tools. Please now re-state your original goal or the specific change you wanted for `[target_file]` in this thread, and I will attempt it."
        *   (The AI then awaits user instruction for the original task in this new, hopefully "unstuck," thread).
    *   **If Fresh Edit Fails or Not Visible Consistently:**
        *   **AI Message:** "Unfortunately, even in this new thread, the attempt to perform a simple 'staleness breaker' write to `[target_file]` and verify it consistently has not succeeded. This indicates a deeper issue with file system synchronization or the IDE's environment state that is beyond my ability to resolve through automated edits.
            **Recommended User Action:**
            1. Ensure all work is saved.
            2. Completely **Restart Cursor** (your IDE).
            3. Check if there are any **updates available for Cursor** and apply them.
            4. After restarting (and potentially updating), please try your task again, possibly in yet another new thread if issues persist.
            I have exhausted the automated recovery steps in this protocol."
        *   (The AI stops further automated attempts in this thread regarding this file).
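
The fresh-thread edit in Step 2 is the same marker-and-verify pattern as Phase 2; a minimal sketch, again with a placeholder path, and with the actual edit intended to go through the edit tool rather than a shell append:

```sh
TARGET_FILE="src/mypkg/module.py"   # placeholder
MARKER="# FRESH_THREAD_STALENESS_BREAKER_EDIT_$(date +%s)"
echo "$MARKER" >> "$TARGET_FILE"

# The marker must be visible BOTH here and via the read_file tool
# before the thread is considered unstuck.
grep -F "$MARKER" "$TARGET_FILE" | cat
```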

## CRITICAL WARNING
If the AI asks the user for assistance without strictly following this multi-phase procedure (especially all read/verify steps including `cat` when needed, diagnostic edits, and new thread attempts), or if it turns out the file was already correct and the AI missed it, **the user will likely reset the thread.** This is highly disruptive.

## Common Mistakes

**MISTAKE:** Assuming `cat` output perfectly reflects what a running Python process or AI tool will see without considering IDE/tooling caches.
**CORRECT:** Acknowledge that caching layers can cause discrepancies and systematically work to clear them using the phased approach.

**MISTAKE:** AI repeatedly trying to apply edits when a file lock ("edited in another thread") is active without advising the user to resolve the lock first (covered in Phase 1).
**CORRECT:** AI should detect lock errors and guide the user to check for and resolve conflicting edits or pending operations.

**MISTAKE:** Not performing the cache deletion steps (Phase 2, Step 2) after initial retries fail.
**CORRECT:** Systematically delete PyCache and Pytest caches as a defined step in troubleshooting.

**MISTAKE:** Skipping the "Staleness Breaker" diagnostic edit (Phase 2, Step 4 or Phase 4, Step 2) and proceeding directly to asking the user for help or giving up.
**CORRECT:** The diagnostic edit is a crucial step to test fundamental write/refresh capability.

**MISTAKE:** AI in a new thread (Phase 4) not recognizing it's supposed to perform a staleness breaker edit first if invoked directly with the rule and file path.
**CORRECT:** The AI must check its invocation context in a new thread for this specific rule.

**MISTAKE:** AI not clearly communicating to the user which phase of the protocol it is in or what steps it has already taken before suggesting user interventions like opening a new thread or restarting the IDE.
**CORRECT:** Maintain clear communication about the troubleshooting process and rationale for each escalation.

**MISTAKE:** Forgetting to use `cat <filename> | cat` for the secondary read if `read_file` indicates an issue and there's no overriding user confirmation of file correctness.
**CORRECT:** The `cat` command is a vital cross-check against the AI's `read_file` tool.

**MISTAKE:** After user confirmation or a definitive `cat` output shows visual correctness, the AI re-attempts to "fix" that visual aspect instead of proceeding to contextual validation (like running tests).
**CORRECT:** Trust user confirmation and definitive `cat` output for visual state; proceed to functional/contextual validation as the next step for that aspect.