Cursor has no working re-anchoring before file edits

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I’ve noticed that Cursor very often tries to edit files without re-anchoring first, i.e. without re-reading the file’s current state. From the perspective of a standalone LLM that is understandable, but not from the perspective of an orchestrating AI IDE.

At the moment, the end user has to ensure this themselves via system prompts in their IDE so that a re-anchoring step is applied before a file is edited. Very often, when working with high-quality models, especially when the context window is already very long and a large file is then edited, Cursor simply starts editing the entire file without re-anchoring first and runs into an error at the end of the edit.

This is not only cost-intensive because the edit fails, but also architecturally inefficient.

This can be prevented by writing a system prompt that always performs a re-anchoring of the file before any file edit. From an enterprise-grade perspective, however, that is not a sound architecture on Cursor’s side, because it could be resolved internally in a simple way through tool scans and checksum checks.

In my view, Cursor MUST ensure that a checksum is stored locally for every single file in the project. As soon as an LLM tries to edit a file, Cursor should first check whether the file’s current checksum still matches the stored one.

  • If the checksum is still the same, the file edit can be carried out.
  • If not, it should not proceed.

There are probably even better techniques than this. In theory, Cursor could also update its own system prompt so that re-anchoring is always done by default before editing.

Whatever the right solution is here is honestly not that important to me. What is extremely frustrating is that, as an end user, I currently have to make sure myself, through the IDE system prompt, that re-anchoring happens before a file is edited, just to avoid edit bugs in Cursor’s edit_file tool.

Steps to Reproduce

See the description above.

Operating System

Linux

Version Information

Version: 2.6.19
VSCode Version: 1.105.1
Commit: 224838f96445be37e3db643a163a817c15b36060
Date: 2026-03-12T04:07:27.435Z
Build Type: Stable
Release Track: Early Access
Electron: 39.4.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Linux x64 6.8.0-90-generic

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey, thanks for the detailed writeup and the architecture suggestions.

The issue you’re describing, where the model uses stale in-context memory of a file instead of re-reading the latest state before building edits, is something the team knows about. It’s especially noticeable in long chats with multiple edits to the same file.

For context, the edit tools do read the file from disk before they apply changes, but the model may build its old_string edit arguments from an older version of the file that is still in its context window. That mismatch is what causes the edit failures you’re seeing.
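That failure mode can be shown with a minimal exact-match replace in the style of string-based edit tools. This is a simplified sketch, not the real tool implementation; the variable names and file contents are invented for the example:

```python
def apply_edit(content: str, old_string: str, new_string: str) -> str:
    """Minimal exact-match replace, as string-based edit tools typically work."""
    if old_string not in content:
        raise ValueError("edit failed: old_string not found in current file")
    return content.replace(old_string, new_string, 1)

# The file on disk has already moved on...
current_file = "const limit = 20;\n"

# ...but the model built its edit from an older snapshot in its context window.
stale_edit = ("const limit = 10;", "const limit = 50;")
```

Calling apply_edit(current_file, *stale_edit) raises the "old_string not found" error, while rebuilding old_string from the file’s current contents succeeds.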

A few things to note:

  • There’s already a mechanism that shows user-made diffs to the model between turns, but it’s only a prompt-level hint, so the model can still work from stale context.
  • Your checksum idea and the idea of forced re-reads before edits are solid suggestions. The team is tracking improvements here, but there’s no ETA right now.

As a workaround, your system prompt approach that forces re-anchoring is actually one of the better strategies. Another user built a more elaborate rule for the same goal here: Corrupted file read & edit tools in long threads. Starting a new chat when you’re working on large files that get edited a lot also helps reduce stale context issues.

@deanrie
Thank you very much for the detailed response. I really appreciate it, and thank you for the additional contribution as well.

Yes, I understand the issue: the LLM still believes the old string exists in the file, and that is why the edit fails. Of course, rereading the files in full every time just to get the current lines is not particularly efficient.

That said, if cost were irrelevant and the issue of inflating the context window were also irrelevant, then the easiest approach would naturally be to simply reread the files every time. From my own experience, I can definitely say that this always works. It is the simplest and easiest solution: through re-anchoring, you make sure that you continuously have the latest lines.

There is, for example, the first scenario where I, as the user, run ESLint AutoFix on a file or manually edit a file myself. Then, in an iterative process, when the LLM comes back to that file, if it does not do re-anchoring first, it fails.

The biggest issue I see behind this is that Cursor, with the Edit-File tool, tries to edit a large file, say 1,500 lines or so, using models like Opus 4.6 Max, for example. Then, only at the very end, on the last line, an error appears saying that the edit failed.

If you are working with a high-reasoning model, the model will first try to find the cause, and then additional token reasoning gets injected into the context. In practice, several euros are simply being thrown away here because the architecture is not modeled correctly in this respect.

If you compare it directly, it would actually be more cost-efficient to simply reread the files each time beforehand.

I can definitely say that I solved this whole issue through system prompts with re-anchoring. However, it is frustrating to keep building this into the system prompt, because it already contains several other workarounds, such as the one from the other post I opened, where I mentioned that I cannot read files with my MCP tools and they get saved into a local file instead, or that there are limits on the maximum number of files that can be read.

My system prompts are gradually starting to bloat, simply because my IDE does not provide these functionalities out of the box. And I already have very large system prompts anyway.

The additional problem I have is that the system prompt also keeps getting indexed into the context again and again. I understand that, as a prompting technique, this is an efficient move, because continuously injecting the system prompt helps avoid hallucinations. But every one of these issues, let’s call them technical debts, where Cursor creates problem areas through certain tools or where I cannot work the way I want, has to be injected into the system prompt again and again. That keeps being added to the context, which makes everything bloat further.

So the most efficient solution is definitely that Cursor MUST ensure that this does not happen.

At the end of the day, I’m not someone who builds AI, coding tools, extensions, or IDEs. So I don’t really know what the actual best practice is here, or how this should best be solved internally.

I don’t know whether continuous re-anchoring is the best approach, or whether it would make more sense to first check locally with some tools what the current lines are and then pass that to the LLM. But in my opinion, the end user should not have to pay for technical debt.

That means, first and foremost, the edit tool must be designed for cases like this. It should not run to 99% completion and only then show an error. It should somehow evaluate in advance whether this problem could occur, which should probably be possible.
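One way such an up-front evaluation could look, sketched under the assumption that an edit is a batch of exact-match (old_string, new_string) pairs; this is a hypothetical design, not how Cursor actually works:

```python
def preflight(content: str, edits: list[tuple[str, str]]) -> list[str]:
    """Return every old_string that no longer matches the current file,
    so a stale batch fails before anything is applied rather than at 99%."""
    return [old for old, _ in edits if old not in content]

def apply_all(content: str, edits: list[tuple[str, str]]) -> str:
    """Apply a batch of exact-match edits only after the whole batch validates."""
    stale = preflight(content, edits)
    if stale:
        raise ValueError(f"edit aborted up front: {len(stale)} stale old_string(s)")
    for old, new in edits:
        content = content.replace(old, new, 1)
    return content
```

The point of the design is that no tokens are spent applying edit after edit before the mismatch is discovered; the whole batch is checked against the current file first.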

Second, it should be considered whether re-anchoring could potentially be moved to the Cursor side in an isolated asynchronous process, with Cursor covering the cost.

I mean, this problem has effectively existed for several years, and it’s nothing new that LLMs hallucinate and keep attempting replace operations with old strings. You can see this across all IDEs and all extensions that try to edit files. In other words, edit tools always look for the exact match of the text to be edited, and if the exact match for the line or string cannot be found, you get an error. These are recurring problems.

That’s why I’m personally not really in favor of end users paying for failed edits. First for reading the file, then for the failed edit, and then afterward, with high-reasoning models, for the model to think through what just failed and retry the edit. Over the course of monthly usage, that adds up to hundreds of euros in technical debt paid by the end user.

If, instead, Cursor were to establish re-anchoring in an isolated, asynchronous way with its own model, for example before each individual file edit, and then provide the LLM with fresh replace strings before the Edit File tool runs, it would probably be possible to reduce the error rate by more than 95%.

That’s just what’s coming to mind for me spontaneously right now. I really don’t know what the right solution is. But I strongly assume that if a bit of energy is put into this, these problems could be eliminated completely.
