Major Regression Loop in Next.js/WebRTC State Management + Context Loss Issues

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I am experiencing a severe “regression loop.” After successfully implementing a feature (e.g., WebRTC track replacement for Host video), Cursor inadvertently reverts or breaks that stable logic while attempting a new, unrelated task in the same file.

Specifically, while trying to fix the “Guest sees black screen” issue, Cursor continues to break the Background Blur and VideoEffectProcessor logic that was previously working. It seems to “forget” the established code structure between prompts, even when those files are explicitly tagged in the chat.

Steps to Reproduce

Open a Next.js project with a complex WebRTC hook (useWebRTC.ts).

Use Composer (Agent Mode) to implement a “Virtual Background” processor.

Once stable, ask the agent to fix a different sync issue (e.g., Host-to-Guest visibility).

Observe that the agent rewrites the stable background logic into a broken state or hallucinates outdated function signatures.

Expected Behavior

Persistent State Management: When the AI (Composer/Agent) suggests a fix for one component, it should maintain the existing, functional logic in connected hooks (e.g., useWebRTC.ts) without reverting to older or hallucinated versions of the code.

Seamless Track Replacement: When a host toggles “Background Blur” or a “Virtual Background,” the Guest should immediately see the processed stream via RTCRtpSender.replaceTrack(). The Guest’s screen should never drop to black or show a frozen frame during this transition.
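The intended swap can be sketched in a few lines. This is a hypothetical illustration, not code from the actual project: `VideoTrack` and `VideoSender` are minimal stand-ins for the browser's `MediaStreamTrack` and `RTCRtpSender`, and `toggleBlur` is an invented helper name.

```typescript
// Minimal stand-ins for the browser's MediaStreamTrack and RTCRtpSender,
// so the logic can be shown without DOM types.
interface VideoTrack { id: string; kind: "video"; }
interface VideoSender {
  track: VideoTrack | null;
  replaceTrack(track: VideoTrack | null): Promise<void>;
}

// Toggle between the raw camera track and the processed (blurred) track.
// replaceTrack() swaps the outgoing track on the existing sender, so no
// SDP renegotiation happens and the remote <video> keeps rendering
// continuously instead of dropping to black.
async function toggleBlur(
  sender: VideoSender,
  rawTrack: VideoTrack,
  processedTrack: VideoTrack,
  blurEnabled: boolean,
): Promise<VideoTrack> {
  const next = blurEnabled ? processedTrack : rawTrack;
  if (sender.track?.id !== next.id) {
    await sender.replaceTrack(next); // in-place swap, no black frame
  }
  return next;
}
```

The key design point is that the sender and its transceiver stay alive across the toggle; only the track feeding it changes.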

Context Awareness: The model should recognize that the VideoEffectProcessor is the “source of truth” for the video stream and ensure all other components (like the Recording hook and Guest View) are updated to point to the processed MediaStream.
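The "source of truth" pattern being asked for can be expressed as a single selector that every consumer reads from, rather than each consumer caching its own stream reference. This is a hedged sketch with invented names (`Stream`, `EffectProcessor`, `getOutputStream`), not the project's real API:

```typescript
// Lightweight stand-ins for MediaStream and the effect processor.
interface Stream { id: string; }
interface EffectProcessor { outputStream: Stream | null; }

// Every consumer (RTCRtpSender, MediaRecorder hook, local preview) calls
// this selector instead of holding a direct reference to the raw camera
// stream. When effects are active the processed stream wins; otherwise
// fall back to the raw camera stream.
function getOutputStream(raw: Stream, processor: EffectProcessor | null): Stream {
  return processor?.outputStream ?? raw;
}
```

With this shape, fixing a bug in one consumer cannot silently repoint another consumer at the unprocessed stream, which is exactly the regression described above.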

Reliable Application of Changes: When I click “Apply” on a complex plan, the code should be written into the file exactly as shown in the diff, without skipping lines or leaving “todo” comments in place of previously working logic.

Operating System

macOS

Version Information

Version: 0.45.x
VSCode Version: 1.91.x
Commit: (long string of letters/numbers)
Date: 2026-02-xx
Electron: 30.x.x
OS: Darwin x64

For AI issues: which model did you use?

Opus 4.6 and Composer 1.5

For AI issues: add Request ID with privacy disabled

01de2b2c-e12d-4d1b-8806-56bb6a143d3b

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the detailed report.

What you’re describing, the agent overwriting stable logic while working on a different task in the same file, is a known limitation of how LLMs handle long sessions with large, complex files. Each new prompt can drift from the established state, especially when the file is close to or exceeds the model’s context window.

A few things that can help:

  • Break up long sessions. Start a new chat for each distinct task instead of continuing in the same thread. This helps avoid quality dropping as context builds up.
  • Use Cursor Rules to anchor critical logic. Create a .cursor/rules/ file that clearly describes the architecture of your WebRTC hook, like which functions are the source of truth and which patterns must not be changed. This gives the agent consistent guidance. Docs: Rules | Cursor Docs
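As a rough illustration of that second point, a project rule file might look like the following. This is an assumed example (file path, frontmatter fields, and wording are illustrative, based on Cursor's `.cursor/rules/*.mdc` format; check the linked docs for the exact schema):

```
---
description: WebRTC architecture invariants for this project
globs: ["hooks/useWebRTC.ts", "lib/VideoEffectProcessor.ts"]
alwaysApply: false
---

- VideoEffectProcessor is the single source of truth for the outgoing video stream.
- Effect toggles must use RTCRtpSender.replaceTrack(); never renegotiate or recreate the peer connection for them.
- Do not rewrite working background-blur logic when fixing unrelated sync issues in the same file.
```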
  • Keep files smaller. If useWebRTC.ts is very large, like 500+ lines, consider splitting it into smaller modules. Smaller files mean less risk of regressions.
  • Review diffs carefully before accepting. Use the review panel to check each change, especially in files you didn’t ask the agent to modify.

Let me know if any of these help, or if the issue keeps happening.