Self-hosted worker stuck on "Waiting for self-hosted worker" after enabling/disabling MCP toggle in cursor.com/agents

Where does the bug appear (feature/product)?

Background Agent (GitHub, Slack, Web, Linear)

Describe the Bug

After enabling the MCP toggle in the cursor.com/agents web UI for a self-hosted worker session and then disabling it, every subsequent self-hosted worker session is permanently stuck at “Waiting for self-hosted worker”. The worker connects fine but is never claimed.

Steps to Reproduce

  1. Set up a self-hosted worker (agent worker start) — sessions dispatch and work normally
  2. In cursor.com/agents, create a new session and enable the MCP toggle (Supabase)
  3. Session gets stuck at “Waiting for self-hosted worker”
  4. Disable the MCP toggle
  5. Archive all sessions, create a fresh one with self-hosted worker selected — still stuck

What the worker logs show

The worker connects successfully and receives heartbeat frames every 30 seconds:
INFO Session state after connect meta={state: "connecting"}
DEBUG Received frame meta={frameCount: 1}
DEBUG Received frame meta={frameCount: 2}

No “Worker claimed by background composer” message ever arrives. The task is never dispatched from the backend to the worker.

What was tried

  • Restarted the worker multiple times — no change
  • Cleared persisted worker ID (agent-cli-state.json) to generate fresh IDs — no change
  • Re-authenticated both on cursor.com and via agent auth on the worker machine — no change
  • Archived all sessions in cursor.com/agents — no change
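For anyone hitting the same symptom, the persisted worker IDs can be inspected before clearing them. This is a hypothetical sketch: the file path and JSON shape are taken from this thread, not from official documentation, and may differ between agent versions.

```shell
# Print the worker IDs persisted by the CLI, if any.
# Path and JSON shape are assumptions based on this thread.
STATE_FILE="${STATE_FILE:-$HOME/.cursor/agent-cli-state.json}"
if [ -f "$STATE_FILE" ]; then
    # Use python3 for JSON parsing so we don't depend on jq being installed
    python3 -c 'import json, sys; print(json.load(open(sys.argv[1])).get("workerIdsByDisplayName", {}))' "$STATE_FILE"
else
    echo "no persisted worker state at $STATE_FILE"
fi
```

If the printed mapping contains IDs that no longer match a running worker, those are the stale entries the recovery steps later in the thread wipe out.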

Expected Behavior

Sessions with self-hosted worker should dispatch normally as they did before the MCP toggle was touched.

Operating System

Linux

Version Information

Agent version: 2026.03.30-a5d3e17

For AI issues: which model did you use?

Composer 2

For AI issues: add Request ID with privacy disabled

  • Worker ID (current): 7aa401fb-3930-428d-b3b2-49fc14347a03
  • Previous worker IDs tried: 67ed39f3-b5e5-4566-a77c-cc359f0a16a7, 2be1f3cc-c7c1-4c60-818f-a6c4beab37e2, fcf0fb2c-08e2-498c-8412-9a0720abe02e

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

Hey, thanks for the detailed report. The worker IDs and repro steps are really helpful.

This is a confirmed bug on our side. What’s happening is that existing sessions still point to a stale worker ID from an older worker instance, and there’s no recovery path. The backend keeps trying to connect to the old worker that no longer exists, even though your current worker 7aa401fb-... is healthy and visible.

The MCP toggle isn’t directly causing this, but it probably triggered a timing issue where the assigned worker disconnected during a config change and the stored worker ID went stale. New sessions might also be picking up snapshot data that references old worker IDs.

I’ve flagged this with the team. No timeline yet, but your report helps us prioritize.

For now, a possible workaround: create a completely new session, not from a snapshot or previous session, and make sure your current worker is the only one that’s ever been registered. If you haven’t already, try revoking and re-generating your worker auth so no old worker IDs are still tied to your account. Let me know if that changes anything.

Thanks a lot @deanrie

I’ve followed your advice on how to recover this situation. Here is what I did, so it can help others:

The fix — exact steps:

  1. Kill the running worker process
  2. Full logout — agent logout (removes auth tokens completely, not just a re-login)
  3. Clear local worker ID state — echo '{"version":1,"workerIdsByDisplayName":{}}' > ~/.cursor/agent-cli-state.json
  4. Fresh login — agent login (generates a new auth token with no stale worker IDs attached)
  5. Start the worker — agent worker start
  6. Create a brand new session in cursor.com/agents (not from a snapshot)

The key step was agent logout before agent login. A re-login alone doesn’t revoke the old token — you need to fully revoke first so the new token has no history of old worker IDs on the backend.
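The recovery steps above can be collected into a script sketch. The agent commands are left as comments because they require the Cursor CLI and an interactive login; the state-file path and JSON shape are taken from this thread, not from official documentation.

```shell
#!/bin/sh
# Sketch of the recovery steps from this thread (assumptions, not official docs).
set -eu

# 1. Kill the running worker process (adjust to however yours was started)
#    pkill -f 'agent worker start' || true

# 2. Full logout -- revokes the auth token; a plain re-login is not enough
#    agent logout

# 3. Clear the persisted worker IDs
STATE_FILE="${STATE_FILE:-$HOME/.cursor/agent-cli-state.json}"
mkdir -p "$(dirname "$STATE_FILE")"
printf '%s\n' '{"version":1,"workerIdsByDisplayName":{}}' > "$STATE_FILE"

# 4. Fresh login, then restart the worker
#    agent login
#    agent worker start
```

After that, create a brand new session in cursor.com/agents (not from a snapshot), as in step 6 above.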

Glad that helped. And thanks for writing out the exact recovery steps, it’s really useful for other users who run into the same thing.

The key point is running agent logout before agent login. A simple re-login doesn’t reset the backend link to the old worker ID.

I’ll mark this as solved. If it happens again, just message me.