Unable to attach Cursor to a running Kubernetes pod

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I recently updated my Cursor IDE to v3.0.9, and I am using the Kubernetes extension (v1.3.26) along with Anysphere Dev Containers (v1.0.32).

My work environment is set up in the following way:

  1. Connecting to a Kubernetes login server using the Remote SSH extension (v1.0.47).
  2. Installing the Kubernetes extension on the login server through the remotely connected IDE.
  3. Accessing the pods from the Kubernetes extension in the IDE.
  4. Attaching the IDE to my pod via Right Click → “Attach Cursor”.

The 4th step does not work for me, and I get the following error from the IDE:

Error running command remote-containers.attachToK8sContainerFromViewlet: [kubectl get] Command failed with exit code 1: stdout: { "apiVersion": "v1", "items": [], "kind": "List", "metadata": { "resourceVersion": "" } }. This is likely caused by the extension that contributes remote-containers.attachToK8sContainerFromViewlet.

I have checked again: the kubectl get command works in the terminal and I can access pods there, but attaching Cursor still fails. This was working in one of the previous versions, so I believe something broke after updating Cursor. I have also verified other details, such as the namespace and cluster, and they appear to be set up correctly.
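For reference, the stdout payload in the error parses as an empty Kubernetes List object, so the extension is receiving zero items back. A quick check (assuming the items array in the payload is empty, as the error suggests):

```python
import json

# The stdout payload from the extension's failed `kubectl get` call,
# as shown in the error message above: a valid but empty List object.
payload = '{ "apiVersion": "v1", "items": [], "kind": "List", "metadata": { "resourceVersion": "" } }'

result = json.loads(payload)
print(result["kind"])        # the response is a well-formed List...
print(len(result["items"]))  # ...but it contains no items at all
```

So kubectl itself is not erroring out; the query the extension runs simply matches nothing.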

Also, manually attaching with Dev Containers gives me the same error, as shown in the screenshot.

Steps to Reproduce

Steps mentioned above

Expected Behavior

Cursor should attach to my currently running pod.

Screenshots / Screen Recordings

Operating System

Linux

Version Information

Version: 3.0.9 (Universal)
VSCode Version: 1.105.1
Commit: 93e276db8a03af947eafb2d10241e2de17806c20
Date: 2026-04-03T02:06:46.446Z (3 days ago)
Layout: editor
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Darwin arm64 25.3.0

Does this stop you from using Cursor

Yes - Cursor is unusable

Hey, thanks for the detailed report. I can see the screenshot with the error.

One interesting thing is that kubectl works in the terminal, but the Dev Containers extension gets an empty list. That can mean the extension is using a different namespace or a different kubeconfig context when it runs kubectl.

A few questions to narrow it down:

  1. What Cursor version were you on before the update, when everything was working?
  2. Can you share the logs from the Output panel? Go to View > Output, pick Anysphere Dev Containers in the dropdown, and copy the full log right after you try to attach.
  3. On the remote host you SSH into, can you check which context and namespace are active in kubeconfig?
kubectl config current-context
kubectl config view --minify
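For reference, those values live in the kubeconfig roughly like this (all names below are placeholders, not from your cluster):

```yaml
apiVersion: v1
kind: Config
current-context: my-context        # what `kubectl config current-context` prints
contexts:
  - name: my-context
    context:
      cluster: my-cluster
      user: my-user
      namespace: my-namespace      # the namespace the extension should pick up
```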

As a workaround, make sure the right context and namespace are selected in ~/.kube/config on the remote host. The Dev Containers extension reads kubeconfig on the remote host over SSH, not locally. If you need to switch:

kubectl config use-context <name>
kubectl config set-context --current --namespace=<namespace>

Connecting to K8s pods through SSH plus Dev Containers is a known tricky area. I’ll pass this to the team, but the logs will help us debug faster.

Hi @deanrie, thanks for your reply. I checked the Output panel for Dev Containers and it shows the logs in the screenshot below. As I am only a user on this cluster, I believe I do not have permission to list namespaces, and it seems Dev Containers runs a command that needs it. I do not remember exactly which previous version I was using, but I think it was released in November 2025, and I did not encounter this issue there. Also, I cannot run kubectl config use-context, as it gives me a permission error: error: open /kubernetes/config.lock: permission denied.

Please let me know if there is a way to resolve this.

I can see the log screenshot, and the issue is clear. The Dev Containers extension runs kubectl get namespaces at the whole cluster level, but your user u-kz2ll doesn’t have permission to list namespaces at cluster scope. Because of that, the extension gets an empty list and can’t find your pod.

This is basically a bug in how the extension handles restricted RBAC permissions. I’ve shared it with the team. There’s no ETA yet, but your report helps with prioritization.
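For context, the usual shape of this restriction looks roughly like the following (illustrative resource names, not taken from your cluster): a namespace-scoped RoleBinding grants pod access inside one namespace, but nothing grants the cluster-scoped `list namespaces` verb, so `kubectl get namespaces` comes back empty.

```yaml
# Namespace-scoped: lets the user work with pods in "my-namespace" only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-access
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec"]
    verbs: ["get", "list", "watch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-access-binding
  namespace: my-namespace
subjects:
  - kind: User
    name: u-kz2ll
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-access
  apiGroup: rbac.authorization.k8s.io
# Listing namespaces would require a ClusterRole/ClusterRoleBinding,
# which this user does not have.
```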

For a workaround: since kubectl config use-context is blocked because the system kubeconfig is read-only (/kubernetes/config.lock: permission denied), try pointing kubectl at a kubeconfig you can write to, via the KUBECONFIG environment variable, before launching:

export KUBECONFIG=~/.kube/config

Or, if you can create a copy of the kubeconfig in your home directory:

cp /kubernetes/config ~/.kube/config
chmod 600 ~/.kube/config
kubectl config set-context --current --namespace=<your-namespace>

After that, the Dev Containers extension should pick up the right namespace without needing to list all namespaces.

If you can’t copy the kubeconfig, let me know which namespace you need, and we can look for another workaround.

I am able to copy the kubeconfig to my home directory and have tried setting the namespace through the steps you mentioned, and also via the environment variable. But the Dev Containers command still executes kubectl get namespaces. Please let me know if there is any other workaround.