I recently updated my Cursor IDE to v3.0.9 and I am using the Kubernetes extension (v1.3.26) along with Anysphere Dev Containers (v1.0.32).
My work environment is set up in the following way:
1. Connecting to a Kubernetes login server using the Remote SSH extension (v1.0.47).
2. Installing the Kubernetes extension on the login server using the remotely connected IDE.
3. Accessing the pods from the Kubernetes extension in the IDE.
4. Attaching the IDE to my pod via Right Click → "Attach Cursor".
The 4th step does not work for me, and I get an error from the IDE saying:
Error running command remote-containers.attachToK8sContainerFromViewlet: [kubectl get] Command failed with exit code 1: stdout: { "apiVersion": "v1", "items": [], "kind": "List", "metadata": { "resourceVersion": "" } }. This is likely caused by the extension that contributes remote-containers.attachToK8sContainerFromViewlet.
But I have checked again and the kubectl get command works in the terminal, and I am able to access pods as well; only attaching Cursor does not work. This was working in one of the previous versions, but after updating Cursor I believe something must have broken. I have also checked other details like the namespace and cluster, and they seem to be correctly set up.
Also manually attaching with Dev Containers gives me the same error as shown in the screenshot.
Hey, thanks for the detailed report. I can see the screenshot with the error.
One interesting thing is that kubectl works in the terminal, but the Dev Containers extension gets an empty list. That can mean the extension is using a different namespace or a different kubeconfig context when it runs kubectl.
A few questions to narrow it down:
What Cursor version were you on before the update, when everything was working?
Can you share the logs from the Output panel? Go to View > Output, pick Anysphere Dev Containers in the dropdown, and copy the full log right after you try to attach.
On the remote host you SSH into, can you check which context and namespace are active in kubeconfig?
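For reference, a quick way to check this on the remote host; these are standard kubectl commands, nothing specific to the extension:

```shell
# Which context kubectl resolves by default
kubectl config current-context

# Which namespace that context pins (empty output means "default")
kubectl config view --minify --output 'jsonpath={..namespace}'
```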
As a workaround, make sure the right context and namespace are selected in ~/.kube/config on the remote host. The Dev Containers extension reads kubeconfig on the remote host over SSH, not locally. If you need to switch:
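A sketch of the switch, where my-context and my-namespace are placeholders for your actual names:

```shell
# Select the context the extension should use
kubectl config use-context my-context

# Pin a default namespace on that context so the extension does not guess
kubectl config set-context --current --namespace=my-namespace
```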
Connecting to K8s pods through SSH plus Dev Containers is a known tricky area. I'll pass this on to the team, but the logs will help us debug faster.
Hi @deanrie, thanks for your reply. I checked the Output panel for Dev Containers and it shows the logs in the screenshot below. As I am only a user of this cluster, I believe I do not have permission to list namespaces, and it seems Dev Containers runs a command to access them. I do not remember exactly which previous version I was using, but I think it was released in November 2025 and I did not encounter this issue there. Also, I cannot run kubectl config use-context as it gives me a permission error: error: open /kubernetes/config.lock: permission denied.
Please let me know if there is a way to resolve this.
I can see the log screenshot, and the issue is clear. The Dev Containers extension runs kubectl get namespaces at the whole cluster level, but your user u-kz2ll doesn’t have permission to list namespaces at cluster scope. Because of that, the extension gets an empty list and can’t find your pod.
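If you want to confirm the RBAC gap yourself, kubectl auth can-i shows it directly; my-namespace below is a placeholder for your actual namespace:

```shell
# Cluster-scoped namespace listing - this is what the extension attempts
kubectl auth can-i list namespaces

# Pod listing inside your own namespace - what attaching actually needs
kubectl auth can-i list pods --namespace my-namespace
```

Given your log, the first command should print "no" for user u-kz2ll while the second prints "yes".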
This is basically a bug in how the extension handles restricted RBAC permissions. I’ve shared it with the team. There’s no ETA yet, but your report helps with prioritization.
For a workaround, since kubectl config use-context is blocked because the kubeconfig is read-only (/kubernetes/config.lock: permission denied), try making a writable copy of the kubeconfig in your home directory and pointing kubectl at it with the KUBECONFIG environment variable before launching Cursor:
export KUBECONFIG=~/.kube/config
Once kubectl is using your own copy, you can set the context and default namespace in it with kubectl config use-context and kubectl config set-context.
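The full sequence might look like this; the source path /kubernetes/config is inferred from the lock-file error in your log, and the context/namespace names are placeholders:

```shell
mkdir -p ~/.kube
cp /kubernetes/config ~/.kube/config   # writable personal copy
chmod 600 ~/.kube/config               # keep credentials private
export KUBECONFIG=~/.kube/config       # point kubectl at the copy

# These now succeed, because the copy (and its .lock file) live in your home dir
kubectl config use-context my-context
kubectl config set-context --current --namespace=my-namespace
```

Note that the extension only inherits KUBECONFIG if Cursor's remote server is launched from a shell where it is exported, so adding the export line to ~/.bashrc (or ~/.profile) on the remote host may be the safer route.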
I am able to copy the kubeconfig to my home directory and have tried setting the namespace through the steps you mentioned, and also using the environment variable. But the Dev Containers command still executes kubectl get namespaces. Please let me know if there is any other workaround.