Security Gatekeeper: safeguard against obfuscated malicious code

Feature request for product/service

Cursor IDE

Describe the request

There are attacks where inspecting generated code with the naked eye ranges from hard to impossible. A model such as ShieldGemma or gpt-oss-safeguard could serve as a second "guard/filter LLM" that inspects the code produced by the main LLM and prevents it from being infected.

Security Report: https://gemini.google.com/share/35918eaa1fca
My LinkedIn post on the topic: (LinkedIn link)
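The gatekeeper idea above can be sketched as a simple pipeline: the main model's output is handed to a second verdict function before it ever reaches the user. This is a minimal, hypothetical sketch; `guard_llm_verdict` is a regex heuristic standing in for a real safety-model call (e.g. to ShieldGemma or gpt-oss-safeguard), whose actual API is not assumed here.

```python
import re

# Heuristic stand-in for the second "guard" LLM. A real implementation
# would send the generated code to a safety model and parse its verdict;
# these patterns only illustrate the kind of obfuscation it would catch.
SUSPICIOUS_PATTERNS = [
    r"\beval\s*\(",                     # dynamic evaluation
    r"\bexec\s*\(",                     # dynamic execution
    r"base64\.b64decode",               # common obfuscation layer
    r"subprocess\..*shell\s*=\s*True",  # shell-injection surface
]

def guard_llm_verdict(code: str) -> bool:
    """Return True when the (stand-in) guard model flags the code."""
    return any(re.search(p, code) for p in SUSPICIOUS_PATTERNS)

def gatekeeper(generated_code: str) -> str:
    """Release generated code only if it passes the guard check."""
    if guard_llm_verdict(generated_code):
        raise ValueError("generated code flagged as potentially malicious")
    return generated_code
```

The key design point is that the guard runs on the finished artifact, not the prompt, so obfuscation introduced during generation is still visible to it.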

Hey, thanks for the feature request!

This is an interesting idea for adding an extra layer of security for AI-generated code. Using a second LLM (like ShieldGemma or gpt-oss-safeguard) as a gatekeeper to scan code for obfuscated threats is a valid approach.

Right now, Cursor has some basic security measures (approval required for terminal commands, controls for network requests), but there isn’t an automatic scan of generated code for malicious behavior. More details: Agent Security | Cursor Docs

I’ll pass your request to the team for review. Your links to the Security Report and the LinkedIn post will help us assess how important this feature is.


Thanks, Dean Rie,

Your reply gives me hope.

Cursor IDE's recent updates delivered something no other IDE can offer me right now: a very fast, focused flow experience. A polished tool. I am really rooting for you guys; I can see the competition is brutal. Google's Antigravity benefits from Google's monopoly and gives a lot of tokens, but the IDE is totally clumsy and slow compared to Cursor.
