There are attacks where inspecting generated code with the naked eye ranges from hard to impossible. ShieldGemma / gpt-oss-safeguard / others could serve as a second "Guard/Filter LLM", inspecting code created by the main LLM and preventing infected code from getting through.
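To make the idea concrete, here is a minimal sketch of the gatekeeper pattern. Everything here is hypothetical: `guard_verdict` stands in for a real call to a safety model such as ShieldGemma or gpt-oss-safeguard, and a crude keyword heuristic substitutes for the model itself.

```python
# Sketch of a "Guard/Filter LLM" gate for AI-generated code.
# guard_verdict() is a hypothetical placeholder: in a real setup it
# would send the code to a safety model (e.g. ShieldGemma or
# gpt-oss-safeguard) and parse its verdict. A simple keyword
# heuristic stands in for the model here.

SUSPICIOUS_MARKERS = ("eval(", "exec(", "base64.b64decode", "| sh")

def guard_verdict(code: str) -> bool:
    """Return True if the code looks safe, False if it should be blocked."""
    return not any(marker in code for marker in SUSPICIOUS_MARKERS)

def gated_generation(generated_code: str) -> str:
    """Release code only if the guard approves it; otherwise refuse."""
    if guard_verdict(generated_code):
        return generated_code
    raise ValueError("Guard LLM flagged generated code as potentially malicious")
```

The point of the pattern is that the gate sits between generation and acceptance, so obfuscated payloads a human reviewer would miss can still be rejected automatically.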
This is an interesting idea for adding an extra layer of security to AI-generated code. Using a second LLM (like ShieldGemma or gpt-oss-safeguard) as a gatekeeper that scans code for obfuscated threats is a valid approach.
Right now, Cursor has some basic security measures (approval required for terminal commands, controls for network requests), but there is no automatic scan of generated code for malicious behavior. More details: Agent Security | Cursor Docs
I’ll pass your request to the team for review. Your links to the Security Report and the LinkedIn post will help us assess how important this feature is.
Cursor IDE's recent updates delivered something no other IDE can offer me right now: a very fast, focused flow experience. A polished tool. I'm really rooting for you guys; I can see the competition is brutal. Google's Antigravity benefits from Google's monopoly and gives a lot of tokens, but the IDE is clumsy and slow compared to Cursor.