Is anyone else worried about "AI Package Hallucination" in their builds?

I’ve been vibe coding a few projects lately and noticed the AI occasionally suggests NPM/Python packages that don’t actually exist (or are deprecated/insecure).

I’m thinking of building a tiny, $0 scanner (a GitHub Action or CLI) that automatically scans package.json / requirements.txt after an AI edit to flag “fake” or high-risk libraries before you deploy.
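The core of something like this could be pretty small. Here's a rough Python sketch of the first half of that idea, just to make it concrete: pull dependency names out of a package.json / requirements.txt and build the public registry URL you'd probe (a 404 from registry.npmjs.org or pypi.org is a strong hallucination signal). All function names here are hypothetical, not an existing tool:

```python
import json
import re

def npm_deps(package_json_text):
    """Return dependency names from a package.json string."""
    manifest = json.loads(package_json_text)
    deps = {}
    for key in ("dependencies", "devDependencies"):
        deps.update(manifest.get(key, {}))
    return sorted(deps)

def pypi_deps(requirements_text):
    """Return package names from a requirements.txt string."""
    names = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments/blank lines
        if not line:
            continue
        # strip version specifiers and extras: "foo[bar]>=1.0" -> "foo"
        match = re.match(r"[A-Za-z0-9._-]+", line)
        if match:
            names.append(match.group(0))
    return names

def registry_url(name, ecosystem):
    """URL whose 404 would mean 'this package does not exist'."""
    if ecosystem == "npm":
        return f"https://registry.npmjs.org/{name}"
    return f"https://pypi.org/pypi/{name}/json"
```

From there it's basically one HEAD/GET per new dependency, which is why I think the "1-second check" framing is realistic.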

Would this be useful to anyone here, or am I overthinking the security side of vibes?

Which model? I used to get this behaviour with gpt-4.1 in the past. With the newer models it’s much less common.

That’s a fair point. The top-tier models are getting better, but I’m still seeing ‘hallucinated’ packages when using smaller/faster models for quick iterations (or when the prompt gets too complex).

I’m thinking of a ‘Safety Net’ that works regardless of the model—basically a 1-second check before you npm install something that might not exist. Would a ‘Vibe-Check’ CLI that flags these in your terminal be overkill, or a nice peace-of-mind tool?
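For the existence check itself, a sketch of what I had in mind (again hypothetical, just assuming npm's public JSON registry where a missing package returns 404; the fetch hook is injectable so it's testable without hitting the network):

```python
from urllib.request import urlopen
from urllib.error import HTTPError

def exists_on_registry(name, fetch=None):
    """Return True if registry.npmjs.org knows this package name.

    `fetch` takes a URL and returns an HTTP status code; the default
    issues a real GET against the public npm registry.
    """
    url = f"https://registry.npmjs.org/{name}"
    if fetch is None:
        def fetch(u):
            try:
                with urlopen(u) as resp:
                    return resp.status
            except HTTPError as err:
                return err.code
    return fetch(url) == 200
```

Run that over every dependency the AI just added and print a warning for anything that 404s, before you ever run npm install.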