Idea: shadcn for backend logic (using LLM manifests). Thoughts?

Hey everyone,

I wanted to get your thoughts on an idea to fix AI assistants hallucinating API integrations.

Whenever I ask an AI editor to wire up complex stuff like Stripe webhooks or Auth, it tends to invent methods unless I manually paste the right docs into the context.

So I built a prototype called Radzor: https://radzor.io

The concept is basically shadcn/ui but for logic blocks.

The main twist: instead of human-readable docs, every component downloads with a radzor.manifest.json. It’s a strict contract (inputs/outputs/events) designed specifically for the LLM.
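To give a rough idea, a manifest might look something like this (heavily simplified, and the exact field names are illustrative rather than the final schema):

```json
{
  "name": "audio-capture",
  "version": "0.1.0",
  "inputs": {
    "deviceId": { "type": "string", "required": false }
  },
  "outputs": {
    "stream": { "type": "MediaStream" }
  },
  "events": {
    "onError": { "payload": { "code": "string", "message": "string" } }
  }
}
```

The point is that every input, output, and event is enumerated explicitly, so there is nothing left for the model to infer.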

You just run npx radzor add audio-capture to get the code + manifest locally, then tell your AI: “read the manifest and wire it up”. Because the LLM reads the exact specs straight from the file, it wires things up against the real interface instead of guessing at method names.

Does this approach make sense to you? Would love to know if you’d actually use this workflow or if there’s a better way to handle AI context.