The idea for “canned prompts” here is awesome. tl;dr, the idea is to add the ability to create repeatable prompts for the LLM, triggered by a keybind, a mention (like the existing @ or # mentions), or a dropdown of some sort.
To expand on this idea, I think the key benefit of making this a built-in function rather than having the LLM read a prompt from another file is context resolution: if the LLM just reads the prompt file as text, it won’t go on to retrieve the other specific files that prompt needs.
Having a repeatable prompt that lets you reference the same files again and again would be a huge quality-of-life feature. Tons of repeated operations (e.g., refactoring, adding logging) could be done quickly through a single prompt that supplies sufficient context.
Actually, when I first heard about the Composer feature in Cursor, I thought it was this.
GPTs and Claude Projects are annoying to maintain because the file context you give them isn’t updated automatically.
Our team’s workflow for user stories is to attach a ChatGPT conversation or Claude Project to each story, so we can always go back and make a quick fix or adjustment because the chat already has all the proper context.
The workflow Cursor should support is building a saved, organized list of these projects, so that you can start working on a particular feature or fix without having to regather and resupply context to the chat.
Out of interest, could you explain further why you think this example approach would not do the job?
I think you mentioned that @-ing “canned prompt” files wouldn’t be as good as a UI feature backed by a behind-the-scenes function, because the canned-prompt files couldn’t reference other required context? Is that a correct understanding of what you were saying?
Have you tried creating `perform_magic.md`, putting some prompts in it, and including references to other folders or files, to see whether it incorporates that context when you `@perform_magic.md` in a chat prompt?
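For concreteness, here’s a sketch of the kind of file I mean. The file name, paths, and @-mentions are all hypothetical, and whether Cursor actually resolves @-mentions nested inside an @-mentioned file is exactly the open question:

```markdown
<!-- perform_magic.md — hypothetical canned-prompt file -->

Refactor the selected code to add structured logging.

Follow the logging conventions described in
@docs/logging-conventions.md, and keep the changes
consistent with the existing helpers in @src/utils/log.ts.
```

If the nested @-references resolve, this already covers the “canned prompt that pulls in its own context” use case; if they don’t, that would support the argument for a dedicated feature.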