Since GPT-4 is very steerable, I would love the ability to save and edit custom prompts in Cursor, so that common instructions can be reused and included in the chat using @Prompts or something similar (just like @Docs, for example).
I guess this is already possible by maybe creating text files of instructions and using @Files but a dedicated @Prompts tag + add/edit/delete functionality would be much better UX for creating reusable custom instructions that are accessible from any project in Cursor.
I ran into an issue when trying out the "rules for ai" feature, which is basically a custom prompt: it often interfered, outputting code (since that was the rule) even when I just wanted plain text.
So one custom prompt effectively limited me to a single purpose, and changing or disabling it was tedious. In the end I decided not to use it anymore.
An @Prompt feature with different pre-defined prompts for different purposes would be great!
Edit: Just found a comment on the same topic. How big is the impact of a custom prompt?
Most of the time gpt understands what I want, or I have to iterate once or twice.
Regarding the requested feature of enabling custom prompts within Cursor:
Several workarounds have been suggested, such as canned prompts, using Espanso, or leveraging notepads. While these solutions might work in specific scenarios, they are not streamlined. To ensure an efficient workflow, I believe this feature should include the following key capabilities:
- First-party integration within the Cursor IDE with intuitive triggering via @ commands, e.g., @Prompt.
- Per-project customization using version control, allowing prompts to be stored and managed alongside project files.
- Support for referencing external files or other prompts via markdown links or @ commands.
- System variable interpolation, such as ${user_input}, to dynamically insert user input into prompts.
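As an aside, the proposed `${user_input}` placeholder happens to match the syntax of Python's `string.Template`, which gives a feel for how cheap the interpolation step would be to implement. A minimal sketch (the template text is just an example, not anything Cursor actually does):

```python
from string import Template

# A prompt template using the proposed ${user_input} placeholder.
prompt = Template(
    "Refine the following user input for clarity.\n"
    "\n"
    "**User input:**\n"
    "${user_input}"
)

# Interpolate whatever the user typed after the @Prompt command.
expanded = prompt.substitute(user_input="How do I structure my Flask app?")
print(expanded)
```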
**Proposed Workflow**
Users define prompt files within the repository, possibly inside .cursor/prompts/, similar to how .cursor/rules/ allows per-project rules.
Example prompt file:
```markdown
// .cursor/prompts/my-prompt.md
Refine the following user input for clarity.
Take into account the current tech stack: @docs/tech-stack.md

**User input:**
${user_input}
```

(Alternatively, the file reference could use a markdown link: `[tech-stack](../docs/tech-stack.md)`)
Users reference the prompt in the chat box using the @ symbol:
```
@my-prompt [the question to be refined...]
```
When sent to the LLM, this expands into:
```
Refine the following user input for clarity.
Take into account the current tech stack:
[... interpolated content from docs/tech-stack.md]

**User input:**
[... interpolated question]
```
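The expansion described above amounts to a small preprocessor: load the prompt file, inline any `@path` file references, then substitute `${user_input}`. A hypothetical sketch, assuming a simple `@path/to/file.md` reference syntax (the function name and regex are my own illustration, not Cursor's actual behavior):

```python
import re
from pathlib import Path
from string import Template

def expand_prompt(prompt_path: str, user_input: str) -> str:
    """Expand a prompt file: inline @file references, then fill ${user_input}."""
    text = Path(prompt_path).read_text()

    # Replace each @some/file.md reference with that file's contents,
    # leaving the reference untouched if the file does not exist.
    def inline(match: re.Match) -> str:
        ref = Path(match.group(1))
        return ref.read_text() if ref.is_file() else match.group(0)

    text = re.sub(r"@([\w./-]+\.md)", inline, text)

    # Fill in the user's question; safe_substitute leaves unknown
    # placeholders alone instead of raising.
    return Template(text).safe_substitute(user_input=user_input)
```

So typing `@my-prompt [question]` in the chat box would, in effect, run `expand_prompt(".cursor/prompts/my-prompt.md", question)` before the message reaches the model.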
Any update on this? Sure, keeping a `prompts.md` file on the side and copy-pasting is not that painful, but having it built-in would be nice, and could also open up interesting templating features.