Special Cursor prompts to trigger upon agent completion

Currently, after the Cursor agent finishes executing a task, any follow-up actions require manual input. It would significantly improve workflow efficiency if we could define a special category of Cursor rules to automatically trigger additional prompts whenever the agent completes execution. The usefulness of this is best illustrated by the example below:

After iterative tasks, we frequently end our conversations with the following documentation and summarization prompt.

Example Use Case:
When the agent completes a feature or bug-fix development session, automatically trigger this prompt:

Above is a detailed conversation transcript documenting the iterative development, discussion, and implementation of a feature or bug fix. Throughout this session, initial assumptions or plans may have evolved based on experimentation, practical implementation challenges, or newly uncovered insights.

Please create a concise executive summary in bullet points, covering only the categories below that actually contain relevant findings from this session:

  • Major deviations from the initial plan (only if deviations occurred).
  • Critical insights discovered through experimentation or implementation (only if notable insights emerged).
  • Notable assumptions or hypotheses that proved inaccurate and required adjustments (only if any assumptions required revision).
  • Important lessons or improvements suggested by practical experience during this session (only if valuable lessons or improvements were identified).

Avoid including or forcing entries for categories that do not contain genuine or meaningful insights. Focus specifically on capturing important details or knowledge not yet reflected in official documentation and not obvious from the code alone. The objective is to enrich our documentation and knowledge base effectively and accurately, so that the next developer in charge of maintaining and extending the codebase is set up for success.
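
To make the idea concrete, here is a hypothetical sketch of what such an on-completion rule could look like as a rule file. The `trigger: agent-completion` field and the automatic-execution behavior are proposed syntax for this feature request, not something Cursor supports today:

```
---
# Hypothetical on-completion rule (proposed syntax, not an existing Cursor feature).
# The idea: when the agent finishes a task, Cursor would automatically send the
# rule body below as a follow-up prompt in the same conversation.
description: Post-session documentation summary
trigger: agent-completion   # proposed field; does not exist in Cursor today
---
Above is a detailed conversation transcript documenting the iterative development,
discussion, and implementation of a feature or bug fix. Please create a concise
executive summary in bullet points, covering only the categories that actually
contain relevant findings from this session.
```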

Thanks for your great feature request :slight_smile:

I personally think this would be a great addition, not just for summaries but also for checking whether a prompt succeeded, and for other cases.

Hope others also upvote it.