Prompts Library — Research-backed prompt templates for Cursor

Hey everyone! 👋

I’ve been building a collection of prompt templates specifically designed for Cursor and Claude, and I wanted to share it with the community.

What is it?

Prompts Library — A set of battle-tested prompts for software engineering workflows.

What’s different?

Most prompt collections consist of “Act as X”-style prompts. This library focuses on structured engineering workflows with verification built in:

  • Chain of Verification (CoVe) — Prompts check their own work before reporting
  • Spec-first approach — Define what “done” looks like before coding
  • Research-backed — Every pattern cites papers (META 2023, PKU 2024, etc.)
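As an illustration of the Chain-of-Verification pattern described above (a hypothetical sketch, not a verbatim template from the repo), such a prompt typically splits the work into a draft step and an independent checking step:

```text
1. Draft: Answer the task directly.
2. Plan verification: List 3–5 concrete questions that would expose
   errors in your draft (e.g. "Does this function exist in the stdlib?").
3. Verify: Answer each question independently, without assuming the
   draft is correct.
4. Revise: Produce a final answer, correcting anything the
   verification step contradicted.
```

The key property is step 3: verification questions are answered without reference to the draft, so the model cannot simply restate its first answer.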

Available Prompts

| Command | What it does |
|---|---|
| “Run Audit” | Deep code audit for Go/K8s |
| “Review PR #123” | Systematic PR review |
| “Plan Mode” | Two-phase planning workflow |
| “Git Polish” | Clean up messy commits |
| “Pre-Flight” | Scan a new codebase |
| “Create prompt for…” | Generate task prompts |

Includes Cursor Rules

The repo has ready-to-use rules you can paste into Settings → Rules → User Rules:

  • Depth-forcing rules (enumerate ≥3 options before deciding)
  • Token optimization (prefer file references over pasted code, tables over prose)
  • Verification gate (fact-check claims independently)
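For example, a depth-forcing rule pasted into User Rules might read something like this (a hypothetical sketch of the pattern, not necessarily the repo's exact wording):

```text
Before committing to any design or implementation decision, enumerate
at least 3 distinct options. For each option, state one advantage and
one drawback. Only then pick one, and say why it beats the others.
```

Because User Rules apply to every chat, short imperative rules like this tend to work better than long essays.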

Would love feedback! What workflows would you like to see added?

GitHub: ArangoGutierrez/promptsLibrary — Research-backed AI prompt templates for software engineering workflows, designed for Cursor IDE and Claude.