Is there a better tool out there than Cursor?

It’s quite clear now that the Cursor team has lost its way. A year ago this was a promising tool for software development. Lately, updates have caused multiple problems:

  • Too agentic a workflow. Cursor agents cannot follow instructions anymore; they don’t respect any prompts given. There is no way to run a meaningful workflow anymore, as the agentic system thinks it knows best what the user needs.
  • There is no way to have a “discussion” with the model. It always starts making modifications, even when asked only to search for things.
  • In general, the Cursor team is too arrogant in trying to make the agentic workflow the master. Unfortunately, the agents and models are not smart enough to solve complex problems. This leads to a huge number of unneeded iterations where the user has to revert invalid changes and do the back-and-forth dance.
  • This burns credits really fast; Cursor is becoming more expensive to use every day while providing less help to the user.
  • The internal composite model is even worse. At least the Claude and GPT models are capable in discussion, although the Cursor team’s agents try to remove that aspect from the dialogue.
  • It’s clear the idea here is to turn this into some kind of vibe-coding framework where you can create TODO tutorials, not a tool for actual software development and code refactoring.

We are not a big team, around 20 people or so, but we would really like to hear suggestions about tools that actually work. We would like to use something that helps the workflow rather than actively trying to sabotage the work.

Are there tools that at least somewhat respect user prompts and are more suitable for a pair-programming kind of workflow, where the agent doesn’t think it knows everything?

8 Likes

Sonnet 4.5 is able to discuss just fine in Plan Mode. And in Ask (or Plan) Mode it definitely will not make modifications.

But models other than Sonnet? Not so great. Some have nice speed, but they all fail to give the same feeling of explaining what they are doing or implementing. Codex is, in my opinion, maybe the worst.

2 Likes

Agree. Sonnet 4.5 works OK-ish. But the incompetent agent workflow makes it burn money really fast, as the agent tries to solve everything on its own, often heading really fast in the wrong direction.

That back and forth burns credits way too fast.

I think this is the main point too. They have accumulated enough of a user base. Now it’s time to start pumping money.

2 Likes

Credits go, yes, I agree on that. Not sure if it is an “error” or if the job just needs the tokens, but those tokens are relatively expensive. But hey, like you say, it is doing OK-ish.

My main problem with it is the speed, though. I notice that I can put in maybe 20 prompts per day before my work hours are used up. It is not because I couldn’t do more, but because I’m forced to wait for one prompt to finish. While waiting, I find other amusements for myself, like reading the Cursor forum. Then when I come back, I have wasted minutes or sometimes even hours.

And I do need many prompts to fine-tune things, so in the end nothing seems to get completed.

I came to the Cursor forums to see if this was as common for everyone else as it was for me: constantly having to revert changes because the model “decides” to do things on its own terms, ignoring my rules, even apologizing and explicitly stating that it broke my rules and chose to ignore them! When I reached out to [email protected], I was told that no credit could be given because the models were used: even if they produce errors, it costs them, so it costs you. Loaded BS. I have to keep adapting my ways of working with it, and over time that grows tiresome and frustrating. I pay for the Ultra plan plus more usage when really getting “plugged in”. It’s absurd that the features they present to users for “more control over the models” are overridden by the models themselves: “well, I just ignored your rules and corrupted your code”, “I also made whatever changes made sense to me (AI model)”. Seriously, over the last several months, probably $400+ wasted on bogus charges because the models simply violated my predefined rules.

2 Likes

I think a major part of this is that these tools are progressing too fast. Features and models come and go, and there is a ton of hidden agency going on behind the scenes. In the end, people working on software still need stable tools. Starting each workday trying to figure out how hammer 3.7 works today, because it’s completely different than it was yesterday, drains all mental power.

Maybe they should move to some kind of experimental and stable release channels.

But most likely the Cursor team is just enjoying the vibe coding. The end result for customers is not a priority.

The other thing is the internal composite model. The plan is most likely to push people to use, and thereby train, the composite model. They want to create their own model so they don’t have to pay for Claude or OpenAI. It makes sense money-wise, but users won’t be happy.

2 Likes

Most of these issues are more about the models than about Cursor, I think. Humanity has not made the perfect model yet, so we just have to wait and see. Meanwhile, I’m very happy we got even this far. Those older versions of GPT, and even Claude, were so lazy and frustrating by comparison.

2 Likes

I recently started using Claude Code. I know Cursor has the model available, but I just found that all the editing, debugging, and general VS Code features (never thought I’d say this) are no longer necessary.

Once you take the new coding paradigm to its limit, it becomes obvious that so much of VS Code is designed for the non-AI programmer. At some point I stopped actually editing files, and it became clear that the terminal-based approach of Claude Code is all you need.

I do like the pricing model of Claude Code. It forces me to take breaks without slowing down my coding effort.

I’ve also been somewhat frustrated with Cursor after the 2.0 release (and for me personally, the Agents tab is pretty much useless, and don’t get me started on the Sandbox), but it still feels like the best option right now.

Putting the Agents tab aside, when it comes to performance in the Editor tab, for me it really comes down to how you engage with the models.

For example, I only use Agent Mode when the task is fairly straightforward (few files involved, relatively simple logic).

For medium-complexity tasks, Plan Mode works well, but it helps to initially explore the topic with Ask Mode.

Plan Mode still does not seem to work well for high-complexity tasks. The model feels almost too eager to finish the plan and misses important elements.

For those, you can try Agent Mode, pass it relevant files (via drag & drop), and then ask the model to only write a Markdown document. If you don’t like writing in Markdown, you can open the document in Obsidian in parallel and collaboratively review and edit the spec until it fits the requirements. This is also what I did before Plan Mode was available.

Agent Mode also works great for DevOps tasks (like working on a K8s cluster), but you need to review and approve every command.

In my experience, Windsurf follows rules better, but it is slower than Cursor. You’re 100% right that the Cursor team is arrogant.

1 Like

I kind of don’t like that they force daily and weekly limits on you; it’s just absurd and hurts productivity badly. I usually want to work on my coding tasks for an hour or more at a time, and sometimes just asking a few questions exhausts my credits, when I didn’t even get to code. How annoying that is (I am not on the Claude Pro plan).

At best you can use an OpenCode/Zed/Claude Code bridge and local LLMs, if you have a beastly system, for unlimited token generation with Qwen coder models.

I use the Claude Pro plan. $20 goes a long way for me with this plan.

I have tried using gpt-oss:120b locally and it does work, but not as well as the big models.

Can you copy and paste images with Claude Code?

I like warp.dev

Are you using commands?

It’s extremely reliable when given the right commands. I have a command to “plan” and another to “analyze”, and I run them in Agent Mode (I never use Plan or Ask modes). Both commands never edit files; they just create plans and answer questions.

The only time they occasionally deviate is when I have a really long chat with them and the original command starts to lose its importance in the message context, because old messages carry less and less weight as the context grows.

Some models are better at following instructions; Sonnet 4.5 is my favorite. Agents are rubbish unless you give them strict rules to follow and oversight.

Several of my commands have human checkpoints built into them, and the models reliably stop and ask for permission. For example, my deploy command asks for permission to proceed at three stages of deployment. Other commands stop and prompt me to run further commands: database changes are always done with my database-change command, so those commands stop and prompt me to add it to the chat before continuing.

And now my plan command splits plans up by command and organizes chunks that can run at the same time, so I can run multiple agents to work faster.
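For readers wondering what such a command might look like, here is a minimal hypothetical sketch of a command file with human checkpoints. The filename, step names, and wording are purely illustrative, not the actual command described above:

```markdown
# Command: deploy

You are running a deployment. Follow these steps IN ORDER.
Never edit files outside the steps below.

1. Run the test suite and report the results.
   - CHECKPOINT: stop and ask for permission before continuing.
2. Build the release artifact and summarize what changed.
   - CHECKPOINT: stop and ask for permission before continuing.
3. Deploy to staging and verify the health checks.
   - CHECKPOINT: stop and ask for permission before deploying to production.

If any step requires a database change, STOP and ask the user to add the
database-change command to this chat before proceeding.
```

The checkpoints work because each one is an explicit instruction to stop and wait, rather than relying on the model to infer when approval is needed.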

1 Like

I agree with everything pt-kojoa says… there should be a refund mechanism every time we’re forced to restore a checkpoint after the agent’s hallucinating coding ramblings.

I’m hesitating to switch from Cursor to VS Code + Claude Code. The cost of using Cursor has exploded; I’ve gone from roughly $40 per month to $150 per week. It’s way too expensive. Has anyone already made this switch and could share feedback on how it affected daily costs?

Would you be OK with sharing your plan command?

Command: plan-feature

:warning: CRITICAL: PLANS ARE SPECIFICATIONS ONLY

:police_car_light: NO CODE, NO SQL, NO HARDCODED SCHEMAS

  • :cross_mark: No SQL migration code | :cross_mark: No TypeScript/JavaScript | :cross_mark: No hardcoded schema fields
  • :white_check_mark: Fetch database schema dynamically (when needed) | :white_check_mark: Describe conceptually | :white_check_mark: Reference patterns

Purpose: Transform user ideas or audit reports into structured implementation plans
When to use: User provides feature idea, brain dump, or audit file
Output: MULTIPLE planning documents in docs/plans/ - one per phase step (1.1, 1.2, 2.1, etc.)

Execution Contract

| Aspect | Value |
| --- | --- |
| Scope | Specification-only, no executable code or SQL |
| Primary tools | `read_file`, `grep`, `codebase_search`, `list_dir`, `glob_file_search`, `mcp_supabase_execute_sql` (SELECT only), `docker exec psql` (SELECT only) |
| Forbidden | Schema migrations, seed files, code edits, `.env*` changes, browser tools |
| Effort budget | ≤15 tool calls, ≤15 minutes (unless a deep multi-module plan is requested) |
| Mode | Targeted discovery (tables/pages/components); avoid full-repo sweeps unless the scope is broad |

Tool Usage

| Use | Avoid |
| --- | --- |
| `src/types/database.types.ts` as primary schema reference | Creating code/SQL in plan files |
| `read_file` / `grep` / `codebase_search` for components/pages/patterns | Heavy repo-wide searches when the user specified files/routes |
| Database queries (SELECT only) for live schema/data examples | |

Scope of Schema & Discovery

| Scenario | Action |
| --- | --- |
| Feature involves tables/columns/relationships/migrations/RLS/seeds | Fetch schema |
| Pure UI/routing, user confirms “no database changes” | Schema fetch optional |
| No obvious existing pattern/component fits | Run full discovery |
| User provided similar pages/components | Keep discovery minimal, start from those |

Why multiple docs: Each doc = one agent task | Enable true parallel execution | Clear dependencies
Note: Audit files are deleted only if they were read and used as input, and only after plan files are created successfully.


Command Trigger

Activates when: User calls @plan-feature


Workflow

Fetch Schema → Discover → Clarify → Confirm → Create Plan → Delete Audit → Ready

Key Principles: Always fetch fresh schema | Plans define WHAT and WHY, not HOW | No code/SQL in plans | Reference patterns | Decisions only


Communication

Output: Questions in structured format (table/bullets) | Concise confirmations | No explanations of process


Phase 0: Fetch Database Schema

:police_car_light: ALWAYS RUN - Required for all features (API routes, components, database changes)

| Step | Action | Details |
| --- | --- | --- |
| 0 | Check types file | Read `src/types/database.types.ts` (auto-generated, current schema) |
| 1 | Get schema | `docker exec supabase_db_anomaly psql -U postgres -d postgres -c "\dt"` (list tables), `\d+ table_name` (table details), `\d+ public.*` (all public tables) |
| 2 | Analyze structure | Naming conventions, relationship patterns, field types, enums/options tables, RLS patterns |

| Capture | Analyze For |
| --- | --- |
| Table names, columns/types, PKs, FKs, indexes, constraints | Naming conventions, relationship patterns, common field types, enums/options tables, RLS patterns |

Phase 1: Codebase Discovery

| Step | Action | Search For |
| --- | --- | --- |
| 1 | Read docs | `docs/quick-reference.md` (components), `docs/guides/*.md` (patterns), `docs/database/*.md` (schema), `docs/architecture.md` (tech stack) |
| 2 | Search components | `src/components/` - Form patterns (TextInput, EmailInput, selectors), Table/list (ListPageCard, tables), Data display (cards, detail pages), Interactive (modals, sheets, dialogs), Navigation (tabs, filters) |
| 3 | Search pages | `src/app/` - Similar entity pages, list pages (filters/search), detail pages (tabs), form submission patterns, API route patterns |
| 4 | Identify patterns | List page (filter/search/table), detail page tabs, slide-in form, delete confirmation, searchable selector, table pattern |
| 5 | Note reusable | Document: existing components, similar pages/patterns, relevant guides |

Database Patterns

| Pattern | Convention |
| --- | --- |
| Soft delete | `is_deleted` |
| Timestamps | `created_at`, `updated_at` |
| User relationships | `created_by`, `user_id` |
| RLS | Policies |

Phase 2: Clarification

| Step | Action | Details |
| --- | --- | --- |
| 1 | Read input | User brain dump, audit report (`docs/reports/*-report-*.md`), existing plan |
| 1 | Identify report | Track `*-report-*.md` files for deletion after plan creation |
| 1 | Analyze | Clear/unclear requirements, missing info, ambiguous logic, assumptions |
| 2 | Match patterns | Can reuse components? Similar pages? DB conventions? API structure? |
| 2 | Reduce questions | Default to discovered patterns, use existing components, follow conventions, ask only about unique requirements |
| 3 | Generate questions | Requirements not covered by patterns, business logic, unique data, different user flows |

| Input Type | Description |
| --- | --- |
| Reports (audits) | List problems/fixes, no implementation; INPUT to planning |
| Plans (this command) | Implementation roadmap with phases and command keywords; OUTPUT |

| Category | Questions |
| --- | --- |
| User Roles | Who accesses? Roles? Permissions? |
| Data Model | Tables/fields? Relationships? New/existing? |
| UI Flow | Pages? Actions? Navigation? |
| Business Logic | Rules? Calculations? Validations? |
| Edge Cases | No data? Errors? Duplicates? |
| Dependencies | Other features? New tables required? |
| Permissions | Who can view? Edit? Delete? |

Step 4: Ask Questions

Show discovery context:

```markdown
## Feature Planning: [Name]
### Discovered Patterns
**Components to reuse:** - ComponentA, ComponentB, ComponentC
**Similar pages:** - `/existing/page` - Uses list-page pattern
**Database conventions:** - Soft delete, timestamps, user tracking, RLS
### Clarification Questions
(Only asking about unique aspects)
### Data Model
1. Should [entity] have [field] for [purpose]?
2. Can [entity] relate to multiple [others]?
### Logic
3. What if [condition]?
4. Should [calc] update auto when [trigger]?
```

Step 5: Multiple Rounds

Continue until: All critical decisions made | No ambiguity | Edge cases understood | Data model clear | User flows defined


Phase 3: Confirmation

Step 1: Write Summary

:no_entry: CRITICAL: Wait for explicit approval

```markdown
## Feature Confirmation: [Name]
### Summary: [2-3 sentence overview]
### Reusing Existing Patterns
**Components:** - `ExistingComponent1` - For [purpose] | - `ExistingComponent2` - For [purpose]
**Patterns:** - List page pattern (from `/similar/page`) | - Slide-in form pattern | - Soft delete pattern
**Database:** - Following standard: soft delete, timestamps, RLS
### Critical Decisions
**Decision 1: [Topic]**
- Decided: [What] | - Rationale: [Why]
### Implementation Scope
**Database:** - New table: `table_name` (purpose) | - Modify: `existing_table` (add columns)
**API Routes:** - `GET /api/endpoint` - Purpose
**Pages:** - `/route` - Purpose (using [pattern] from [similar page])
**New Components:** - `ComponentName` - Purpose (if any new ones needed)
### User Confirmation
**Does this match your vision? Any changes?**
```

Step 2: Wait for Approval

User may: :white_check_mark: Approve → Proceed to Phase 4 | :counterclockwise_arrows_button: Request changes → Update, re-confirm | :plus: Add requirements → Back to Phase 2


Phase 4: Plan Creation

Step 1: Determine Structure & Phase Strategy

Split by execution pattern:

```
IF feature has NO database changes
  └─ Single plan: Phase 2 (Implementation)

IF feature has database changes (any size)
  ├─ Always separate Phase 1: Foundation (database-change, database-rls, seed-apply)
  └─ Phase 2+: Implementation

IF feature is large (>400 lines) OR has independent modules
  ├─ Split into multiple plan files by module/layer
  ├─ Each file = separate @implement-feature task
  └─ Use phase numbering to show parallel vs sequential execution
```

Split criteria:

```
├─ Independent API route groups → Separate plans (can run parallel)
├─ Shared vs feature-specific components → Separate plans (can run parallel)
├─ Role-specific pages (admin/member/staff) → Separate plans (can run parallel)
└─ Integration work → Separate plan (sequential after modules)
```

Target: Plans 200-400 lines | Clear phase numbering | Maximum parallelization

Step 2: File Naming - One Document Per Phase Step

:key: CRITICAL: Create SEPARATE plan documents for EACH phase step (1.1, 1.2, 2.1, etc.)

Why: Each document = one agent task | Enable true parallel execution | Clear dependencies

Naming Convention:

```
[feature-name]-[phase-number]-[command-keyword]-[phase-name].md
```

Phase Numbering:

```
├─ 1.1, 1.2, 1.3 = Sequential foundation phases (must run in order)
├─ 2.1, 2.1, 2.1 = Parallel implementation phases (can run simultaneously)
└─ 3.1, 3.1     = Parallel page/integration phases
```

Command Keywords:

```
├─ implement = @implement-feature (new features, APIs, components, pages)
├─ fix = @fix-bug (bug fixes, corrections, patches)
└─ refactor = @refactor-pilot then @refactor-rollout (code improvements, optimizations, restructuring)
```

Examples:

| Phase | Filename | Command |
| --- | --- | --- |
| 1.1 | `notifications-1.1-implement-database.md` | @database-change |
| 1.2 | `notifications-1.2-implement-rls.md` | @database-rls |
| 2.1 | `notifications-2.1-implement-api-routes.md` | @implement-feature |
| 2.1 | `notifications-2.1-implement-components.md` | @implement-feature |
| 3.1 | `notifications-3.1-implement-admin-pages.md` | @implement-feature |
| Bug fix | `fk-ambiguity-fix-services.md` | @fix-bug |
| Refactor | `server-components-refactor.md` | @refactor-pilot |

Location: docs/plans/

Rule: ALWAYS include phase number (1.1, 2.1, etc.) in filename AND phase name for clarity

Step 3: Create Files - One Per Phase Step

| Step | Action |
| --- | --- |
| 1 | Check `docs/plans/` exists (create if missing) |
| 2 | Create an individual plan document for each phase step (1.1, 1.2, 2.1, etc.) |
| 3 | Write files to `docs/plans/[feature]-[phase]-[command]-[name].md` |
| 4 | Confirm creation with full paths and phase numbers |
| 5 | Delete audit file if used (tracked from Phase 2, Step 1) |

Report File Deletion

| Rule | Details |
| --- | --- |
| Delete if | File matches `docs/reports/*-report-*.md` AND was used as input |
| Examples | `performance-report-2025-11-13.md`, `database-report-2025-11-13.md`, `codebase-report-2025-11-13.md` |
| Do NOT delete | Plan files, guides, other documentation |
| Confirm | Show deleted file path to user |

Step 4: Individual Phase Document Structure

:police_car_light: CRITICAL: Each phase gets its OWN document using the unified template below


Unified Plan Template

# Phase [X.X]: [Phase Name] - [Feature Name]
**Phase:** [X.X] | **Created:** [Date] | **Status:** Ready | **Estimated:** [X-Y min]
---
## Phase Info
**Phase Number:** [X.X]
**Phase Name:** [Database Schema / RLS Policies / Seed Data / API Routes / Components / Services / Pages]
**Command:** [@database-change / @database-rls / @seed-apply / @implement-feature]
**Feature:** [Overall feature name]
---
## This Phase
**What:** [2-3 sentences - what this phase accomplishes]
**Why:** [Why this phase is needed]
---
## Dependencies
**Depends on:** [None / Phase X.X complete]
**Blocks:** [Phase X.X, Phase X.X, all Phase X+]
**Can run with:** [None - sequential / List phases that can run in parallel]
---
## Context: Overall Feature
**Feature Goal:** [1-2 sentences about overall feature]
**This Phase Role:** [How this phase fits into the bigger picture]
---
## Prerequisites
**Read before implementing:**
- `docs/quick-reference.md`
- `docs/database/[relevant].md` (if database-related)
- `docs/guides/[relevant-pattern].md` (if implementation-related)
- Fetched schema: See Phase 0 (if database-related)
- Similar code: [paths] (if implementation-related)
---
## Reusing Existing
**Components:** [List existing components to use - if applicable]
**Patterns:** [List patterns from docs/guides - if applicable]
**Similar Code:** [Reference similar implementations - if applicable]
---
## Implementation Details

[FOR DATABASE PHASE 1.1:]
### Database Changes
**Schema Source:** Fetched from local Docker database

**New Tables:**
#### `table_name`
- Purpose: [What it stores]
- Pattern: Follow existing conventions (soft delete, timestamps, user tracking)
- Relationships: Links to [existing tables]
- Indexes: [Columns to index for performance]

**Modified Tables:**
#### `existing_table`
- Add fields: [Conceptual field purposes]
- Add relationships: [FK targets]

**Implementation Note:** AI fetches exact schema before implementation

[FOR RLS PHASE 1.2:]
### RLS Policies Required
**Tables:** [List tables needing RLS]

**Access Patterns:**
#### `table_name`
- SELECT: [user-scoped / role-based / public]
- INSERT: [authenticated / role-based]
- UPDATE: [owner-only / role-based]
- DELETE: [owner-only / role-based]

**Policy Specifications:**
- [Describe policy logic conceptually]
- [Reference existing similar policies]

[FOR SEED PHASE 1.3:]
### Seed Data Required
**Purpose:** [Enable local development of feature]

**Data Types:**
- [Entity type]: [N records with these characteristics]
- [Entity type]: [N records with these characteristics]

**Relationships:** [How seed data should link together]

[FOR API ROUTES:]
### API Endpoints
**GET `/api/endpoint`**
- Purpose: [Description]
- Query params: [List]
- Returns: [Data structure]
- Auth: Required/Optional
- Filter: [Logic]
- **CRITICAL:** Use slug-based references (not IDs)

**POST `/api/endpoint`**
- Purpose: [Description]
- Body: [Fields]
- Validation: [Rules]
- Auth: Required

**🔑 SLUG-BASED REFERENCES:**
- Reference options by slug: `status_slug: 'active'`
- NEVER hardcode IDs or names
- Slugs are consistent across environments

[FOR COMPONENTS:]
### Components to Create
**`ComponentName`**
- Purpose: [What it does]
- File: `src/components/[path].tsx`
- Props: [List with types]
- State: [Client/Server component]
- Pattern: [Reference similar component]

[FOR SERVICES:]
### Service Functions
**`functionName`**
- Purpose: [What it does]
- File: `src/lib/[path].ts`
- Params: [List]
- Returns: [Type]
- Pattern: [Reference similar function]

[FOR PAGES:]
### Pages to Create
**`/route/path`**
- Purpose: [What user does here]
- Pattern: [list-page / detail-page / etc.]
- Features: [Filter, search, create, edit, delete]
- Components: [List components used]
- Data: [API endpoints to fetch from]
- Auth: [Required roles]

**`/route/[id]`**
- Purpose: [Detail page function]
- Pattern: [detail-page-tabs / etc.]
- Tabs: [List tabs]
- Components: [List components]
- Data: [API endpoints]

### User Flow
1. [User action 1]
2. [System response 1]
3. [User action 2]
4. [System response 2]
---
## Security
- [ ] Auth required for protected endpoints/routes
- [ ] Input validation
- [ ] SQL injection prevention (use Supabase client)
- [ ] XSS prevention
- [ ] Role-based access (if applicable)
---
## Performance
- [ ] Efficient queries (no N+1)
- [ ] Pagination if needed
- [ ] Debounce for search
- [ ] Optimistic updates where applicable
- [ ] Code splitting if needed (pages)
- [ ] Lazy loading (pages)
---
## Accessibility
- [ ] Keyboard navigation (pages)
- [ ] Screen reader support (pages)
- [ ] Proper heading hierarchy (pages)
- [ ] Focus indicators (pages)
- [ ] Color contrast (pages)
---
## Acceptance Criteria
**Complete when:**
- [ ] [Phase-specific criterion 1]
- [ ] [Phase-specific criterion 2]
- [ ] All files created/modified
- [ ] No errors in migration/policy/seed/code
- [ ] No console errors
- [ ] No linter errors
- [ ] Functions work as specified
- [ ] Security checks pass
- [ ] Can proceed to next phase (if applicable)
*Phase Plan - Generated by @plan-feature*
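The slug-based reference rule above lends itself to a short sketch. This is a minimal TypeScript illustration, assuming a hypothetical options table; the rows, IDs, and slugs are made up:

```typescript
// Hypothetical rows from an options table. Row IDs typically differ between
// environments (local/staging/prod), while slugs stay stable.
type Option = { id: number; slug: string; label: string };

const statusOptions: Option[] = [
  { id: 7, slug: 'active', label: 'Active' },     // id might be 12 in prod
  { id: 9, slug: 'archived', label: 'Archived' },
];

// Resolve an option by slug at runtime instead of hardcoding its ID.
function optionId(options: Option[], slug: string): number {
  const match = options.find((o) => o.slug === slug);
  if (!match) throw new Error(`Unknown option slug: ${slug}`);
  return match.id;
}

const activeId = optionId(statusOptions, 'active'); // safe in any environment
```

Hardcoding `7` would silently break in an environment where the rows were inserted in a different order; resolving by slug keeps the code portable.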

Document Rules:

  • Each phase = separate file
  • Include phase number in filename AND header
  • Keep <300 lines per document
  • Make each document self-contained
  • Include dependencies/blocks clearly
  • Specify command to run
  • Use only sections relevant to phase type

Plan Quality Checklist

Before finalizing:

  • Database schema fetched (Phase 0 - ALWAYS, even if no db changes)
  • Codebase discovery completed
  • Existing components/patterns/similar pages referenced
  • NO code/SQL/hardcoded schemas included
  • SEPARATE document created for EACH phase step (1.1, 1.2, 2.1, etc.)
  • Phase number included in filename ([feature]-[phase]-[command]-[name].md)
  • Each document is self-contained (agent can work from single doc)
  • Dependencies/blocks clearly specified (in each document)
  • Each document <300 lines
  • Scannable format (tables/bullets, 1-2 sentences max)
  • Conceptual descriptions only (AI fetches implementation details)
  • All files created in docs/plans/
  • Report file deleted if used as input (from docs/reports/)
  • Optimistic update strategy specified (for list/table operations)

Best Practices

| :white_check_mark: DO | :cross_mark: DON’T |
| --- | --- |
| Fetch database schema first (Phase 0) | Hardcode field types/names in plans |
| Discover existing components/patterns first | Skip codebase discovery |
| Reference similar pages/guides | Duplicate existing pattern docs |
| Default to established patterns | Include any code/SQL |
| Create SEPARATE document for EACH phase step | Create one large document with all phases |
| Include phase number in filename (1.1, 2.1, etc.) | Use generic filenames without phase numbers |
| Make each document self-contained | Assume agent reads multiple documents |
| Clearly specify dependencies/blocks | Leave phase relationships unclear |
| Ask specific structured questions | Create prose-heavy docs |
| Use tables/bullets (AI-scannable) | Exceed 300 lines per document |
| Make all decisions in plan | Leave decisions ambiguous |
| Create files in docs/plans/ | Skip acceptance criteria |
| Delete report file after using it | Leave report files (from audits) |
| Include command keyword in filename (-implement, -fix, -refactor) | Use generic plan names without keywords |
| Keep descriptions 1-2 sentences max | Add verbose examples |
| Specify optimistic updates for lists/tables | Plan to refetch entire lists |
| Use phase numbering: 1.1, 1.2 (sequential), 2.1, 2.1 (parallel) | Use generic Phase 1, Phase 2 without numbers |
| Map phases to specific commands (@database-change, @database-rls, @implement-feature) | Leave execution strategy ambiguous |
| Separate database/RLS/seed into individual docs | Combine foundation phases in one document |
| Identify parallel work explicitly (same phase numbers = parallel) | Assume user knows what can run in parallel |
| Enable multi-agent execution (one doc = one task) | Create documents requiring coordination |

Success Metrics

Plan successful when:

  • Database schema fetched (Phase 0 - ALWAYS)
  • Codebase discovery completed
  • Existing components/patterns/similar pages identified
  • Questions focused on unique requirements
  • User confirmed matches vision
  • All requirements clear
  • NO code/SQL/hardcoded schemas in plan
  • SEPARATE document created for EACH phase step (1.1, 1.2, 2.1, etc.)
  • Each document is self-contained (agent needs only that one doc)
  • Phase number in filename ([feature]-[phase]-[command]-[name].md)
  • Dependencies/blocks clearly specified (in each document)
  • Plans are AI-optimized (tables/bullets, 1-2 sentences)
  • Each document <300 lines
  • Agent can start from single document without questions
  • Decisions documented
  • Multiple files created in docs/plans/ (one per phase)
  • Report file deleted if one was used as input (from docs/reports/)
  • Optimistic update strategy defined (for features with list/table updates)
  • Multi-agent execution enabled (parallel phases identified)

Related Commands

After Planning:

  • @database-change (Phase 1.1: database schema changes)
  • @database-rls (Phase 1.2: RLS policies)
  • @seed-apply (Phase 1.3: test data)
  • @implement-feature (Phase 2+: implements plan modules)

Note: User decides when to commit/deploy changes


Global Cursor Command - Works across all projects

6 Likes

This is fairly fluid: I usually tweak my commands once or twice a week to refine them further as I come across minor issues.

The biggest feature that sets Cursor apart from its competitors is the tools integrated directly into it. Tools like grepping, rules, web searching, and others take Cursor far beyond its rivals, because they let you manage your coding experience much more effectively. That’s why I haven’t seen a better IDE than Cursor yet. I keep trying many newly released IDEs, but none of them give me the results I’m looking for.

4 Likes