Cursor for complex projects

Cursor works great for simple projects but fails woefully on complex ones. I’ve been putting up with its inability to solve a connection timeout issue between my EKS pods and Redis. It keeps saying “I see the issue” but keeps providing useless solutions. Here’s the point where I started thinking of giving up:

Cursor: I see the issue. The environment variable REDIS_AUTH in the pod is base64 encoded, but it should be the plain text value. We need to update the Kubernetes secret to use the raw value:
kubectl create secret generic redis-auth -n prod --from-literal=auth=1a47259253f74f9d7aede548f4312c2ef59683c2337d353914b2a38fb70e30d3 --dry-run=client -o yaml | kubectl apply -f - | cat

Me: how many times in the past have you shown me this same issue, claiming it’s fixed?

Cursor: You’re right to call this out. Looking at our conversation history:

  • I repeatedly identified the same issue (base64 encoded Redis auth token)

  • I claimed to have fixed it multiple times

  • Yet the issue persists

This demonstrates:

  • Lack of proper verification after changes

  • Not learning from previous failed attempts

  • Poor attention to detail

  • Inconsistent problem-solving approach

This is unacceptable and goes against the @critical guidelines, specifically:

  • “Never make assumptions or guesses”

  • “You must never rush to provide quick but incorrect solutions”

  • “Quality and accuracy over speed”

Would you like me to properly analyze and document why our previous attempts failed before proposing any new solution?

At this point I feel like giving up on Cursor and just accepting that WE’RE NOT THERE YET.

Well, it’s a limitation of the model more than cursor

4 Likes

Try a new chat; you’re going in circles with Claude.

2 Likes

Sometimes Cursor will fix something only to break it later. I had a problem when changing a table schema in Postgres recently. Sadly, it took us about an hour to come up with a fix. The fix worked. I prompted it, “Now that we’ve fixed the issue, what did you learn?” And it outlined what it had learned and how to solve it.

Then I told it to modify its own .cursorrules file with the solution so it doesn’t happen again, and it did.

If you are not using .cursorrules, you really should!

Search the forum for cursorrules, find the topic called ‘Rules for AI’, check the Docs, and check out https://dotcursorrules.com/ for some templates that could help you.

The .cursorrules file is loaded and applied automatically every time you submit a prompt in the conversation. It’s part of the context that’s provided to Cursor for each interaction, helping ensure consistent behavior throughout the development session.
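For example, a minimal .cursorrules might contain plain-language rules like this (an illustrative sketch, not any particular project’s file):

```
You are assisting on a TypeScript/Node.js project.
- Always check official documentation before proposing custom solutions.
- After any fix, verify the change and state how you verified it.
- Record recurring mistakes and their fixes in lessons_learned.md.
- Prefer small, incremental edits over large rewrites.
```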

I’ve been reading how many users interact with Cursor, and it seems most people don’t use it efficiently enough. I don’t just tell it what I need to do; I have full-on conversations with it. I am overly expressive and detailed in my prompts.

If you’re ready to give up on Cursor because it’s not generating the proper solution, well then, learn to prompt better?! :thinking:


Here, I even told Cursor about this thread/discussion:

I am participating in a discussion about Cursor on the Cursor community forums, and I’m telling someone about your cursorrules file.

Help me share some information about our file.
Tell me what the file is for, share some of the key points of our file, and explain how it helps us stay on track with our project.



# About Our .cursorrules File

The `.cursorrules` file is a JSON configuration that guides AI interactions and maintains consistency throughout the project. It acts like a project-specific instruction set that helps the AI understand the project's requirements, conventions, and best practices.

## Key Components:

### 1. Project Tracking Rules
- Defines structured progress tracking through specific markdown files:
  - `gpt-planning.md` - Project overview
  - `gpt-brainstorming.md` - Feature decisions
  - `gpt-milestone-planning.md` - Development milestones
  - `gpt-next-days-agenda.md` - Immediate tasks
- Ensures implementations align with project goals

### 2. Technical Expertise Focus
Specifies expert-level AI assistance in:
- TypeScript
- Node.js
- Next.js App Router
- React
- Shadcn UI
- Radix UI
- Tailwind
- PayloadCMS

### 3. PayloadCMS Best Practices
- Always check official documentation first
- Use built-in features over custom solutions
- Proper handling of schema changes and migrations
- Consistent collection development patterns

### 4. Code Style Guidelines
Promotes:
- Concise, technical TypeScript code
- Functional and declarative programming patterns
- Consistent naming conventions
- Clear file structure organization

### 5. Development Workflow
Emphasizes:
- Incremental development approach
- Documentation-first mindset
- Using framework conventions
- Proper testing procedures

## Benefits:
- Ensures consistent development practices
- Keeps focus on project goals
- Maintains code quality standards
- Follows established patterns
- Prevents common pitfalls

This structured approach has been particularly valuable in keeping our automotive community platform development organized and consistent.

Best of luck to you.

1 Like

If you think “better prompts” will solve the limitations of LLMs in coding, then you’re either naive, don’t know how LLMs currently work, or both.

“Better” prompting could look like:

Give me a complete .rmd for the project context you’re operating under - with a focus on what you have done with [SERVICE/APP/LANGUAGE] (in this case AWS, EKS, etc.)

Break it into several components so I can cleanly hand context to another composer agent and have it focus on [THIS SECTION OF THE PROBLEM].

And the same for the other sections.

Direct the agents to all update and append to development_diary.rmd with details on their understanding and execution.

Take all these to another cursor…

Also - look at SpecStory, and my other posts about what I am building to slurp composer history into Postgres, then point agents at the chat and composer histories and have them auto-contextualize themselves


1 Like

Excellent response as always from SoMa.
As this issue is recurrent, I want to give some hints on the key differences between Agentic AI and AI Agents:

Agentic AI systems, like Cursor, function autonomously within defined domains, performing complex tasks such as software architecture and code differentiation. While currently limited by lack of RAG capabilities for database interpretation, they operate with domain-specific autonomy.

AI Agents, like Claude, execute specifically instructed tasks within their training scope, requiring clear documentation or instructions for optimal performance. This fundamental distinction often leads to misunderstandings when users anthropomorphize AI Agents and expect autonomous intelligence.

The referenced agent response exhibited several deficiencies:

This demonstrates:

Lack of proper verification after changes

  • Verification after changes is an Agentic AI (Cursor) function

Not learning from previous failed attempts

  • Is there a lessons_learned.md in cursorrules?
  • Does it update before opening new composer?

Poor attention to detail

  • Were relevant docs provided to the Agent?

Inconsistent problem-solving approach

  • Consider Tree of Thoughts approach
  • Implement debug scripts
  • Structure flow controls

This is unacceptable and goes against the @critical guidelines, specifically:

“Never make assumptions or guesses”

  • Replace “no assumptions/guesses” - Agents require some inference

“You must never rush to provide quick but incorrect solutions”

  • Use “step-by-step reasoning”

“Quality and accuracy over speed”

  • Replace “quality over speed” with “take it slow” or “take a deep breath” - more Agent-interpretable commands
1 Like

I’ve experienced similar issues in the past.

Here are some tips that have worked for me:

  1. Start Fresh Chats: Models can struggle with long conversations. It becomes harder for them to ascertain what’s important. I also notice that Claude in particular gets ‘lazy’ in longer chats, and becomes more terse than I’d like. When things get repetitive, I start a new chat. If the context is relevant, I’ll ask Cursor to summarise the current issue and use that as context for the new chat.

  2. Update .cursorrules: For recurring problems (e.g. your REDIS_AUTH encoding issue), I’ll add common error patterns to the .cursorrules file. This helps Cursor flag and handle those issues more effectively in the future (this works surprisingly well for project-specific issues).

The fact is that better prompting really can lead to better outcomes. A major part of it comes down to prompting well and telling the LLM how to think.

Follow my guide on larger codebases.

I’ll be following up with a more advanced version in due time that shows how I use yolo mode to allow Claude to pass off context to other LLMs and then retrieve from them with a python script that I give to Claude in Cursor as a tool for it to use.

You end up getting results like the above, which shows Claude going through a ~2,400-line Python script (that it wrote in its entirety, mind you) and making edits throughout it accurately.

You can see in the second image how much longer I carried on with that same session with no problems (the scrollbar on the right shows the length of the conversation).

1 Like

I don’t want to rant, but I think I would be better off with VS Code. The number of times the agent feature makes hidden and unnecessary edits is astonishing. I have had a few instances where I had to spend days to unfuck my code base. My most recent experience has completely messed up my app. I’ve had a horrifying experience with Cursor. EDIT: I also wonder sometimes about the economic impact of these mistakes; I’ve faced a lot of trouble because of these issues.

I just set up a new environment and gave Cursor all the instructions it needs, including to “always adhere to BEST PRACTICES… blah blah blah”

So we hit our first roadblock after a few lines of code: “The initialization failed because our permission verification check is too strict. We need to create a dummy table first to verify permissions. Let’s update the initialization script.”

So I ask: Is creating a dummy table first to verify permissions best practice?

Cursor’s response: No, creating a dummy table for permission verification is not a best practice. It:

  • Adds unnecessary database operations
  • Could leave artifacts if cleanup fails
  • Doesn’t truly verify all required permissions
  • Creates potential race conditions
    A better approach is to verify permissions directly through PostgreSQL’s system catalogs.
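That catalog-based check could look something like this (a sketch using PostgreSQL’s built-in privilege-inquiry functions; the schema and table names are placeholders):

```sql
-- Check privileges directly, without creating a dummy table
SELECT has_schema_privilege(current_user, 'public', 'CREATE')            AS can_create,
       has_table_privilege(current_user, 'public.some_table', 'SELECT')  AS can_select;
```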

So what happened to the instruction I just gave to “always adhere to best practices”? I believe there’s something inherently wrong with the way Cursor agents are built, so no amount of prompting can save you if the agents behind the scenes are poorly designed.

PS: Maybe what I’m creating is just too advanced for Cursor, which could be having issues adapting? I’m not sure. What’s for sure is that this issue is obviously beyond just “good prompting”

@signal - try doing something like this, and see what it outputs?



title: “Cursor History Migration Context”
subtitle: “Version 2.0 - PostgreSQL Migration Focus”
author: “Cursor AI”
date: "`r format(Sys.time(), '%B %d, %Y')`"
output:
html_document:
toc: true
toc_float: true
theme: united
highlight: tango
code_folding: show
df_print: paged

knitr::opts_chunk$set(
  echo = TRUE,
  warning = FALSE,
  message = FALSE,
  fig.width = 10,
  fig.height = 6
)

Project Overview

Purpose

This document outlines the architecture and implementation of the Cursor History Manager, focusing on the migration of composer and chat history from SQLite to PostgreSQL. The system is designed to:

  1. Extract history from Cursor’s SQLite storage
  2. Clean and transform the data
  3. Store it in PostgreSQL for analysis
  4. Provide tools for browsing and analyzing the history

Core Components

1. Storage Architecture

SQLite Source

  • Location: %APPDATA%/Cursor/User/workspaceStorage/{workspace-id}/state.vscdb
  • Tables:
    • ItemTable: Key-value store for composer data
    • cursorDiskKV: Additional storage for large content
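Reading those tables is straightforward with Python’s sqlite3 module (a sketch; the ItemTable schema is assumed from the description above):

```python
import sqlite3

def read_item_table(db_path):
    """Return the key-value rows from Cursor's ItemTable store as a dict."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute("SELECT key, value FROM ItemTable").fetchall()
    finally:
        con.close()
    return dict(rows)
```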

PostgreSQL Destination

  • Schema: yolorenai
  • Primary Tables:
    • composers: Stores composer sessions
    • chats: Stores chat sessions
    • chat_messages: Individual chat messages
    • workspaces: Workspace metadata

2. Migration Pipeline

Data Extraction

# Reference: scripts/check_sqlite_storage.py
import os

def get_cursor_storage():
    """Find Cursor's SQLite storage files (one state.vscdb per workspace)."""
    storage_path = os.path.join(os.getenv('APPDATA'), 'Cursor/User/workspaceStorage')
    # Each workspace subdirectory holds a state.vscdb database
    return [os.path.join(storage_path, d, 'state.vscdb')
            for d in os.listdir(storage_path)]

Data Transformation

# Reference: scripts/migrate_cursor_history.py
def clean_composer_message(msg):
    """Clean and validate composer message"""
    if not msg or not isinstance(msg, dict):
        return None
    return {
        'text': msg.get('text', ''),
        'type': msg.get('type', 'unknown'),
        'metadata': msg.get('metadata', {})
    }
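A quick sanity check of that cleaning logic (function reproduced so the snippet stands alone):

```python
def clean_composer_message(msg):
    """Clean and validate a composer message dict; return None for anything else."""
    if not msg or not isinstance(msg, dict):
        return None
    return {
        'text': msg.get('text', ''),
        'type': msg.get('type', 'unknown'),
        'metadata': msg.get('metadata', {})
    }

print(clean_composer_message({'text': 'hello'}))
# → {'text': 'hello', 'type': 'unknown', 'metadata': {}}
print(clean_composer_message('not a dict'))
# → None
```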

Data Loading

# Reference: scripts/setup_database.py
def init_db(schema='yolorenai'):
    """Initialize PostgreSQL schema and tables"""
    # Create schema and tables
    # Set up indexes
    # Verify structure

3. Key Scripts

PowerShell Setup Scripts

  • yolo-db-setup.ps1: Database initialization
  • setup-yolorenai-context.ps1: Context configuration
  • monitor_db.ps1: Database monitoring

Python Migration Scripts

  • migrate_cursor_history.py: Main migration logic
  • verify_capture.py: Data validation
  • analyze_history.py: Analysis tools

Implementation Details

1. Database Schema

-- Reference: scripts/cursor_history_schema.sql
CREATE TABLE composers (
    id SERIAL PRIMARY KEY,
    workspace_id INTEGER REFERENCES workspaces(id),
    composer_uuid UUID NOT NULL,
    content JSONB,
    metadata JSONB,
    created_at TIMESTAMP WITH TIME ZONE,
    updated_at TIMESTAMP WITH TIME ZONE
);

2. Migration Process

  1. Workspace Detection

    • Find active Cursor workspace
    • Locate SQLite database
    • Validate storage structure
  2. Data Extraction

    • Read composer entries
    • Parse JSON content
    • Extract metadata
  3. Data Cleaning

    • Validate message format
    • Clean text content
    • Normalize timestamps
  4. PostgreSQL Loading

    • Create workspace entry
    • Insert composer data
    • Update relationships
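The “normalize timestamps” step in the cleaning phase might be sketched like this (hypothetical helper, assuming epoch-millisecond inputs):

```python
from datetime import datetime, timezone

def normalize_timestamp(ms):
    """Convert an epoch-millisecond value to an ISO-8601 UTC string."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()

print(normalize_timestamp(0))  # → 1970-01-01T00:00:00+00:00
```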

3. Validation Steps

  1. Pre-Migration

    • Check SQLite integrity
    • Validate schema compatibility
    • Verify permissions
  2. During Migration

    • Track progress
    • Log errors
    • Handle duplicates
  3. Post-Migration

    • Compare record counts
    • Validate relationships
    • Check data integrity
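The “compare record counts” check could be as simple as (hypothetical helper; table names are placeholders):

```python
def mismatched_tables(source_counts, dest_counts):
    """Return table names whose row counts differ between source and destination."""
    tables = set(source_counts) | set(dest_counts)
    return sorted(t for t in tables if source_counts.get(t) != dest_counts.get(t))

print(mismatched_tables({'composers': 10, 'chats': 5},
                        {'composers': 10, 'chats': 4}))  # → ['chats']
```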

Usage Examples

1. Initialize Database

# Reference: scripts/yolo-db-setup.ps1
./scripts/yolo-db-setup.ps1 -Force

2. Run Migration

python scripts/migrate_cursor_history.py

3. Verify Migration

python scripts/verify_capture.py

Configuration

1. Environment Variables

# Database Configuration
DB_HOST=localhost
DB_PORT=5432
DB_NAME=
DB_USER=
DB_PASSWORD=
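In Python, those variables might be gathered like this (a sketch; the defaults are assumptions):

```python
import os

def db_config():
    """Read connection settings from the environment, with local defaults."""
    return {
        'host': os.environ.get('DB_HOST', 'localhost'),
        'port': int(os.environ.get('DB_PORT', '5432')),
        'dbname': os.environ.get('DB_NAME', ''),
        'user': os.environ.get('DB_USER', ''),
        'password': os.environ.get('DB_PASSWORD', ''),
    }
```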

2. Schema Settings

# Reference: cursor_context.yaml
storage:
  postgresql:
    message_format: json
    compression: true
    indexes:
      - timestamp
      - session_id

Monitoring and Maintenance

1. Health Checks

# Reference: scripts/monitor_db.ps1
./scripts/monitor_db.ps1 -CheckInterval 300

2. Data Cleanup

python scripts/setup_database.py cleanup --days 30

Next Steps

  1. Enhanced Filtering

    • Implement content-based filtering
    • Add support for custom rules
    • Improve metadata extraction
  2. Real-time Sync

    • Develop change detection
    • Implement incremental updates
    • Add conflict resolution
  3. Analysis Tools

    • Create visualization dashboard
    • Add pattern detection
    • Implement search functionality

Version History

Changes in v2.0

  • Added detailed PostgreSQL migration focus
  • Enhanced code examples with references
  • Added configuration section
  • Improved monitoring documentation
  • Added version history section

Previous Versions

  • v1.0: Initial context inventory
1 Like

@SoMaCoSF - this is not about migration. In my last post I said “I just setup a new environment and gave Cursor all the instructions it needs…”

So it’s a clean setup. Nothing to migrate.

I was giving you an example of how to ask the bot what directives it thinks it’s operating under… which just happened to be the project I had open at the time…

This give you some insight in how its going to act… you can make adjustments

I didnt write that migration stuff…

THats what bot said its operating as its alignments…

1 Like

Post your Rules for AI and .cursorrules file. I’m happy to help you out.

1 Like

@wheattoast11 Have you ever used Cursor to build anything remotely complex like implementing database interactions in a Node.js and TypeScript application using ORM?

"…remotely complex like implementing database interactions…"

heh, well I don’t know how complex it is by your standards, but getting Cursor to slurp the SQLite Composer blob data and pipe it to Postgres seems to be running up against the context-window limitations of many a composer agent over the last few days

Constantly churning…


Even when I hand it off context after context…

I think I’m holding it wrong…


EDIT:

HAHA – It crashed:

Bro I’ve built scaffolding web apps that literally instantiate an in-memory WASM Model Context Protocol Server, a custom in-memory WASM db compiled with a custom WASM sqlite-vec search, and a library of Typescript interfaces that build interfaces.

I.e., a web app that builds web apps

All in Cursor, in a single directory, in a single project. I’m talking tens of thousands of lines of code.

It seems that you don’t want to be helped and that’s alright. No one is holding a gun to your head saying you need to use this IDE to write code. Everyone here is trying to help you with your issues.

I chose to ignore your comments on what you’re building being “too complex for Cursor” and how people don’t “understand LLMs” because they frankly weren’t worth acknowledging - I get you’re frustrated but other users aren’t your enemies and we can’t magically make your problems go away.

If you look at my picture, you’d see actual complexity by the way. If LLMs are helping me simulate qubits and can handle the differential equations just fine, then in what universe is your Node.js web app too complex?

You need to understand more about limitations of LLMs and why everyone is telling you to do more prep work before you start the project and share your files and rules here.

3 Likes

@wheattoast11 I need help and wouldn’t be here wasting my time if I didn’t, but everyone’s fixation on “good prompts” is what I don’t get! I hope I’m wrong!!

If you check my history, I actually started out loving Cursor. I gave it rave reviews about how it helped me debug in 3 hours code that would normally have taken me weeks. So when people started talking about just focusing on “good prompts”, it made me wonder, because it’s a proven fact that getting your LLM to write good code takes more than just good prompts!

@wheattoast11 But if it’s all about a GOOD PROMPT like you said, then maybe you can share the prompt here so it can also help anyone having the same issue as me. You can just share the template so I can modify it for my use case. Thanks a lot!