As a long-time Cursor user, I’m deeply disappointed with the recent performance of the 3.7 “thinking” model. The code generation quality has noticeably deteriorated, creating more problems than it solves:
Rule Ignorance: The model frequently disregards project-specific rules and coding conventions, requiring constant manual corrections.
Framework Confusion: In a React project, I received a Svelte component! WTF. This isn't just wrong - it's fundamentally broken.
Context Blindness: The model often fails to maintain consistency with existing implementation patterns, even when explicitly defined.
We adopted AI coding tools to save development time, not to waste hours fixing nonsensical outputs. This regression severely impacts productivity and undermines trust in the tool.
@CosPie You're a long-time Cursor user but joined 10h ago, at the exact time of your post. I've got the opposite "problem": it always reads the rules, and sometimes I need to tell it not to. You may have problems with your rule descriptions - the model needs a good description, and it takes time to refine one. My rules are also structured in XML so it follows and verifies each step. Take my description for Python guidelines, which always works: "Python Style Guide, MUST activate when interacting with python code".
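The full rule is longer, but the shape is roughly this (a simplified sketch, not the actual rule):

```
---
description: Python Style Guide, MUST activate when interacting with python code
globs: *.py
alwaysApply: false
---
<python-style-guide>
  <rule>Follow PEP 8 naming and formatting</rule>
  <rule>Add type hints to every public function</rule>
  <verify>Before finishing, re-check each rule above and state whether it was applied</verify>
</python-style-guide>
```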
Add this to your rules for AI:
@drahn "never" contradicts the "update" instruction; ask the AI to refine your rules.
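For example (a made-up pair, not your actual rules), two lines like these pull the model in opposite directions:

```
- NEVER change existing component styles
- Update every component's styles to match the new theme
```

It can't satisfy both at once, so it tends to quietly drop one of them - better to have the AI rewrite the rules so they don't conflict.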
Yes, same issues here. Rule ignorance, context blindness, and it broke my page into components, each with its own CSS. I use a shadcn/ui theme sitewide, which it ignored in favor of its own styling.
My experience was good before, but in the last week or two it has become really stupid, to the point where it gave me Svelte code in a React project.
I have been using Cursor for half a year, but never joined the forum discussion until now, when I can no longer stand its stupidity. BTW, my forum account and my paid account are not linked.
I have been using cursor-auto-rules-agile-workflow as a requirements development template (the workflow goes PRD -> Arch -> Epic -> Story), and added a React rule like the one below; a minimal example component that follows these rules is sketched after the rule block:
---
description: react component guideline
globs: *.tsx
alwaysApply: false
---
- When creating a React component, ensure it has no required props (or provide default values for all props)
- Use valtio for state management, with action functions defined at module level (better for code splitting)
- Prefer named export functions
- You may use the following libraries in your React code:
- antd (UI components)
- recharts (for charts)
- @formkit/auto-animate (for animations)
- Use `useErrorBoundary` if the component makes a network request or an uncatchable error may occur
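For reference, a minimal component that follows these rules might look roughly like this (an illustrative sketch, not code from my project; the `useErrorBoundary` rule is left out since no specific package is pinned for it):

```tsx
import { Button } from 'antd';
import { proxy, useSnapshot } from 'valtio';

// Module-level valtio store with action functions defined alongside it
// (kept out of the component for better code splitting, per the rule above)
const counterState = proxy({ count: 0 });
export const increment = () => {
  counterState.count += 1;
};

// Named export, and the only prop has a default value, so nothing is required
export function Counter({ label = 'Clicks' }: { label?: string }) {
  const snap = useSnapshot(counterState);
  return <Button onClick={increment}>{label}: {snap.count}</Button>;
}
```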
It worked fine at first, but recently it has started working noticeably less intelligently. I don't know whether it's because of some limitation in Cursor, maybe for cost reasons or something else.
Then welcome! You'll find a treasure in this forum, as many of us help each other. After reviewing that repo, I would remove this line from the emoji rules ("- Maintain professional tone while surprising users with clever choices"), as it can cause many issues - you usually don't want surprises. Also, since the workflow seems linear, can you find the point inside the Story or PRD where it decided to use Svelte? That's really concerning, as LLMs work a lot by following patterns.
Other tips: make sure React is listed in your technology stack, and change your React rule description to "React Component Guidelines, MUST activate when interacting with TypeScript code".
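So the frontmatter of the React rule above would become something like:

```
---
description: React Component Guidelines, MUST activate when interacting with TypeScript code
globs: *.tsx
alwaysApply: false
---
```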
Also ensure that your initial context doesn't exceed 2k lines, including rules.
Finally, try the same prompt with a known-to-work version like 0.45.14 to check whether it's a bug in Cursor's context management (but you'll need to remove `alwaysApply` from the .mdc files, as it isn't supported in that version).
You're fighting the edit model on this one - the coding model uses edits to tell the edit model what to do, and the edit model never sees your rules. It knows to remove comments meant "for it" when it sees them later, but it isn't good at knowing what "for it" actually means.