priorities bro
Rules not working 100% of the time – a rule is a suggestion, not a rule.
Have you tried selecting other models or Auto? That sounds like a network-connection problem to me – possibly a model being overloaded on their end.
I dumped Cursor back in November when it made no financial sense to keep using it. There was a similar event then – app updates and model updates – and they introduced new pricing for premium requests.
Two days ago I thought I'd give it a shot again, figuring some things would have been worked out. Bad timing I guess LOL. The big issue for me was keeping package versioning in context. I imagined that at some point a simple RAG solution could be implemented per account, per project, that ingested the correct version documentation, examples, etc.
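A per-project setup like that could start even simpler than embeddings: key doc snippets to the versions the project actually pins, and only retrieve from those. A minimal sketch, assuming a requirements-style pin file – all names and snippets below are hypothetical illustrations, not an existing Cursor feature:

```python
# Hypothetical per-project retrieval layer: doc snippets are indexed by
# (package, major.minor version), and queries only see snippets matching
# the versions pinned in this project.

# In practice these would be ingested from real docs for the pinned versions.
DOC_INDEX = {
    ("requests", "2.31"): "requests 2.31: use requests.get(url, timeout=...)",
    ("requests", "1.0"):  "requests 1.0: legacy API, no Session keep-alive",
    ("numpy", "1.26"):    "numpy 1.26: np.float is removed; use np.float64",
}

def parse_requirements(text):
    """Parse 'name==version' pins from a requirements-style string."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, version = line.split("==", 1)
            # keep only major.minor so patch releases share the same docs
            pins[name.strip()] = ".".join(version.strip().split(".")[:2])
    return pins

def retrieve(query, pins):
    """Return doc snippets only for packages/versions this project pins."""
    hits = []
    for (pkg, ver), snippet in DOC_INDEX.items():
        if pins.get(pkg) == ver and pkg in query.lower():
            hits.append(snippet)
    return hits

pins = parse_requirements("requests==2.31.0\nnumpy==1.26.4\n")
print(retrieve("how do I set a timeout in requests?", pins))
# only the 2.31 snippet comes back, never the 1.0 one
```

The design point is that version-aware filtering happens before any ranking or embedding step, so the model never sees documentation for a version the project doesn't use.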
To get maximum benefit, here's what remains true: good programmers with a strong understanding of SOLID, DRY, and KISS principles benefit most from AI programming. No surprise there, I guess. AI can do grunt work with strong sandboxing. On top of this sits a strong understanding of prompting, to ensure these principles are consistently followed. Back in November we used .md files for prompting with principles/architecture/stack info rather than relying on rules. The idea of letting AI architect and implement a maintainable, clean project still seems a bit off in the future.
I don’t claim to be some amazing developer. I’m not. But the principles remain the most important thing when using AI. I don’t feel like AI is necessarily making programming easier. It’s making it faster in some ways, but far more effort is required in consciously communicating and designing clear instructions. Of course you can use AI to help you with this too. With that in mind, you can follow these principles with prompts. Atomize your work. Broad-stroke prompts make ■■■■■■ unmaintainable projects.
I was really hoping I could say (for example) “make me a Twitter clone with these differentiating features”, but yeah, no go. My initial prompt with clear rules, a PRD, and a development plan did pretty well. But the first simple change I prompted for destroyed it, and it could not recover.
Anyway, bad timing for me switching back to Cursor lol. I spent like 25 bucks the first day using the “Max” model. Gonna go back to my intense guardrails and abandon “vibe” programming for now.
I’ve been treating it as a very hard-working very over-eager junior developer just starting out in their first job and ready to take over the world. It’s a different sort of hand-holding, and it needs lots of oversight and seasoned guidance and direction. Otherwise it goes off the rails, down the rabbit holes, and makes terrible mistakes in both strategy and tactics.
Glad I was never that young.
Completely agree with the other posters. I have been using Cursor for the last couple of months – loved it and got the Pro subscription. But over the last few days Cursor has become completely brain-dead. It says it cannot see entire files even though they are shared in the context. Since it can’t see them, the AI just guesses and hacks, creating more issues than it solves. I loved this app and even suggested my company consider it over Copilot – I took that back today. Cursor team, you had a great product, but you screwed it up. I even added the Max model thinking it would help. All it did was cost me more money to screw up more code. What other alternatives are people considering? I used Copilot but found its merging of changes pretty ■■■■■■. Given how bad Cursor has become I may go back. Any other competitors out there?
This problem with the model saying it can’t see the entire file … it might not be Cursor. I had this happening all day today in JetBrains AI Assistant, particularly with the Claude 3.7 model.
At this stage it is running in “casino” mode.
It is now too slow.
In my experience, GitHub Copilot is worse than Cursor. I’ve paid for a GitHub Copilot subscription and am using its agent in VS Code Insiders, but it has significant issues. Here are some of them:
- Because it’s still in preview, you cannot upload images.
- It frequently hits rate limit issues.
- If you change the model, the new model cannot read what was written in the same or previous responses.
- It forgets what it wrote in the previous response!
I’ve been struggling to fix several issues with GitHub Copilot, so I tried using Cursor (the free version), and it’s amazing.
I think Cursor has become so slow that the slow pool is basically unusable, and usage-based pricing then burns 20 bucks in a day.
This kills a lot of the appeal.
This does not make sense as a value proposition. I can’t wait a minute or more every time I want to engage with my code.
They need to figure this out. The slow pool should be 10 seconds at most.
Are there any alternatives that work well for using Sonnet 3.7 in thinking mode? Is the Windsurf Pro subscription at 60 bucks fast in terms of workflow with agentic mode turned on?
Please share more details about the problems you’re encountering. What languages do you primarily code in? Is it Python, or something else? Which large language models do you typically use with Cursor? What kind of project structure do you usually follow? Do you often have many folders open simultaneously in the same workspace?
Provide some specifics about your work, and I’m confident we can identify areas for improvement. I’ve used Cursor with Claude Sonnet 3.5 for the past five months for all my projects. While it’s not perfect, my productivity has increased dramatically – at least 100 times compared to when I was coding manually. I’m sure there are enhancements you can make that will improve your experience as well.
The point is not that Cursor isn’t helpful vs. the old paradigm. The point is that since the latest releases and Claude’s thinking model were introduced, the slow pool has gotten so bad that the application is not as good as before.
Using Claude Sonnet 3.7 caused an immediate loss of significant functionality, so I stuck with 3.5. I think that is still the best option for Cursor.
It’s going to take some time before Cursor integrates 3.7 Sonnet with their system, and even then, it’s going to have to be a pretty major improvement to make it worth the higher cost of tokens.
Something I found really useful recently: tell Cursor to give you feedback, but not to make any code changes. Then, after its feedback, ask it to take another look at what it just told you – is there anything it left out? Anything that could be done better? Then you are more likely to get an edit that is going to be functional.
This results in far fewer errors and increases its probability of figuring out a particularly sticky problem, since you’re effectively enforcing a kind of pseudo-thinking step.
I also frequently ask the model to make the changes we’ve been discussing with a minimum of code changes, or “really tight coding”, and that usually prevents situations where extra code gets removed when you don’t want it to.
I just spent 5 bucks just to revert all the changes because of hallucination. The prompting was spot-on, though; I just let it do its thing for 2-3 prompts, trusted it, then tried solving the errors and realised it wasn’t getting anywhere. I’ve been working with Cursor for 6-8 months. This is the worst it’s been in all that time.
Maybe you need to go back to 0.45 and Sonnet 3.5.
As a Pro user, to me the main value in Cursor (besides the discounted tokens vs. the API, even with usage-based pricing) is Cursor’s local models that handle long-context compression and applying changes. These I have found to be fairly reliable, as long as I am diligent about good practices (see below).
The alternatives are just not as capable for the price: either they don’t have the good apply/agent models, or they don’t effectively compress the context, burning tokens like crazy.
I should note, however, that I have downgraded to 0.45 and I am sticking there until things settle on the newer versions.
What I do is:
- At the start of every chat, explicitly @ the specific rules I want to use (I find that Cursor doesn’t apply them well)
- Use the Cl*ne memory-bank prompt as a rule
- Ask for a comprehensive design doc first, then give the agent the green light to code.
- If the iteration on the design doc is too long, start a new chat and @ the design doc
- Frequently ask for updates on the design doc, a progress/status file, and related system-wide design docs. Ask it to include things it’s tried and why they didn’t work.
- Frequently start new chats with the updated design docs. All of the LLMs are subject to “forgetting” with very long contexts.
- You can ask it to read a large codebase initially, but you can’t use that for coding, only design, due to LLM forgetting. With strong types and linter errors, you can catch a lot of dependencies that might not be in the context directly.
- Excessively long contexts increase your likelihood of very high latency and failed requests.
- Your results will only be as good as your prompt AND your ability to evaluate and correct the results. You still need to be a good engineer with a sharp eye.
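As an illustration of the first few points above, the rules file you @-mention at the start of each chat might look something like this – a hypothetical sketch of what such a file could contain, not an official template:

```markdown
# project-rules.md — @-mention this at the start of every chat

## Principles
- Follow SOLID/DRY/KISS; prefer small, single-purpose modules.
- Make the minimum change needed; do not refactor unrelated code.

## Workflow
1. Produce or update the design doc (design.md) before writing any code.
2. Record approaches that were tried and why they failed in progress.md.
3. When the context gets long, stop and ask me to start a fresh chat.
```

Keeping this in a file (rather than relying on Cursor auto-applying rules) means you can see exactly what the model was given at the start of each chat.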
Yes, I agree – it can’t seem to get anything right anymore. I tell it to make one simple change by moving a function to a new tab. I have a very modular code base and overall great coding practices in my project, yet Cursor Agent proceeds to create 3,400 lines of new code across 50 tool calls. WTF???
Every second or third send I’m getting:
“Connection failed. If the problem persists, please check your internet connection or VPN (Request ID: 81c6e1ec-8cc1-43a0-8188-2189c5b8d63e)”. It’s really annoying that you can’t even reliably send a request, even though I have a reliable fiber-optic connection (200Mb/30Mb). Any Python script that uses PyTorch I also have to execute in an external terminal or in warp.app, because of a linking error in Cursor. I’ve already unsubscribed, but I can’t even use it reliably until the end of my subscription.
It keeps hanging on “Generating”
After closing the whole program and restarting, it works once, then hangs again.
It became simply unworkable.
Pity.
What is going on with window management in 48? I can’t change the sidebar. Why are they trying to lock down settings as if they made VS Code? This is a GPT wrapper – start acting like one. I don’t even know where my close and minimise buttons have gone on Mac.