Quite unhappy with my latest experiences

Hello,

I’d like to share some quick feedback based on my recent experience with Cursor, as I’m quite unhappy with its behavior.

Many of my requests to fix or add something result in the chat or the composer deleting or modifying unrelated parts of the code, causing regressions almost every time.

I didn’t have this issue when I first started using it, and this change has been very frustrating.

I tried using a .cursorrules file, but it isn’t working either.
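For reference, the rules I tried are along these lines (a simplified sketch, not my exact file):

```
# .cursorrules (simplified sketch)
Make the minimal change needed to satisfy each request.
Do not delete or rewrite code unrelated to the request.
Preserve all existing functionality unless explicitly asked to remove it.
```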

Some of my projects include:

  • A Chrome extension
  • A small SaaS web application
  • Several modules for an ERP

Hey @ashvin-arkeup! Thanks for sharing.

Accidental removal of code is unfortunately a relatively common issue with all current LLMs, whether you’re working in Cursor, ChatGPT or anywhere else.

While it’s not possible for Cursor to prevent this entirely, they’ve definitely been making some progress with getting Agent mode to spot it for you and course-correct. Sometimes, though, you do have to keep an eye on the diffs and make sure there isn’t an unexpectedly large negative number (i.e. far more lines deleted than the change should need).

I seem to get this quite rarely now (he said, tempting fate) – it takes a bit of practice in steering the models though. I habitually use phrases like ‘minimal change’ and ‘without affecting existing functionality’ in my prompts, which helps.
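For example, a prompt along these lines (just a generic template – adapt the placeholders to your own files):

```
Fix <the specific bug> in <file>.
Make the minimal change needed: do not refactor, reformat,
or remove any existing functionality outside that scope.
```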

It’s also worth making sure you’re not overwhelming the available context, which increases the chances of the model trying to do too much at once and handing back functionally-reduced answers. Keeping your individual file sizes small can be a good habit here. Also, while Cursor’s ability to negotiate long chat/composer sessions for you has improved a lot in recent versions, I still try to start a new session for each major feature I want to work on.

In the case of Composer, if you see a suspect diff, the best thing you can do is use the Checkpoint restore feature.

This will let you go back to right before the damage was done and adjust your prompt (typically by adding something like “do this carefully and methodically and take care not to remove any existing functionality”).

Alternatively, sometimes (especially with Agent) I take the approach of saying “It seems like you removed almost all of our functionality in @fungibles.ts. Did you really mean to do that?” and the model will respond apologetically and try to repair from context. Be careful with this approach though: it’s fine for smaller fixes but if your file is very large, it may struggle to reconstruct it all. This ends up being another reason I try to keep all source files under about 400 lines, so they’re bite-sized pieces for the model. But at least you can still Checkpoint restore if that doesn’t work out.

Hope this is at least a little helpful. If your projects are starting to grow in size, you might also find some additional handy tips in An Idiot’s Guide To Bigger Projects (from back in October, but still relevant – new version coming soon).


Hey, thank you for taking the time to provide such a detailed answer. The guide is indeed very useful! I’ve run into many of the things you described, like the LLM responding with apologetic excuses. I’ve also created a context-ai.txt file where I store key context for my composer, and I use checkpoints a lot – such a great feature.

I’m learning to improve and optimize my AI-assisted coding, which has allowed me to develop applications outside my comfort zone.

The main focus of my feedback, though, was on the uncertainty that sometimes arises. Looking back, I think I should place more of the blame on the model itself rather than on Cursor.

One recent example: I asked it to remove duplicate classes (which it had itself created earlier, at my request) from a CSS file. I tried this with the composer using context, without context, and even with the chat feature – and every time, it ended up deleting about 70% of the file.
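To give a sense of scale, the file just had pairs like this (hypothetical names, heavily simplified), where only the second definition needed to go:

```css
/* hypothetical example – only the duplicate rule should be removed */
.card-header { font-weight: 600; padding: 8px; }
.card-header { font-weight: 600; padding: 8px; } /* duplicate */
```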

I distinctly remember being able to handle similar requests effortlessly just one or two months ago.

Thanks anyway!