The models are lazy and obtuse, likely to increase cost to the user

It seems rather evident that the models are using the least amount of resources possible: cutting corners, being obtuse on purpose, and overall lazy and averse to any effort.
They avoid reading a file in its entirety, they avoid following the trail of import paths and reasoning beyond the immediate snippet, and they don't even seem to want to gain context for the suggestion they are offering.
They don't even check the file they are belligerently editing beforehand. The result is that they generate only the code needed for that one task and write it to the file, and because they haven't bothered to inspect said file, the revision deletes everything that previously existed. This happens over and over and over, even when quoting the codebase or filenames using @.
They also seem to avoid reading the documents uploaded in the settings. And if they do, they cut straight to the first words resembling what you asked about and just assume that's the answer. I constantly have to scold the model for importing deprecated Web3 imports and stop it from assuming I'm using an older version, even though it has the documentation for my exact version loaded and indexed in the features area and referenced by me repeatedly. I even have it "read" the docs, it suggests some non-existent imports, I correct it, and it comes back with a second set of completely wrong fantasy imports. I have to make fun of it so that it will actually read the ■■■■ docs.

Frustrating, because if it wasn't throttled and restricted to the absolute max it would be really great. I have seen glimmers of its ability, and it is amazing; those glimmers just only seem to happen at 4:30 am, for two responses.

3 Likes

One tip I have is to describe imports and versions in .cursorrules
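For example, something like this in .cursorrules can work (a sketch only — the library, versions, and rule wording below are illustrative placeholders, not taken from anyone's actual project):

```
# Example .cursorrules — versions are placeholders; substitute your own.
This project uses web3.js 4.x. Never suggest imports or APIs from the
deprecated 1.x line; check the v4 docs indexed in this project first.
Before editing any file, read it in full. Never replace a file's
existing contents with only the newly generated code.
Follow import paths to confirm a symbol exists before using it.
```

Plain natural-language rules like these get prepended to the model's context, so being explicit about versions tends to stop it from reaching for older APIs.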

but I agree, Sonnet often deletes everything and replaces it with a single edit

2 Likes

Hey, I first want to assure you we have in no way programmed Cursor to be purposely lazy to increase your usage!

Even within Cursor, LLMs still suffer from the same issues they always do at the moment. This can mean that, especially in long conversations, the LLM can be less responsive to its context and previous chat history.

I’d highly recommend routinely starting a new composer session when you can, especially when changing what feature or area of your codebase you are working on, as this clear-cut distinction can really help the LLM stay on topic and informed about the structure of your code.

We are always trying to improve this in our updates and releases, but Cursor is still limited by the pitfalls of LLMs themselves, so working with them can take some skill to avoid issues like yours.

3 Likes

I apologize if that read like I was accusing you guys of manipulating the models in this way; I believe that would be companies like OpenAI, Anthropic, etc. Sorry if that wasn't clear.
Starting a new composer has been the only way to get through the large codebase. The staggering amount of bugs and required troubleshooting is overwhelming; every single implementation seems to be done incorrectly, and it's crazy how long I have been working through the issues. If there were a way to get these models to actually investigate, consider context, and follow import paths to ensure correct integration and usage, it would be a game changer.

3 Likes

Since some time in January, Cursor (Sonnet 1022 / agent mode) has become so lazy for me that I'm repeatedly getting upset while using it. I just cancelled my subscription.

Do you think it might make sense for Cursor to give a heads up alerting the user when the relevant files of the codebase have too many lines for premium models to process efficiently?

I'm currently experiencing an issue where it looks like the model is repeatedly dropping the ball, racking up extra premium credit usage unnecessarily. Another guess is that the CSS file simply has too many lines.

Instead of using ‘@codebase’ I tried adding only the relevant files individually, but it's still dropping the ball and wasting my premium credits on this relatively simple thing.

It was doing fine earlier today, but suddenly it can’t solve this straightforward request I’m asking it to do?

Hey, we could possibly do with some better visibility here, but there’s not an easy way of detecting if/when the AI is performing worse than it should be.

Can you confirm that starting a new composer restores the usual quality you expect?

1 Like

Hello, I fully agree

detecting if/when the AI is performing worse is challenging here.

So, in my project I was fixing a straightforward UI-related bug, and the AI wasn't figuring out the simple fix. Today I used Chat/Composer and put more effort and energy than usual into really guiding DeepSeek. Eventually I was able to figure out the UI-related bug that had been wasting so many pro credits over the past few days. It turned out to really be something simple, as expected.

I always start a new composer, so I don't think that was it.

I also tried both Chat and Composer.

Maybe I need to be more sensitive to when DeepSeek is overcomplicating something simple, and sort of hold its hand with a more granular approach when I feel it tends to overcomplicate the task at hand.