I’ve recently noticed a drop in performance after updating to 0.42.2.
The responses have become noticeably shorter and less comprehensive.
When I asked about this change directly in the chat, it gave an unexpected explanation: the AI claimed it had received instructions during our conversation to give shorter responses. However, I can confirm that I never made such a request.
There seems to be a discrepancy between the actual conversation and what the AI “remembers” or has been instructed to do.
Where could these supposed instructions be coming from if not from me?
When explicitly requested, the AI does provide more detailed responses. However, the overall quality of these responses has decreased compared to its previous performance.
For example, when I asked it to rewrite an entire class, some methods were simply omitted from the new version.
Before the drop:
I ask Cursor to define and implement a solution for something.
It does it.
It works.
After the drop:
I ask Cursor to define and implement a solution for something.
It does part of it and leaves comments for the parts that still need to be done (something like the sketch below).
Obviously, it does not work.
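To illustrate the kind of output I mean, here is a minimal, hypothetical sketch; the class and method names are invented for this example, not copied from an actual Cursor session:

```python
# Hypothetical illustration of a "partial" rewrite: the names below are
# made up for this example, not taken from a real session.
class ReportGenerator:
    def __init__(self, records: list[dict]):
        self.records = records

    def summarize(self) -> dict:
        # One method gets fully implemented...
        return {"count": len(self.records)}

    def export_csv(self, path: str) -> None:
        # ...while others are left as stubs with a comment.
        # TODO: implement CSV export here
        pass

    def export_pdf(self, path: str) -> None:
        # TODO: implement PDF export here
        pass
```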
That is the main one I use.
And that is where I’ve noticed this problem.
It could be happening elsewhere, but I wouldn't notice as easily because non-Claude 3.5 models make up only 10% of my total usage.
The other 90% is Claude 3.5 Sonnet.
I think we as a community would have much greater insight into these perceived performance peaks and valleys if we could actually see what Cursor is sending to the AI.
I'm starting to think many of these problems are caused by Cursor's prompt revisions.