Why does Cursor seem "dumb" lately?

Has the model changed or something?

5 Likes

Do you have an example?

For single-shot chats things seem to work pretty well. But in multi-shot chats, it seems like GPT "forgets" what was previously mentioned earlier in the conversation? gpt-4-32k doesn't seem to have this problem.

Yes, I'm using the default/paid Cursor model.

Lately, chat.openai.com is doing a better job than Cursor, imho.
I'm having to bounce between the two.

2 Likes

You'd probably be better served using your own API key with the gpt-4-1106-preview model. Works great for me! I spam it all day and never pay more than $30/mo in API fees.
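
If it helps, this is roughly what that looks like when calling the model directly with the openai Python package, as a minimal sketch (the prompt is just an example; adjust for your own setup):

```python
# Minimal sketch: calling gpt-4-1106-preview with your own API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "user", "content": "Refactor this function to use asyncio: ..."},
    ],
)

# Print the model's reply
print(response.choices[0].message.content)
```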

With a few file imports in the context, I easily rack up 10 euros a day.

1 Like

Cursor is not usable at all, I feel. I have spent something like ~200 of my monthly fast requests just going back and forth on one thing, to the point where I had to rebuild my entire project from scratch on my own.

I don't blame the team, it's probably the core GPT-4 model getting dumber. I just think that the product I subscribed to, and the one I now get to use, are completely different.

1 Like

Same story bro

For the past few days I've been experiencing the same thing. I cannot get the AI to do anything useful, not even small and easy fixes. Cursor is totally useless for me in its current state. When using 3.5, I don't even get explanations anymore, just a few lines of code that match my existing implementation, with no explanation. It's a shame!

1 Like

I guess the Mistral model is out now, so before long you can just use a self-hosted model that will be GPT-4 equivalent. I do wish you didn't have to import file context by default, because it probably makes my API bill 2-3x higher. I actually spent $65 last month on GPT-4 calls in Cursor.

1 Like

I agree, I've also been noticing the same thing.

1 Like

Ack, we are looking into this to see what's going on. If you can, screenshots are super helpful for pinning down the problem.

1 Like

If I may, though I don't have a screenshot: a lot of the time, if I give it a function and ask it to refactor it to use a different framework in Python, it only gives me an explanation of what needs to change instead of refactoring the function. If I'm lucky and I give it 3 functions to refactor, it will do one function properly and leave placeholders in the other 2 that basically say "similar implementation to the first function". Speaking about Python btw, and this happens in the chat (Ctrl + L) page.

Ditto, placeholders have gotten out of control, imho

Maybe we need a mode where the model gravitates toward full implementations?

Deployed a couple of changes yesterday to the prompt prioritization that we think could have caused some of this. In the future, we want to change the UI to give you a sense of what exactly went into the prompt.

If people have updated screenshots from the past day or two, that's super helpful for debugging any remaining issues.

2 Likes

I've encountered a similar issue. Could you please review the post I just published? Noticed a regression in Cursor AI

1 Like

Is it just me, or did the "dumb" effect go away? I'm guessing the GPT model got upgraded or something on OpenAI's side @truell20?

1 Like