Has the model changed or something?
Do you have an example?
For single shot chats things seem to work pretty well. But when using multi-shot chats, it seems like GPT "forgets" what was previously mentioned in the chat? gpt-4-32k doesn't have this problem?
Yes, I'm using the default/paid cursor model
lately, chat.openai.com is doing a better job than cursor, imho
I'm having to bounce between the two
You'd probably be better served using your own API key and the gpt-4-1106-preview model. Works great for me! I spam it all day and never pay more than $30/mo in API fees
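For what it's worth, here's a rough sketch of what calling that model with your own key looks like outside of Cursor (openai Python package, v1-style client; the prompt here is just an illustration, and inside Cursor you'd paste the key into the settings instead):

```python
# Rough sketch: calling gpt-4-1106-preview directly with your own API key.
# Assumes the key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Refactor this function to use asyncio: ..."},
    ],
)

print(response.choices[0].message.content)
```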
With a few file imports in the context, I easily rack up 10 euros a day.
Cursor is not usable at all, I feel. I have spent like ~200 of my monthly fast requests just going back and forth on one thing, to the point where I had to rebuild my entire project from scratch on my own.
I don't blame the team, it's probably the core GPT-4 model getting dumber. I just think that the product I subscribed to, and the one I now get to use, are completely different.
Same story bro
For the past few days I've been experiencing the same thing. I can't get the AI to do anything useful, not even small and easy fixes. Cursor is totally useless for me in its current state. When using 3.5, I don't even get messages anymore, just a few lines of code that match my existing implementation with no explanation. It's a shame!
I guess the Mistral model is out now, so before long you can just use a self-hosted model that will be gpt-4 equivalent. I do wish that you didn't have to import file context by default, because it probably makes my API bill 2-3x higher. Actually spent $65 last month on gpt-4 calls in Cursor.
I agree, I also have been noticing the same
Ack, we are looking into this to see what's going on. If you can, screenshots are super helpful for helping us pin down the problem.
If I may, I don't have a screenshot, but a lot of times, if I give it a function and ask it to refactor it to use a different framework in Python, it only gives me an explanation of what needs to change instead of refactoring the function. If I'm lucky and I give it 3 functions to refactor, it does one function properly and leaves placeholders in the other 2 that basically say "similar implementation to the first function". Speaking about Python btw, and this happens in the chat (Ctrl + L) panel.
Ditto, placeholders have gotten out of control, imho
Maybe we need a mode where the default is a full implementation?
Deployed a couple of changes yesterday to the prompt prioritization that we think could have caused some of this. In the future, we want to change the UI to give you a sense of what exactly went into the prompt.
If people have updated screenshots from the past day or two, that's super helpful in debugging any remaining issues.
I've encountered a similar issue. Could you please review the post I just published: Noticed a regression in Cursor AI
Is it just me? Or did the "dumb" effect go away? I'm guessing the gpt model got upgraded or something on the OpenAI side @truell20 ?