Over the last few weeks my questions have been taking a very long time to be processed.
It seems that on Thursday I often began receiving errors saying the OpenAI API was not available
After a couple of hours the connection was restored and my questions began to be processed very quickly
But the GPT-4 model has become very stupid
The model forgets the context after just one question
That’s because you’ve hit your limit of fast requests. So you’re using slow requests now, which are placed in a queue whose length can vary depending on traffic.
Fast and slow requests should have the same quality though. Do you have an example?
I experienced something similar. I’ll need to dig up the examples, but often the case was that, say, I linked a database schema, specifically said something like “you can find the schemas in the supabase/migrations directory”. Then hit Cmd Enter to chat with the entire codebase, and it answered the question correctly.
Then, literally right after that, I’d ask a question about querying the same schema (hitting Enter this time), and it would respond with “Without knowing the structure of your database...”. So I had to explicitly respond with “you have access to the schema, I just provided it to you”. Then it’d apologize and respond correctly.
I guess you could call that prompt design on my part, but it used to be aware of context like this before?
I think they added automatic switching between models. That would explain why the GPT-4 model became very stupid and doesn’t remember the context of the conversation after a couple of messages.
Oh you mean like if you hit the limit in gpt-4, it switches you to 3.5 so you get the answer quicker?
I always seem to “get in line” when I hit the limit, so it’s clearly GPT-4, at least on the frontend of it. But then it still may lose the context sometimes.
Cursor definitely began to respond very quickly
Since Thursday or Friday, I don’t remember exactly
And since then I’ve been getting very stupid answers
I can spend about thirty minutes explaining what I need and still not get an accurate answer
I tried to ask questions that I asked a few weeks ago and received incorrect answers, as if the model did not know the answers
I told the model that it was wrong and that it had known the answers before; the model apologized, agreed that its answer was incorrect, and then gave the correct answer
It looks like they are using two phases to process the question
In the first phase, they analyze your question and enrich it with restrictions or additional information; then the question goes to some model, perhaps GPT-3.5 or GPT-4
Perhaps the choice of model depends on the service load, or on your remaining quota of questions for the fourth model
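If that hypothesis were true, the routing might look roughly like the sketch below. This is entirely speculative: the model names, quota field, and load threshold are invented for illustration and are not Cursor’s actual code.

```typescript
// Hypothetical two-phase flow as the poster describes it: enrich the
// question, then pick a model based on quota and load. All names and
// thresholds here are made up for illustration.
type Model = "gpt-4" | "gpt-3.5-turbo";

interface RequestState {
  fastRequestsRemaining: number; // user's remaining fast-request quota (assumed)
  serverLoad: number;            // current service load, 0..1 (assumed)
}

// Phase 1: enrich the raw question with extra context or restrictions.
function enrich(question: string, context: string[]): string {
  return [question, ...context.map((c) => `Context: ${c}`)].join("\n");
}

// Phase 2: choose a model depending on quota and load (speculative).
function pickModel(state: RequestState): Model {
  if (state.fastRequestsRemaining > 0 && state.serverLoad < 0.8) {
    return "gpt-4";
  }
  // Fall back when the quota is exhausted or the service is busy.
  return "gpt-3.5-turbo";
}
```

A silent fallback like `pickModel` would match the symptoms described here: the UI still says GPT-4, but answers suddenly lose quality and context.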
I’ve been using Cursor for seven weeks, and I’m sure that questions were processed correctly before
Hmm, not exactly sure what’s going on here. You should only get GPT-4 responding if you’ve chosen GPT-4 in the chat dropdown.
If you turn off Privacy mode in settings and send us a screenshot, we’ll be able to look at what went wrong with your prompt. Certainly want to get this sorted.
Hey! Anecdotally, Cursor has gotten a LOT worse with codebase context over the last week. I haven’t been relying on it nearly as much as I was earlier. Half the time when I attach snippets the model can’t see the snippet, it will make the same suggestion over and over for debugging, and lots of other little things. On stable. No idea why; once again this is subjective experience, but it mirrors some other people’s experience on the forum.
mmm gotcha. we will seriously investigate why this is happening and fix it if we caused a regression. out of curiosity, is there a prompt/question that could serve as a regression test for us?
maybe a question about an OSS repo that we can index, and a question it now gets wrong. any replication will help a lot in fixing this issue asap!
Here’s a good recent example. I was asking for some CSS Flexbox suggestions. I linked the components, it can see the file, it can see that I’m using Tailwind classes and the Tailwind Variants library, which means classes are not written directly on the element.
And yet it suggested creating a style="" property on the element. Why in the world?
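For context on why that suggestion is so off: with Tailwind Variants, the styling lives in a single `tv()` definition, so an inline `style=""` attribute is exactly what the setup avoids. Below is a toy stand-in for `tv()`, not the real library’s implementation, just to show the pattern.

```typescript
// Toy stand-in for tailwind-variants' tv() helper (simplified, not the
// real library). Classes are composed in one definition and applied as
// class names, never as an inline style attribute.
function tv(config: {
  base: string;
  variants: Record<string, Record<string, string>>;
}) {
  return (selected: Record<string, string> = {}): string => {
    const parts = [config.base];
    for (const [group, value] of Object.entries(selected)) {
      const cls = config.variants[group]?.[value];
      if (cls) parts.push(cls);
    }
    return parts.join(" ");
  };
}

// Example definition in the style the poster describes.
const row = tv({
  base: "flex items-center gap-2",
  variants: {
    justify: { start: "justify-start", between: "justify-between" },
  },
});
```

So an answer that understood the linked components would suggest adding a variant here, not a `style=""` property on the element.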
By “laziness” I meant that the model refuses to write code and instead suggests that the user write the solution to their own question themselves
The problem I described in this topic is not only about stupidity, but also about loss of context and a forgetfulness in the model that was not observed before
The context is lost not only within a specific chat, but also when you ask a question in the current chat and then switch to another tab with code
I have noticed this as well. Not sure if it has something to do with GPT-4. But, I use Cursor primarily because it can provide the relevant context to GPT automatically. So, if the answers are missing the context I think Cursor team should investigate this.