It seems to me that the fourth model has become very stupid

Over the last few weeks my questions have been taking a very long time to process.
Then on Thursday I often started getting errors that the OpenAI API is not available.

After a couple of hours the connection was restored and my questions began to be processed very quickly

But the GPT-4 model has become very stupid
The model forgets the context after just one question

Now I went to the Cursor settings (Cursor - The AI-first Code Editor)

And I saw what’s in the screenshot

I haven’t used GPT-3.5 for over a month since I bought a pro subscription

Are these problems interrelated?
Did you ruin something?

The results of working in Cursor are catastrophically bad; it’s like working with a very stupid junior developer.

Or is this a penalty for frequent use of GPT-4?


That’s because you’ve hit your limit of fast requests. So you’re using slow requests now, which are placed in a queue whose length can vary depending on traffic.

Fast and slow requests should have the same quality though. Do you have an example?

I experienced something similar. I’ll need to dig up the examples, but often the case was that, say, I linked a database schema and specifically said something like “you can find the schemas in the supabase/migrations directory”. Then I hit Cmd Enter to chat with the entire codebase, and it answered the question correctly.

Then literally right after that I’d ask a question about querying the same schema (hitting Enter this time), only for it to respond with “Without knowing the structure of your database...”. So I had to explicitly respond with “you have access to the schema, I just provided it to you”. Then it’d apologize and respond correctly.

I guess you could say it’s a prompt-design issue, but it used to be aware of context like this previously?

Yes, they definitely optimized Cursor.

I think they added automatic switching between models; that would explain why the GPT-4 model became very stupid and doesn’t remember the context of the conversation after a couple of messages.

Oh, you mean like if you hit the GPT-4 limit, it switches you to 3.5 so you get the answer quicker?

I always seem to “get in line” when I hit the limit, so it’s clearly GPT-4, at least on the frontend of it. But then it still may lose the context sometimes :person_shrugging:

Cursor definitely began to respond very quickly
Since Thursday or Friday, I don’t remember exactly

And since then I’ve been getting very stupid answers
I can spend about thirty minutes explaining what I need and still not get an accurate answer

I tried to ask questions that I asked a few weeks ago and received incorrect answers, as if the model did not know the answers

I told the model that it was wrong and that it knew the answers before; the model apologized and agreed that its answer was not correct, then gave the correct answer.

It looks like they are using two phases to process the question

In the first phase, they analyze your question and enrich it with constraints or additional information; then the question goes to some model, perhaps GPT-3.5 or GPT-4.

Perhaps the choice of model depends on the workload of the service, or on your remaining quota of GPT-4 requests.
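As a purely hypothetical sketch of what I suspect (made-up names and thresholds, not Cursor’s actual code), the routing could look something like this:

```ts
// Hypothetical two-phase routing sketch; names, thresholds, and structure are
// invented for illustration and do not reflect Cursor's real implementation.
type Model = "gpt-4" | "gpt-3.5-turbo";

interface RoutingContext {
  fastRequestsRemaining: number; // assumed: user's remaining fast GPT-4 quota
  serviceLoad: number;           // assumed: current backend load, from 0 to 1
}

// Phase 1: enrich the raw question with constraints and retrieved context.
function enrichPrompt(question: string, codebaseContext: string[]): string {
  return [question, ...codebaseContext.map((c) => `Context:\n${c}`)].join("\n\n");
}

// Phase 2: pick a model based on remaining quota and current load.
function pickModel(ctx: RoutingContext): Model {
  if (ctx.fastRequestsRemaining <= 0 || ctx.serviceLoad > 0.8) {
    return "gpt-3.5-turbo"; // the suspected silent downgrade
  }
  return "gpt-4";
}
```

If something like this were in place, it would explain both the faster responses and the weaker answers.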

I’ve been using Cursor for seven weeks, I’m sure that before questions were processed correctly

I asked GPT-4 what the latest Java LTS version it knows is; the answer was 11, whereas previously the answer was 17.

There was also a question about Spring Boot: the current answer is 2.5.5, whereas previously the model answered 3.

GPT-3.5 gave me the same answer.
Has the service limited me so that I can no longer ask questions of GPT-4?

If I’m not mistaken, there’s a deal out there: Upgrade to an $80/month plan and get 2k fast GPT-4 requests per month.

I’m not talking about fast requests here; I’m saying that slow requests to GPT-4 are possibly being processed by GPT-3.5.

Hmm, not exactly sure what’s going on here. You should only get GPT-4 responding if you’ve chosen GPT-4 in the chat dropdown.

[Screenshot: the model dropdown in chat]

If you turn off Privacy mode in settings and send us a screenshot, we’ll be able to look at what went wrong with your prompt. Certainly want to get this sorted.


Hey! Anecdotally, Cursor has gotten a LOT worse with codebase context as of the last week. I haven’t been relying on it nearly as much as I was earlier. Half the time when I attach snippets, the model can’t see the snippet, will make the same suggestion over and over when debugging, lots of other little things. I’m on stable. No idea why, and once again this is subjective experience, but it mirrors some other people’s experience on the forum.

mmm gotcha. we will seriously investigate why this is happening and fix it if we caused a regression. out of curiosity, is there a prompt/question that could serve as a regression test for us?

maybe a question about an OSS repo that we can index and ask, which it now gets wrong. any replication will help a lot in fixing this issue asap!

Yes, we really want to fix this ASAP - what is your email/github that you’ve logged in with for Cursor?

are these normal questions or codebase context questions?

Jesus Christ, the “fourth model” gives such garbage answers.

The questions are both normal and with codebase context.

My Privacy mode has been turned off since I started using Cursor, to help you make the product better.

I am registered on the forum under the same email that I’m registered with in Cursor.

So you can check everything. Also, yesterday I submitted a Report Issue from the Cursor IDE; later Jakob wrote to me and I answered him.

Guys, when can we expect things to improve?

Here’s a good recent example. I was asking for some CSS Flexbox suggestions. I linked the components; it can see the file, it can see that I’m using Tailwind classes and the Tailwind Variants library, which does not entail writing classes directly on the element.

And yet it suggested adding a style="" attribute on the element. Why in the world?
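For context, here is roughly what the Tailwind Variants pattern looks like (the component and variant names here are hypothetical); the classes are defined in tv() and passed via className, so an inline style="" doesn’t fit the setup at all:

```ts
// Rough illustration of the tailwind-variants pattern (hypothetical names).
import { tv } from "tailwind-variants";

const card = tv({
  base: "flex items-center justify-between gap-4",
  variants: {
    padded: { true: "p-4", false: "p-0" },
  },
});

// card({ padded: true }) -> "flex items-center justify-between gap-4 p-4"
// This string goes into className, not into an inline style attribute.
```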

That said, there’s this tweet (https://twitter.com/ChatGPTapp/status/1732979491071549792), so I wonder whether it’s truly a Cursor issue or a ChatGPT issue? Could it be both?

[Screenshot]


I think that “laziness” means the model refuses to write code and instead suggests that the user write the solution to their own question themselves.

The problem that I described in this topic concerns not only stupidity, but also loss of context and forgetfulness of the model that was not observed previously.

The context is forgotten not only within a specific chat, but also if you ask a question in the current chat and then switch to another tab with the code.

I have noticed this as well. I’m not sure if it has something to do with GPT-4. But I use Cursor primarily because it can provide the relevant context to GPT automatically, so if the answers are missing the context, I think the Cursor team should investigate this.

darinkishore on GitHub! Sorry, didn’t realize you were replying to me.