I’m sorry, but I still don’t understand what you’re saying here. Maybe English is not your first language?
I know it’s just an example, but 10 minutes describing your situation! I’ve never spent that long, or anywhere near that long, describing my situation. 1 minute max.
So breaking this down:
You’re saying that you ask a very long question with lots of code, and you get a bad answer from Cursor.
You then tell Cursor it’s wrong, and it asks another question to clarify your original one. Then it gives you a better answer.
Okay, so this is kind of good. The best option is getting a good answer from the start, but at least it recovers once you’ve told it you’re not happy.
Then you say you have a constant loss of context, so this is obviously not good. It’s not remembering the history of things you say to it, I assume. I mean, if you spent 10 minutes writing a question and submitted hundreds of lines of code, I suppose it’s expected to forget some stuff at some point; a conversation that long will eventually outgrow the model’s context window.
Then you say, ‘I have no reason to believe that the cursor directly transmits your question to the fourth model’. I think you’re saying that at this point you’ve gone back and forth with the LLM 4 times and the original question you asked is no longer being sent? I guess that’s expected.
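For context, chat LLM APIs are stateless: the model remembers nothing between calls, so the client has to resend the whole conversation on every turn. Here’s a toy Python sketch of that loop, where `call_llm` is a made-up stand-in, not Cursor’s actual API:

```python
# Toy sketch of a stateless chat loop. call_llm is a hypothetical
# placeholder for whatever API the client actually uses.

history = []

def call_llm(messages):
    # Stand-in for the real API call; just echoes how much it was sent.
    return f"(model reply based on {len(messages)} messages)"

def ask(question):
    history.append({"role": "user", "content": question})
    answer = call_llm(history)  # the FULL history goes out every time
    history.append({"role": "assistant", "content": answer})
    return answer
```

So whether your original question is ‘still being sent’ depends entirely on whether the client keeps it in that history list.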
‘Everything is much more complicated here’. I’m not sure what you mean. Maybe the way Cursor behaves is more complex for some reason at this point.
‘What I observed made me think that my questions did not fall into the fourth model at all, the answers came either from the cursor model or from GPT-3.5’. There are only two models Cursor uses, GPT-4 and GPT-3.5, so I assume you mean the 4th interaction with the original model (rather than a literal fourth model). You say ‘the cursor model’, but Cursor is the name of the application, not a model. I assume you mean that the original question you sent has been transformed into something completely different by the time you’ve interacted with the LLM a few times, at which point you think the answers are coming from either GPT-3.5 or GPT-4. Yeah, okay.
Thank you for giving an example!
From what you’ve sent, it sounds like it’s behaving just like any other LLM. You go back and forth through a conversation until you get the answer you want. The LLM will ask you for clarity on your initial question, but it eventually loses context as the conversation goes on.
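And the forgetting part has a mundane explanation: models have a fixed context window, so once the history gets too long the client (or the service) has to drop the oldest turns to fit. Here’s a minimal sketch of that trimming, assuming a crude 4-characters-per-token estimate; none of this is Cursor’s real implementation:

```python
# Hypothetical history trimmer. The token estimate and limit are rough
# assumptions for illustration, not Cursor's actual behaviour.

MAX_TOKENS = 4096       # e.g. GPT-3.5's original context window
CHARS_PER_TOKEN = 4     # crude rule-of-thumb estimate

def estimate_tokens(message):
    return len(message["content"]) // CHARS_PER_TOKEN + 1

def trim_history(messages, budget=MAX_TOKENS):
    """Keep only the most recent messages that fit in the token budget."""
    kept, used = [], 0
    for message in reversed(messages):  # walk newest to oldest
        cost = estimate_tokens(message)
        if used + cost > budget:
            break  # this message and everything older gets dropped
        kept.append(message)
        used += cost
    return list(reversed(kept))
```

Run your 10-minute question plus hundreds of lines of code through enough turns of that, and `trim_history` stops including the first message entirely, which would look exactly like the ‘constant loss of context’ you’re describing.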