The context for my messages seems to retain obsolete information

I have a chat going where I'm working through a unit test. I paste the failure message into the chat along with the relevant code implementation and discuss it with the model. But recently, when I send the model a new failure message, it still seems to be trying to fix the old failures in addition to the new one, as if it were still being prompted with the previous failures and doesn't grasp that I've moved on. FWIW, I never explicitly said that the error message had changed, but until recently I hadn't had to; the model just got it. So I think this is a relatively new problem.

Seeing this happen with o1-mini.

I am pretty sure I'm NOT seeing this issue with other models, like gpt-4o-mini and claude-3.5-sonnet.

This sounds strange! Is there anything you can do to make it "reset"? I'm trying to figure out what might cause this.

The only thing I know of to make it "reset" is to literally start a new chat thread.

Could you try either:

  1. Clicking the previous message and then pressing Enter to restart the chat from there
  2. Cancel + generate to trigger it again

Let me know what happens!

I believe I tried one or both of these things, but it didn't help, unfortunately. I should also say that today I have been using o1-mini pretty reliably; I think this issue only happened to me maybe once, out of many chats.

I don't know if this is useful or related, but just based on observation, it feels like chats have a maximum length beyond which the mini models become less and less useful, producing less helpful and less coherent advice. Maybe that's totally off base given how context is maintained/stored throughout a chat, but I thought I'd add the qualitative feedback just in case.
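Just to make that speculation concrete: I have no idea how the client actually assembles context, but if it keeps the full transcript and only trims the oldest messages once some token budget is exceeded, stale failure messages would stay in the model's view for a long time. A toy sketch of that guess (all names, numbers, and the token estimate here are made up):

```python
# Speculative sketch: naive chat-context assembly with oldest-first trimming.
# If the budget is generous, old failure messages never get dropped, so the
# model keeps "seeing" them alongside the new failure.

def estimate_tokens(text: str) -> int:
    # crude approximation: ~4 characters per token
    return max(1, len(text) // 4)

def build_context(messages: list[str], budget: int) -> list[str]:
    kept: list[str] = []
    used = 0
    # walk the transcript newest-first, keeping whatever fits in the budget
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = [
    "old failure: test_foo assertion error",  # ~9 tokens
    "fix attempt #1",                         # ~3 tokens
    "new failure: test_bar timeout",          # ~7 tokens
]
print(build_context(history, budget=12))
# -> ['fix attempt #1', 'new failure: test_bar timeout']
```

With a tight budget the old failure falls out of view, but with a large one it lingers, which would match what I'm seeing.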