I have noticed that when I try to use Cursor to process multiple files, the AI does a good job on the first few files, but later on it no longer knows what the original goal was and decides on its own what the goal should be. That is true even when the original prompt consists of only two lines (one of them “DO NOT DO ANYTHING YOU HAVE NOT BEEN ASKED FOR!”).
I think the problem lies mostly with the AI models, not with Cursor itself. However, Cursor could work around it.
Let me explain. The first time the AI is asked, it only receives the 1st prompt. If there is a follow-up conversation, the whole conversation is sent back to the AI: 1st prompt + 1st output + 2nd prompt. On the next turn the AI receives 1st prompt + 1st output + 2nd prompt + 2nd output + 3rd prompt, and so on. Since I am asked every few processed files whether I am happy with the result, the conversation grows very quickly. No wonder the AI gets confused after a few iterations and stops doing its job (probably because it hits its context window).
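Just to illustrate the accumulation, here is a minimal sketch in Python of a generic chat-style API where every request resends the full history. The function names and message format are only illustrative, not Cursor's actual internals:

```python
# Minimal sketch (NOT Cursor's actual API) of how a chat-style request
# accumulates context: every turn resends the entire conversation so far.

from typing import Dict, List

Message = Dict[str, str]
history: List[Message] = []

def call_model(messages: List[Message]) -> str:
    """Stand-in for a real model call; here it just reports how much it was sent."""
    return f"(reply after reading {len(messages)} messages)"

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = call_model(history)  # the WHOLE conversation goes out every time
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Process file_1 according to my instructions."))  # model sees 1 message
print(ask("Now process file_2."))                            # model sees 3 messages
print(ask("Now process file_3."))                            # model sees 5 messages
# Turn N sends 2N - 1 messages, so the original instructions keep sliding
# further back in an ever-growing context.
```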
I would appreciate the possibility to send the AI my prompt without the previous conversation. In my case of processing multiple files, it could be handled on Cursor's side something like this:
- Figure out which files will be processed.
- Loop through all the files and, for each one, send the AI that file plus my polished prompt, without sending the whole conversation (see the sketch after this list). It is no problem if I am asked every few processed files whether everything is still OK (that may benefit both sides). The only thing that matters is that the original goal is not lost through the injection of frequent intermediate conversation.
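For illustration only, here is a rough Python sketch of what I imagine happening under the hood. All of the names (`collect_files`, `call_model_without_history`, `ask_user_to_confirm`) are hypothetical, not Cursor functions; the point is simply that each file is sent with a fresh context:

```python
# Hypothetical sketch of the proposed "no history" batch mode.
# None of these function names exist in Cursor; they only illustrate the flow.

from pathlib import Path
from typing import List

def collect_files(root: str, pattern: str) -> List[Path]:
    """Step 1: figure out which files will be processed."""
    return sorted(Path(root).glob(pattern))

def call_model_without_history(prompt: str, file_text: str) -> str:
    """Stand-in for a model call that receives no conversation history."""
    return file_text  # placeholder: a real call would return the edited file

def ask_user_to_confirm(files_done: int) -> bool:
    """Stand-in for the occasional 'are you still happy with the result?' check-in."""
    return True

def process_files(root: str, pattern: str, polished_prompt: str,
                  checkpoint_every: int = 5) -> None:
    """Step 2: loop over the files, sending ONLY the prompt + the current file."""
    for i, path in enumerate(collect_files(root, pattern), start=1):
        # Fresh context every time: polished prompt + this file, no prior turns.
        reply = call_model_without_history(polished_prompt, path.read_text())
        path.write_text(reply)

        # An occasional check-in is fine, as long as it never pollutes the
        # context that goes out with the next file.
        if i % checkpoint_every == 0 and not ask_user_to_confirm(i):
            break
```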
I can imagine more use cases where no-history prompting would be useful. So I would expect one of the following solutions:
a) a magic instruction when prompting, such as “Ignore conversation history”,
b) a checkbox “with/without conversation history”, or
c) an automatic decision whether it is beneficial to send the AI the whole conversation history or not.