Context boost to improve responses in long chats

If we assume a user keeps their chats relatively short and topic-focused, then a long chat may imply the user is having trouble getting to the result they want. One way to help is to make the LLM effectively smarter by passing in a system message that provides the context of what the user is trying to accomplish and the problems encountered so far.
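Mechanically, this might look like the sketch below: once a chat crosses some length threshold, distill the history into a short summary and attach it as an extra system message on the next request. The threshold, the summarizeHistory helper, and the message shapes are assumptions for illustration, not an existing API.

```typescript
// Sketch: inject a "context boost" system message once a chat gets long.
// All names, thresholds, and message shapes here are illustrative assumptions.

type ChatMessage = { role: 'user' | 'assistant'; content: string };

const LONG_CHAT_THRESHOLD = 20; // assumed cutoff for a "long" chat

// Hypothetical helper: distill the goal and the problems hit so far, e.g. by
// calling a cheaper summarization model or running simple heuristics.
async function summarizeHistory(history: ChatMessage[]): Promise<string> {
  return `Distilled goal and open problems from ${history.length} messages (stub).`;
}

// Build the boost only when the chat is long enough to suggest the user is stuck.
async function buildContextBoost(history: ChatMessage[]): Promise<string | undefined> {
  if (history.length < LONG_CHAT_THRESHOLD) return undefined;
  const summary = await summarizeHistory(history);
  return [
    'The user has been working on the task below for a while without reaching a result.',
    'Use this distilled context to give a more targeted answer:',
    summary,
  ].join('\n');
}

// Prepend the boost as a system message (or pass it via the provider's dedicated
// system parameter) before sending the next completion request.
async function messagesForNextTurn(history: ChatMessage[]) {
  const boost = await buildContextBoost(history);
  return boost
    ? [{ role: 'system' as const, content: boost }, ...history]
    : history;
}
```

One nice property of this approach is that the boost never appears in the user-visible transcript: the chat UI stays the same while the model sees the extra context.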

For example, let’s say a user is struggling to get Claude to write code that accurately detects audio file codec types in a NextJS application. Over the course of a long chat, a reader (or an LLM) could discern that:

1. The user needs a reliable detection function.
2. It needs proper separation of client and server logic.
3. The code needs to be properly typed.
4. The user wants the functionality added to existing files.
5. The problems are centered on specific codec types.
6. The problems are observed in certain browsers.
7. The user is using an HTML5 audio component.
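In this scenario, the distilled context might be represented as a small structured object and serialized into the boosted system message. The shape and wording below are illustrative assumptions, not a defined schema.

```typescript
// Hypothetical shape of the distilled context for the codec-detection example.
// Field names and values are illustrative, drawn from the points above.

interface DistilledContext {
  goal: string;
  constraints: string[];
  knownProblems: string[];
}

const codecChatContext: DistilledContext = {
  goal: 'Write a reliable function that detects audio file codec types in a NextJS app',
  constraints: [
    'Keep client and server logic properly separated',
    'Code must be properly typed',
    'Add the functionality to existing files',
    'Playback uses an HTML5 audio component',
  ],
  knownProblems: [
    'Detection fails for specific codec types',
    'Failures are observed in certain browsers',
  ],
};

// Serialized into the boosted system message for the next turn.
const systemBoost = [
  `Goal: ${codecChatContext.goal}`,
  `Constraints:\n- ${codecChatContext.constraints.join('\n- ')}`,
  `Known problems:\n- ${codecChatContext.knownProblems.join('\n- ')}`,
].join('\n\n');
```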

The desired user experience is for the IDE to feel “smarter” over the course of a chat.