We need a context capacity indicator to see how much of the available context is left, so that we don't outgrow the context window and degrade the results.
That would be great, but using the full context with regular models is a very bad idea. The AI models start to hallucinate and get confused by parts of the context that aren't relevant.
Focused instructions work very well; just telling it to do everything, not so much.
What we need is to be able to see the context we are sending, as at the moment we have absolutely no idea what the AI is taking as context.
After the first few requests, everything is a black box.
That's almost impossible, as there is so much between you and the actual final AI processing: RAG for code, docs, requested websites, searches, the rules you add,… including all tool usage, which consumes tokens and AI requests.
Not even the first request is just what you have in your chat; it is much more.
We just need a simple indicator; it doesn't need to be too precise. Just a reference for when to open a new session.
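For a rough indicator like that, even a crude estimate is enough. A minimal sketch, assuming the common rule of thumb of roughly 4 characters per token for English text and an illustrative 200,000-token window (both assumptions, not real tokenizer or model figures):

```python
# Rough context-usage indicator based on character counts.
# ASSUMPTIONS: ~4 chars/token heuristic and a 200_000-token window;
# real tokenizers and model limits will differ.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 200_000

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def context_usage(messages: list[str], window: int = CONTEXT_WINDOW_TOKENS) -> float:
    """Fraction of the context window used by the conversation so far."""
    used = sum(estimate_tokens(m) for m in messages)
    return min(1.0, used / window)

def usage_bar(fraction: float, width: int = 20) -> str:
    """Simple text gauge for display next to the chat input."""
    filled = round(fraction * width)
    return f"[{'#' * filled}{'.' * (width - filled)}] {fraction:.0%}"
```

Such an estimate can't account for the hidden parts of a request (RAG results, rules, tool calls), but as a "time to open a new session" reference it doesn't have to be precise.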