Codebase composer agent

Is there any way for the agent in Composer to read and understand the entire codebase? When working on big projects, the history or progress in the chat (the composer, whatever you want to call it) gets too long, and we have to open another one. But when opening another composer, that data, the context, a lot of strategy, and especially when there's so much code, it's like you have to semi-restart so many things. Maybe I'm missing something?

The LLM’s performance can start to drop when the conversation gets too long, as too much context and chat history can “confuse” the AI and cause it not to return good answers to your questions.

Even if the Composer did not have a length limit, I’d still recommend starting a new Composer every so often to ensure the context the AI has is concise and on topic regarding the changes you are trying to make.

A good way to simplify this is to write a markdown file (or several) that explains your project structure, and @ it when you start a new composer, so the AI can get up to speed quickly!
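For illustration, here's a minimal sketch of what such a file could look like. The project, file names, paths, and headings are all hypothetical, not a required format:

```markdown
<!-- project-context.md: hypothetical example of a project summary to @-mention -->
# Project overview
A REST API for order management, built with TypeScript and Express.

## Structure
- src/routes/    – HTTP route handlers, one file per resource
- src/services/  – business logic (pricing, inventory)
- src/db/        – database access layer

## Conventions and strategies already implemented
- Validation lives in src/schemas/ – do not add new validators elsewhere.
- Retry/backoff for external calls lives in src/lib/retry.ts – reuse it.

## Current focus
Migrating the inventory service to the new pricing strategy.
```

Then, when you start a fresh Composer, you can @ that file so the agent knows what already exists before it writes anything new.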

Yes, but in terms of technical structure and detailed strategy, each composer, let's say, would have a plethora of strategies being implemented. When starting a new one, when moving on to, let's say, the next part of the structure/infrastructure, the context of the strategies in different files wouldn't be up to speed. So now the agent would proceed without being aware of all that contextual info from so many files, and then go on and do random things without accounting for what may already be done. In this process it also creates new files unnecessarily for things and strategies already done and implemented. How can this be solved? I don't believe this would be something difficult to do on your guys' end. What can I do, and what do you recommend?

We’re looking to add better features around this: a “memory” that syncs with your project and works across your Composer sessions would work well here!

When can we expect something like this? An estimated time frame, perhaps?

This is a naive question, but could you spawn an additional [thing] on your end where multiple agents listen to different segments of the context window, say a certain number of tokens, then one stops and another spawns at that timestamp… and when the context starts to fail, you pass the query up to the other agents, and then the context_tail fades out at some other velocity?
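If I'm reading the idea right, it's something like sharding the conversation history across several agents by token range and escalating a query through them when the active window degrades. Here's a rough, purely hypothetical sketch of that data flow; the `SegmentAgent` class, the segment size, and the escalation rule are all my assumptions, not anything Cursor actually exposes:

```python
from dataclasses import dataclass, field

SEGMENT_TOKENS = 4_000  # assumed segment size; not a real Cursor setting

@dataclass
class SegmentAgent:
    """Hypothetical agent responsible for one token-range slice of history."""
    start_token: int
    messages: list[str] = field(default_factory=list)
    tokens: int = 0

    def answer(self, query: str) -> str | None:
        # Stand-in for a real model call made over only this segment's context.
        if any(query.lower() in m.lower() for m in self.messages):
            return f"answered from segment starting at token {self.start_token}"
        return None  # this segment can't help; escalate to an older one

class SegmentedContext:
    """Spawns a new agent whenever the active one fills its token budget."""
    def __init__(self) -> None:
        self.agents: list[SegmentAgent] = [SegmentAgent(start_token=0)]

    def add_message(self, text: str) -> None:
        active = self.agents[-1]
        cost = len(text.split())  # crude token estimate, good enough for a sketch
        if active.tokens + cost > SEGMENT_TOKENS:
            # The active agent "stops"; a new one "spawns" at this timestamp.
            active = SegmentAgent(start_token=active.start_token + active.tokens)
            self.agents.append(active)
        active.messages.append(text)
        active.tokens += cost

    def query(self, q: str) -> str:
        # Walk from newest to oldest segment, "passing the query up"
        # until some agent's slice of context can answer it.
        for agent in reversed(self.agents):
            result = agent.answer(q)
            if result is not None:
                return result
        return "no segment had relevant context"
```

The open question in a design like this is the part you describe as the context_tail fading out "at some other velocity": how aggressively the older segments should be summarized or dropped rather than kept verbatim.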