Train a small model to compress the context

Editors today face two big problems. The first is that you have to train a small model that can compress the context's tokens to the extreme. This is not far-fetched: plenty of small models have been trained for exactly this kind of job, and they work well in practice. But in my own engineering projects the context just gets truncated after a few hundred lines, which annoys me to no end. Why can't the context tokens be compressed to the extreme instead of being cut off? The second problem is code reflow: you can't rely on GitHub alone to feed code back into your system. You need to make code reflow dead simple, and that alone is a huge boost to an editor's performance.
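To make the first point concrete, here is a minimal sketch of extractive context compression: instead of truncating the context at a fixed line count, rank each line by relevance to the query and keep the best lines within a token budget. The keyword-overlap scorer below is a hypothetical stand-in for the trained small model; everything here (`compress_context`, the budget parameter) is an illustration, not any editor's actual implementation.

```python
import re

def tokenize(text: str) -> list[str]:
    # Crude whitespace/word tokenizer; a real system would use the
    # model's own tokenizer to count tokens against the budget.
    return re.findall(r"\w+", text.lower())

def score(query_tokens: set[str], line: str) -> float:
    # Stand-in for the small scoring model: normalized keyword overlap
    # between the query and a context line.
    line_tokens = set(tokenize(line))
    return len(query_tokens & line_tokens) / (len(line_tokens) or 1)

def compress_context(query: str, context: str, budget: int) -> str:
    # Greedily keep the highest-scoring lines until the token budget
    # is spent, then restore original line order for readability.
    q = set(tokenize(query))
    lines = [l for l in context.splitlines() if l.strip()]
    ranked = sorted(range(len(lines)),
                    key=lambda i: score(q, lines[i]), reverse=True)
    kept, used = set(), 0
    for i in ranked:
        cost = len(tokenize(lines[i]))
        if used + cost > budget:
            continue
        kept.add(i)
        used += cost
    return "\n".join(lines[i] for i in sorted(kept))
```

With a tiny budget, a query like "add two numbers" keeps the relevant `def add` line and drops unrelated ones, rather than chopping the file at an arbitrary point.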