Hi! The idea of RLM is to store the entire context in external memory rather than in the LLM's own context, and to use various techniques to search it for information relevant to the current task. That way the LLM can keep its own context small without summarization, which could lead to more stable and less expensive results.
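Just to illustrate the basic shape of the idea (this is a toy sketch with made-up names, not how RLM is actually implemented): the full context lives in an external store, and only the chunks that a search deems relevant to the current task are pulled into the model's working context.

```python
class ExternalMemory:
    """Toy external store: holds the full context as chunks outside the model."""

    def __init__(self, chunk_size=8):
        self.chunks = []
        self.chunk_size = chunk_size  # chunk length in words (illustrative)

    def store(self, text):
        # Split the full context into fixed-size word chunks.
        words = text.split()
        for i in range(0, len(words), self.chunk_size):
            self.chunks.append(" ".join(words[i:i + self.chunk_size]))

    def search(self, query, top_k=2):
        # Toy relevance score: count of words shared with the query.
        # A real system would use embeddings or a proper index.
        q = set(query.lower().split())
        scored = sorted(
            self.chunks,
            key=lambda c: len(q & set(c.lower().split())),
            reverse=True,
        )
        return scored[:top_k]


memory = ExternalMemory(chunk_size=8)
memory.store(
    "The billing service retries failed payments three times. "
    "The auth module caches tokens for one hour. "
    "Search indexing runs nightly on the worker pool."
)

# Only the relevant chunks enter the prompt, so the LLM's
# own context stays small regardless of total context size.
prompt_context = memory.search("how long are auth tokens cached")
```

Here only the auth-related chunk would end up in the prompt, while the rest of the stored context never touches the model's window.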
I was just reading about this and had the same thought that it would be great to see in Cursor.
Would love to hear what the Cursor team thinks about this research.