RepoQuery - Full 200K Context

I wanted a better RepoPrompt for my specific use case, so I built RepoQuery. Here's the workflow (rough code sketch after the list):

  1. I write a detailed requirements document (prompt.md)
  2. I give RepoQuery the whole codebase (200k tokens)
  3. I paste the output (response.md) into Cursor
  4. I sit back and watch the magic happen
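
Roughly, steps 1–3 boil down to something like this. This is a minimal sketch, not RepoQuery's actual code: the file filtering, prompt layout, and `call_llm` helper are all assumptions, and you'd wire `call_llm` up to whatever provider SDK you actually use.

```python
from pathlib import Path

# Hypothetical helper -- swap in your provider's SDK (OpenAI, Anthropic, etc.).
# RepoQuery's real internals may differ.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM provider")

def build_context(repo_root: str, extensions=(".py", ".ts", ".md")) -> str:
    """Concatenate every source file into one big block, tagged by path."""
    parts = []
    for path in sorted(Path(repo_root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            parts.append(f"### FILE: {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

def run(repo_root: str) -> None:
    requirements = Path("prompt.md").read_text()          # step 1
    codebase = build_context(repo_root)                   # step 2: ~200k tokens for a mid-size repo
    full_prompt = (
        "You are planning changes to the codebase below.\n\n"
        f"{codebase}\n\n## Requirements\n{requirements}"
    )
    Path("response.md").write_text(call_llm(full_prompt)) # step 3: paste this into Cursor

if __name__ == "__main__":
    run(".")
```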

Why is this needed?

Cursor is really, really good at writing code, but its whole-codebase understanding is very limited, and it has to be: as you'll see when using RepoQuery, a 200k-token context window is really expensive.
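
For a sense of why, here's a quick back-of-the-envelope on per-query cost. The $3/$15-per-million-token rates below are placeholders, since pricing varies by model and changes over time.

```python
def query_cost(input_tokens: int, output_tokens: int,
               usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Rough cost of a single full-context query."""
    return (input_tokens / 1_000_000) * usd_per_m_input + \
           (output_tokens / 1_000_000) * usd_per_m_output

# Example: 200k tokens in, 4k tokens out, at placeholder rates of $3 / $15
# per million tokens -- check your provider's current pricing.
print(f"${query_cost(200_000, 4_000, 3.0, 15.0):.2f} per query")  # -> $0.66 per query
```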

I only just knocked out the first public version of this, so I'm sure it's full of bugs. It costs real money (your own API key), so use at your own risk.


Great thought, but how about implementing chunked reading, say every 2k tokens, and saving each pass as a kind of memory the tool can recall, so it can follow along with the proper context of the codebase despite the limited understanding? In short, it could process the 200k context window by resetting the context after every chunk (or at a specific token budget), analyzing it, and returning to where it left off, so the codebase gets read properly in chunks plus memory instead of burning the full window in one go.
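
Something along those lines could look like this. It's only a minimal sketch of the chunk-plus-memory idea, not anything RepoQuery currently does: the 2k-token chunk size, the character-based token estimate, and the `summarize_chunk` helper are all assumptions you'd swap for real pieces.

```python
CHUNK_TOKENS = 2_000  # reset point suggested above; tune as needed

def estimate_tokens(text: str) -> int:
    # Very rough heuristic: ~4 characters per token for English text and source code.
    return len(text) // 4

def summarize_chunk(chunk: str, memory: str) -> str:
    """Hypothetical LLM call: given the running memory and the next chunk,
    return an updated summary of what the codebase does so far."""
    raise NotImplementedError("wire this up to your LLM provider")

def read_in_chunks(codebase: str) -> str:
    """Walk the codebase ~2k tokens at a time, carrying a rolling memory
    instead of holding all 200k tokens in context at once."""
    memory = ""
    buffer, buffered_tokens = [], 0
    for line in codebase.splitlines(keepends=True):
        buffer.append(line)
        buffered_tokens += estimate_tokens(line)
        if buffered_tokens >= CHUNK_TOKENS:
            memory = summarize_chunk("".join(buffer), memory)  # compress and remember
            buffer, buffered_tokens = [], 0                    # reset the context window
    if buffer:
        memory = summarize_chunk("".join(buffer), memory)      # flush the last partial chunk
    return memory  # the distilled "memory" to recall when generating the plan
```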