If Cursor uses OpenAI or other LLM models with a limited input token window, how can it index and make use of a large codebase and large amounts of documentation?
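My guess is that it doesn't stuff everything into the prompt, but instead builds some kind of embedding index over the code and docs, then retrieves only the most relevant snippets for each request so they fit inside the token limit. Something roughly like the toy sketch below — the `embed()` here is just a placeholder hash-based vectorizer I made up for illustration, not whatever embedding model Cursor actually uses:

```python
# Toy illustration of the retrieval idea: chunk files, embed each chunk,
# then at question time pick only the most similar chunks that fit the
# model's context window. embed() is a stand-in, NOT Cursor's real model.
import hashlib
import math

def embed(text, dim=256):
    """Stand-in embedding: hash character trigrams into a fixed-size vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def chunk(text, max_chars=400):
    """Split a file into fixed-size chunks (real tools likely split on syntax)."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def build_index(files):
    """Index step: embed every chunk of every file once and store the vectors."""
    index = []
    for path, text in files.items():
        for piece in chunk(text):
            index.append((path, piece, embed(piece)))
    return index

def retrieve(index, question, token_budget=1000):
    """Query step: rank chunks by similarity, keep only what fits the budget."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[2]), reverse=True)
    picked, used = [], 0
    for path, text, _ in ranked:
        cost = len(text) // 4  # rough chars-per-token estimate
        if used + cost > token_budget:
            break
        picked.append((path, text))
        used += cost
    return picked

files = {
    "auth.py": "def login(user, password): ...  # checks credentials against the DB",
    "billing.py": "def charge(customer, amount): ...  # calls the payment provider",
}
index = build_index(files)
for path, snippet in retrieve(index, "where are credentials checked?"):
    print(path, "->", snippet[:60])
```

Is that roughly what happens under the hood, or does it do something smarter than per-request retrieval?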