The ability to easily orchestrate an LLM's context/attention across files, plus easily embed external sources to use as context, is a big part of what makes Cursor special.
Being able to connect to one or more external vector databases to use as RAG sources for context, when wanted, would be an amazing next step on this path.
This could include not only private DBs but also public embedding sources (e.g. kay.ai, alexandria). Imagine, too, book authors beginning to realize their books should be delivered/sold in a format that LLMs can fully utilize (e.g. linking to an O'Reilly VDB for a preferred software architecture approach).
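To make the idea concrete, here is a minimal sketch of the retrieval flow such an integration implies: embed the user's question, query a vector store, and prepend the top matches to the editor's prompt. The embedder and `VectorStore` here are toy in-memory stand-ins, not any real provider's API (a real integration would call Pinecone, Chroma, Weaviate, etc.):

```python
import hashlib
import math
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy bag-of-words hashing embedder standing in for a real embedding model.
    vec = [0.0] * dim
    for token, count in Counter(text.lower().split()).items():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += count
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class VectorStore:
    # Minimal in-memory stand-in for an external vector DB.
    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def query(self, question: str, top_k: int = 2) -> list[str]:
        # Rank stored snippets by cosine similarity to the question.
        q = embed(question)
        scored = sorted(
            self.items,
            key=lambda item: -sum(a * b for a, b in zip(q, item[1])),
        )
        return [text for text, _ in scored[:top_k]]

store = VectorStore()
store.add("Hexagonal architecture isolates the domain behind ports and adapters.")
store.add("Use dependency injection to keep modules loosely coupled.")
store.add("Paris is the capital of France.")

# The retrieved snippets would be injected into the LLM's context window
# alongside the open files, just like Cursor's existing @-mentions.
context = store.query("what architecture keeps the domain isolated?", top_k=1)
```

An O'Reilly-style VDB as imagined above would just be a remote `VectorStore` the editor queries over HTTP instead of this local list.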