Cursor frequently ignores rules that I establish, even when I remind it that the rule exists. My understanding is that this is caused by context window limits (because everything is sent to an LLM for a response, regardless of whether there is local knowledge in the repository we are working from). I would prefer to see this implemented with a local RAG module that we can seed with relevant knowledge. That would let us interact with the RAG first (no LLM API key usage required) and get back a plan for how to implement what we are asking about using LLMs, very likely reducing the number of context-consuming round-trips needed to reach an acceptable plan. I realize I could try to implement this feature myself with an MCP server and rules (but again, rules are frequently ignored) sitting in front of my own RAG API, but I'd rather this be a native feature. Each project we work on in Cursor could have its own set of knowledge inputs to streamline development. I'd even be willing to pay an add-on fee for RAG database storage.
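To make the "query the local knowledge first, no API key required" idea concrete, here is a minimal sketch of an LLM-free retrieval step: seeded project documents are ranked against a question by TF-IDF cosine similarity. The `LocalKnowledgeStore` class and its methods are hypothetical illustrations, not any actual Cursor API.

```python
# Hypothetical sketch: rank locally seeded project knowledge against a
# query with TF-IDF cosine similarity, entirely offline, before any LLM
# round-trip. Names here are illustrative, not a real Cursor feature.
import math
from collections import Counter

class LocalKnowledgeStore:
    def __init__(self):
        self.docs = []        # raw text of each seeded document
        self.doc_terms = []   # term-frequency counters per document

    def seed(self, text):
        """Add a project-specific knowledge document to the store."""
        self.docs.append(text)
        self.doc_terms.append(Counter(text.lower().split()))

    def _idf(self, term):
        # Smoothed inverse document frequency over the seeded docs.
        n = sum(1 for tf in self.doc_terms if term in tf)
        return math.log((1 + len(self.docs)) / (1 + n)) + 1

    def _vector(self, tf):
        return {t: c * self._idf(t) for t, c in tf.items()}

    def query(self, question, top_k=3):
        """Return the most relevant seeded documents; no LLM involved."""
        q = self._vector(Counter(question.lower().split()))
        scored = []
        for doc, tf in zip(self.docs, self.doc_terms):
            d = self._vector(tf)
            dot = sum(q.get(t, 0.0) * w for t, w in d.items())
            norm = (math.sqrt(sum(v * v for v in q.values()))
                    * math.sqrt(sum(v * v for v in d.values())))
            scored.append((dot / norm if norm else 0.0, doc))
        scored.sort(key=lambda s: s[0], reverse=True)
        return [doc for _, doc in scored[:top_k]]
```

A native version would presumably use proper embeddings and persistent storage; the point is only that the first retrieval pass can happen locally.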
An alternative would be to publish an API spec that I implement as a sort of middleware interfacing with my own graph-enabled RAG environment, then give me a way to associate my endpoint with a project. Effectively, this would be like adding a RAG tool that Cursor can use, but where I'm the one who manages the RAG environment.
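A hedged sketch of what such a user-managed middleware contract could look like: Cursor would POST a query to a project-associated endpoint and receive ranked passages back. The field names, `handle_query`, and the `retrieve` callback below are hypothetical placeholders for illustration, not an actual Cursor spec.

```python
# Hypothetical request/response contract for a user-managed RAG
# middleware endpoint. The editor POSTs a JSON query; the endpoint
# owner plugs in their own (graph-enabled or otherwise) retrieval.
import json
from dataclasses import dataclass, asdict

@dataclass
class RagQuery:
    project_id: str   # which project's knowledge base to search
    question: str     # the user's natural-language query
    top_k: int = 5    # maximum number of passages to return

@dataclass
class RagPassage:
    text: str         # retrieved knowledge snippet
    score: float      # relevance score assigned by the RAG backend
    source: str       # origin of the snippet (file, graph node, ...)

def handle_query(raw_body: str, retrieve) -> str:
    """Parse a JSON query, delegate to the owner-supplied retrieval
    callable, and serialize the ranked passages back as JSON."""
    query = RagQuery(**json.loads(raw_body))
    passages = retrieve(query)  # owner manages the RAG environment
    return json.dumps({
        "project_id": query.project_id,
        "passages": [asdict(p) for p in passages[:query.top_k]],
    })
```

With a contract like this, the editor only needs the endpoint URL per project; everything behind `retrieve` stays under the user's control.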
I supported you. I also asked a similar question on this forum earlier and even suggested a solution. If you like experiments, you can take a look at my project on effective error correction:
and my recent experimental project, an active memory bank optimized for AI assistants (early alpha, but it works, it's very simple, and it helps me).