Has anyone successfully run RAG in Cursor?
If so, can you give an overview of your experience?
- Hardware
- Operating System
I haven’t yet. I tried to add a Firecrawl MCP server so that it could scrape documentation into a docs folder in the project, where the docs would get indexed, but Cursor’s MCP support on WSL2 is in rough shape right now, so I never got it working.
Hoping they can get MCP working cleanly on WSL2 soon, because I’ve found that Cursor does a much better job of avoiding hallucinations and writing idiomatic code when the docs for a given library are included directly in the codebase and then referenced with cursorrules.
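For reference, the Cursor side of that setup is just an entry in `mcp.json`. The config I was trying looked roughly like this (assuming the `firecrawl-mcp` npm package; key redacted — the WSL2 problem is in how Cursor launches the server, not in the config itself):

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```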
Have you tried using LangGraph to accomplish that?
I’ve stayed away from MCP on purpose. I try to stay as vanilla as possible at this time.
I haven’t used it for that, but I have worked with LangGraph on a RAG project for a client. I’ve been working with MCP for the past couple of weeks; we are actually converting our internal RAG app to use MCP tooling, and from what I’ve seen so far it seems simpler than LangGraph. I think they can actually complement each other well, though, and it doesn’t necessarily need to be one or the other.
Anyway, this is getting away from the topic of the thread, just interesting that you think LangGraph is more vanilla than MCP. My experience so far has been the opposite, but I don’t have enough in-depth experience with either to have a super strong opinion either way.
You’re good, we’re in “Discussion”… I am all about AI and interested in hearing what others are doing. My thoughts on MCPs were that they are an extra piece that makes things “easier” but can also fail, making them another part to worry about, versus coding things up yourself. However, if MCP is becoming the standard, then I probably should look into going that route.
My current project has 4 agents that assist the user in completing a complex document, plus a 5th conversational AI agent to assist the user. Each segment relies on the last set of research data completed. I’m also using CAG to deliver user guidance. I have LangGraph in there as well, so the user can jump back to previous sections and make changes.
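Roughly, the jump-back flow looks like this. This is a minimal pure-Python sketch with hypothetical section names and agent stubs, not the actual LangGraph code — the real version drives this through LangGraph state:

```python
# Sketch of a sectioned pipeline where each step reads the previous
# step's output, and the user can jump back to an earlier section and
# re-run from there. (Hypothetical names; stand-ins for real agents.)

SECTIONS = ["research", "outline", "draft", "review"]

def previous_section(section):
    i = SECTIONS.index(section)
    return SECTIONS[i - 1] if i > 0 else None

def run_agent(section, state):
    # Stand-in for a real agent call; each agent sees what came before.
    prior = state.get(previous_section(section))
    return f"{section} output (built on: {prior})"

def run_from(start, state):
    # Re-running from `start` regenerates everything downstream of it,
    # which is what makes "jump back and edit" safe.
    for section in SECTIONS[SECTIONS.index(start):]:
        state[section] = run_agent(section, state)
    return state

state = run_from("research", {})    # full first pass
state = run_from("outline", state)  # user jumps back to section 2
```

The key design point is that a jump back invalidates and rebuilds all later sections, so no section ever reads stale upstream data.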
Yeah, MCP definitely comes with some of its own headaches, but from what I can tell it helps tame a lot of the complexity of using disparate tools. Like I mentioned, our multi-agent RAG app uses LangChain and LangGraph, but as it grows in scope the complexity is starting to get a little out of control. The thought is that with MCP we can standardize some of this tool use to make it easier to maintain.
The other big benefit I am starting to see signs of: since it’s looking like MCP is going to be the new standard, it is going to be much easier for others to build tools and frameworks around LLM-driven apps, and we can also make use of others’ MCP servers. Similar to how you don’t usually need to implement any low-level functionality in a Python or JS app because there are already so many libraries for any given use case, I think the same is going to happen with MCP. Instead of implementing the memory, prompts, etc. when building a RAG app, people will instead be able to just use a pre-built MCP server for RAG and then add any customization or implementation details on top of it. It’s not as fun to be a plumber and you don’t learn as much, but it’s better for building things quickly by benefiting from the work of others. I just don’t see this happening with LC/LG.
That said, there is still a place for LC/LG in the context of MCP, because you may want to call the MCP tools as part of a graph, or create higher-order tools on the server that are composed of many smaller tools chained together with LC. Even with MCP you still need to handle memory management and prompts; it just makes things a little more organized. I did just come across a new framework that looks pretty promising: GitHub - lastmile-ai/mcp-agent: Build effective agents using Model Context Protocol and simple workflow patterns
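The “higher-order tools” idea is easy to show without any framework. A toy sketch, with plain functions standing in for MCP tool calls (in the real setup each would be a tool exposed by an MCP server, and the chaining would live in a LangChain/LangGraph node):

```python
def search_docs(query):
    # Stand-in for a retrieval tool on an MCP server.
    return [f"doc snippet about {query}"]

def summarize(snippets):
    # Stand-in for an LLM summarization tool.
    return " | ".join(snippets)

def research(query):
    # Higher-order tool composed of the two smaller tools above.
    # A server could expose this as a single tool so clients don't
    # have to orchestrate the intermediate steps themselves.
    return summarize(search_docs(query))

print(research("vector indexes"))  # one call, two tools under the hood
```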
I will have to give MCP a shot on my next project and compare. I totally get it. Since we have been chatting it up, I’ve added “CAG”. It gets complex pretty quickly. I’ve also decided to add a “Conversational AI” to guide the user through the process. I’ll have 5 agents total when it’s all done.
Not sure what LLMs you are using, but I’ve been a huge fan of Perplexity since it first came online. In fact, it was my first experience with AI. I’m using their Deep Research and Sonar-Pro. The research is pretty amazing and not too bad on price.
I have a GPT Pro subscription, so I mostly use that for conversational/brainstorming and deep research (which is also impressive on GPT/OAI).
As far as the models that back the apps I am building, I am using AWS Bedrock exclusively, mostly with Claude-3.5-Sonnet. I would like to try some of the other providers, but Bedrock is the only host I am aware of that guarantees your data won’t leave your environment. If you were to use OpenAI/GPT for a RAG app, for instance, they can store your data for training, etc. My clients are not OK with that, so until other hosts can provide that data protection I’ll have to continue using AWS Bedrock exclusively for AI-powered apps.
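For anyone curious, the request shape for Claude on Bedrock looks roughly like this. The field names and `anthropic_version` string are from the Bedrock messages format as I understand it (double-check against current AWS docs), and the model ID is one example; the actual `invoke_model` call needs AWS credentials, so it’s shown as a comment:

```python
import json

def build_claude_request(prompt, max_tokens=1024):
    # Payload shape for Anthropic models behind Bedrock's InvokeModel API.
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = json.dumps(build_claude_request("Summarize this design doc."))

# With AWS credentials configured, the call itself would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
#       body=body,
#   )
```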
While I’m not a fan of OpenAI, can you elaborate on why you think they store your data for training if you use their API? Any proof? OpenAI’s documentation says otherwise. (They do store data for compliance purposes for 30 days, as they say.)
They may not use your data for training; I wasn’t trying to say that from a place of authority. To my knowledge, though, there is no provider other than Bedrock that doesn’t store your prompts or completions (which AWS guarantees).
My experience with clients has reinforced this as well, since they are specifically requesting Bedrock for this reason. Maybe Bedrock has just done a much better job of marketing this, or OpenAI has changed its policy regarding user data recently, because I thought for sure I remembered reading that they had access to your data for model improvement back when I first started using the OpenAI API about 2 years ago.
Are you experienced in Deployment?
It depends on what you mean by deployment. If you’re referring specifically to deploying MCP, I haven’t deployed any MCP servers to production; I’m still doing initial development with them and have only run them locally. If you’re referring to deployment in general, or to the cloud, then yes: I am a DevOps engineer, so managing deployments is a huge part of my job.
Yes, I’m talking about deployment in general.
My background is in UX/UI. I had never really coded full stack until Cursor. The transition was smooth because of my background; I did know the basics. I’m at a point where I want to push my project to a VPS. Is this something you would be interested in doing for a fee?
Yeah, most likely. I don’t see a way on here to PM you my email address and I don’t want to make it public here, but if you want, you can either give me your email or contact me on Discord: btrippcode_52407
HMU on LinkedIn:
/in/uxuiburnett/