No you don’t, Cursor AI. I am so sick of seeing “I see the issue” when it isn’t actually addressing it at all.
I think the biggest issue is the context, especially in Composer. In Composer there is no codebase button, but Composer really should have full context all the time, with a toggle for the less common case of needing limited context.
Chat is where I think limited context is more appropriate.
Choosing the right context is a hard problem to solve. I’m certain the Cursor team are working on this. But until then, if you’re aware of what files are pertinent, especially after a long chat/composer session, I find it’s useful to start a new chat/composer and directly @ those files.
This can cut out a huge amount of noise from longer chat sessions, and often results in the LLM actually being able to identify the issue.
Yes, I do currently employ the tactics you’ve recommended, and they are good enough workarounds for the problem, but that’s what they are: workarounds.
I love Cursor, don’t get me wrong, but it’s got a while to go yet with the application, let alone the supporting LLMs.
I find I regularly forget that I have an @[file] in my agentic chat window… is this forcing the agent to constantly assume that the attached @ link is still relevant, even if I have changed my context of thought?
What would be nifty is a little “context bucket” (or several).
Like tags for context…
For example: what if we had a context versioning system, whereby within the scope of a project we have an OSI-style stack of contexts:
login
user
db
front_end
back_end
etc…
as parts of the system/stack, and calling a stack context like `&back_end` sets auto context-focus on all the components and files associated with `&back_end`’s context…
Then when we document the system we can tell the bot “give me a detailed .rmd for &back_end and how &user flows through the backend”, showing how, say, a user’s request to make a post traces through the system to populate a DB entry and displays to &front_end…
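To make the idea concrete, here’s a minimal sketch of what such a tag registry could look like. Everything in it is hypothetical: `ContextTag`, `resolveTag`, and the globs are invented for illustration, not anything Cursor actually exposes.

```ts
// Hypothetical sketch only: none of these names exist in Cursor today.
// A "context tag" maps a stack-level name like &back_end to the file
// patterns and sub-tags that should be pulled into focus when invoked.

interface ContextTag {
  name: string;        // e.g. "back_end"
  globs: string[];     // file patterns that belong to this context
  includes?: string[]; // other tags this one pulls in (e.g. "db")
}

const tags: Record<string, ContextTag> = {
  back_end: {
    name: "back_end",
    globs: ["src/server/**/*.ts", "src/api/**/*.ts"],
    includes: ["db", "user"],
  },
  db: { name: "db", globs: ["src/db/**/*.ts", "migrations/**"] },
  user: { name: "user", globs: ["src/models/user.ts", "src/auth/**"] },
};

// Expand "&back_end" into the full set of glob patterns, following
// nested includes so &back_end also pulls in &db and &user.
function resolveTag(name: string, seen = new Set<string>()): string[] {
  if (seen.has(name)) return []; // guard against include cycles
  seen.add(name);
  const tag = tags[name];
  if (!tag) return [];
  const nested = (tag.includes ?? []).flatMap((t) => resolveTag(t, seen));
  return [...tag.globs, ...nested];
}

console.log(resolveTag("back_end"));
// -> all globs for back_end, plus db and user via includes
```

The nice part of nested includes is that `&back_end` could automatically pull in `&db` and `&user` without you @-ing each file by hand.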
(Thinking out my e_dibles here… so humor me.)
I am thinking about holographic symbols for context, even code functions, as a ‘QR_Code’ for context/persona/archetype/perspective.
The agent mode of AI does sometimes need prompting to look at the folder structure.
This is especially the case in big projects or big conversations where the AI may assume it has all the context it needs from the previous conversation, so it doesn’t want to use a tool call to look up the project structure again or look inside any files that it feels it already knows about.
As others have said, we are working hard on this, as context is key to really good responses from even small models. But it’s a tough one to perfect right now.
As time goes on I’m sure this will get better. But the suggestions above are the best workarounds for now!
I don’t know how hard it would be to implement, but possibly an “acquired context” button you can press that shows you all of the files, folders, snippets, docs, etc. that you have provided as context, and the weighting each has in the upcoming prompt.
E.g. you press “acquired context” and it shows:
file3.md | uploaded 1 prompt ago | 0.95 weight
file1.js | uploaded 2 prompts ago | 0.75 weight
file2.pdf | uploaded 10 prompts ago | 0.1 weight
This way we can see everything we have provided and what it is still taking into consideration, and whether we need to provide the context files again to essentially wake it up and use the most recent text, or at least minimise the amount of hallucination.
It seems like Cursor’s indexing does a similar thing when you provide full codebase context.
I might not understand LLMs, but this is how it feels in use: if you provide a contextual file, it seems to become less and less accurate, or less used, over subsequent prompts.
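For what it’s worth, here’s a rough sketch of how such an “acquired context” ledger could behave. The decay rule is completely made up for illustration; nobody outside the Cursor team knows how context is actually weighted between prompts.

```ts
// Hypothetical sketch of the "acquired context" idea. The weights and
// decay rule are invented for illustration, not how Cursor actually
// scores context behind the scenes.

interface ContextItem {
  path: string;
  addedAtPrompt: number; // prompt counter when the item was attached
}

// One made-up scoring rule: weight decays exponentially with how many
// prompts ago the item was provided. The decay constant is arbitrary
// (chosen so a ten-prompts-old file lands near the 0.1 in the example
// above); the example's exact numbers needn't fit any single formula.
function weight(item: ContextItem, currentPrompt: number): number {
  const age = currentPrompt - item.addedAtPrompt;
  return Number(Math.exp(-0.23 * age).toFixed(2));
}

function showAcquiredContext(items: ContextItem[], currentPrompt: number) {
  for (const item of items) {
    const age = currentPrompt - item.addedAtPrompt;
    console.log(
      `${item.path} | uploaded ${age} prompt(s) ago | ${weight(item, currentPrompt)} weight`
    );
  }
}

showAcquiredContext(
  [
    { path: "file3.md", addedAtPrompt: 10 },
    { path: "file1.js", addedAtPrompt: 9 },
    { path: "file2.pdf", addedAtPrompt: 1 },
  ],
  11
);
// file3.md | uploaded 1 prompt(s) ago | 0.79 weight
// file1.js | uploaded 2 prompt(s) ago | 0.63 weight
// file2.pdf | uploaded 10 prompt(s) ago | 0.1 weight
```

Even if the real weighting is nothing like this, surfacing *some* per-item score would let you see at a glance when a file has effectively fallen out of the model’s attention.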
Unfortunately it’s not quite that simple, because how the context works behind the scenes changes between every prompt you send. But I agree that more visibility is not a bad thing here!
I made a similar complaint a while ago, but I was advised to write “better prompts” even though I knew the issue was way deeper than that. Sometimes you give explicit instructions for it to follow, but it still won’t adhere to them. When you ask it why, it says “I’m sorry, I should’ve followed…” Cursor is simply not good enough for complex projects yet, and I appreciate the fact that the developers are working towards making it better.