"i see the issue" from the ai

No you don’t, Cursor AI. I am so sick of seeing “I see the issue” when it is not addressing the issue at all.

I think the biggest issue is context, especially in Composer. Composer has no codebase button, but it really should have full context all the time, with a toggle for the less common case of needing limited context.

Chat is where I think limited context is more appropriate.

Choosing the right context is a hard problem to solve. I’m certain the Cursor team are working on this. But until then, if you’re aware of what files are pertinent, especially after a long chat/composer session, I find it’s useful to start a new chat/composer and directly @ those files.

This can cut out a huge amount of noise from longer chat sessions, and often results in the LLM actually being able to identify the issue.

Yes, I do currently employ the tactics you’ve recommended, and they’re good enough workarounds for the problem, but that’s what they are: workarounds.
I love Cursor, don’t get me wrong, but it’s got a while to go yet with the application, let alone the supporting LLMs.

Which mode are you using? Normal mode has @codebase; Agent mode doesn’t.

It’s a new product in a rapidly developing space. They’re shipping fast and listening to user feedback. Give them time ~:)

Ah, I’m using Agent mode all the time, which explains it. But sometimes it feels like it is lacking context. It greps stuff, but not all the time. Thank you.

Totally understand. I’m not throwing them under the bus; I’m very happy, and the updates do come regularly.

You have to give it enough context.

I find I regularly forget that I have an @[file] in my agentic chat window… is this forcing the agent to keep treating the attached @ file as relevant, even if I have changed my train of thought?

What would be nifty is a little “context bucket” (or several).

Like TAGs for context, like…

For example: what if we had a context versioning system, whereby within the scope of a project we have an OSI-style stack of contexts:

login
user
db
front_end
back_end
etc…

as parts of the system/stack, and when we call a stack context ‘&back_end’, it auto-sets context focus on all the components and files associated with ‘&back_end’s context…

Then when we document the system, we can tell the bot: “give me a detailed .rmd for &back_end and how &user flows through the backend (showing how, say, a user’s request to make a post traces through the system to populate a DB entry and display to &front_end)…”
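To make that concrete, here’s a rough sketch of what such a tag registry might look like. Every tag name, glob, and helper here is hypothetical, just to illustrate the idea:

import glob

# Hypothetical registry: each context tag maps to the files/globs that
# should be auto-focused whenever the tag appears in a prompt.
CONTEXT_TAGS = {
    "&back_end": ["server/**/*.py", "db/models.py"],
    "&front_end": ["web/src/**/*.tsx"],
    "&user": ["server/auth.py", "db/models.py"],
}

def resolve_context(prompt):
    """Expand any &tags mentioned in a prompt into their associated files."""
    files = set()
    for tag, patterns in CONTEXT_TAGS.items():
        if tag in prompt:
            for pattern in patterns:
                files.update(glob.glob(pattern, recursive=True))
    return sorted(files)

# resolve_context("document &back_end and how &user flows through it")
# -> every file matched by the &back_end and &user globs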

(Thinking out my e_dibles here… so humor me.)

I am thinking about holographic symbols for context, even code functions, as a ‘QR_Code’ for context/persona/archetype/perspective.


Based on the idea of Postgres watermarking for sync replication, as applied to the idea of vector context windows and agentic behavior:

(came from this read:


import hashlib
import time

import numpy as np


class ContextualWatermark:
    def __init__(self):
        self.vector_space = VectorSpace(dimensions=512)  # Embedding space
        self.context_buffer = []
        self.watermarks = []

    def create_watermark(self, context_state):
        """Creates a quantum-inspired watermark for the current context."""
        # Embed current context into vector space
        context_vector = self.vector_space.embed(context_state)

        # Create interference pattern with previous contexts
        interference = self.compute_interference(context_vector)

        # Generate holographic watermark
        watermark = {
            'vector': context_vector,
            'interference': interference,
            'timestamp': time.time(),
            'context_hash': self.hash_context(context_state),
        }

        self.watermarks.append(watermark)
        return watermark

    def hash_context(self, context_state):
        """Stable hash of the raw context state, for bookkeeping."""
        return hashlib.sha256(str(context_state).encode()).hexdigest()

    def compute_interference(self, vector):
        """Computes quantum-like interference with existing contexts."""
        interference = np.zeros_like(vector)
        for past_mark in self.watermarks[-5:]:  # Consider recent history only
            interference += self.quantum_interference(vector, past_mark['vector'])
        return interference

    def quantum_interference(self, a, b):
        """Toy stand-in for a complex-valued interference pattern:
        the past vector scaled by its alignment with the new one."""
        return b * float(np.dot(a, b))

    def compute_resonance(self, query_embedding, mark):
        """Resonance as plain cosine similarity between query and watermark."""
        v = mark['vector']
        denom = np.linalg.norm(query_embedding) * np.linalg.norm(v)
        return float(np.dot(query_embedding, v) / denom) if denom else 0.0

    def retrieve_context(self, query, depth=3):
        """Retrieves relevant past context using holographic principles."""
        # Project query into vector space
        query_embedding = self.vector_space.embed(query)

        # Find resonating watermarks
        resonances = []
        for mark in self.watermarks:
            resonance = self.compute_resonance(query_embedding, mark)
            resonances.append((resonance, mark))

        # Sort by resonance strength and return the top matches
        relevant = sorted(resonances, key=lambda x: x[0], reverse=True)[:depth]
        return [mark for _, mark in relevant]

    def compress_history(self, n_clusters=4):
        """Compresses history while preserving retrievability."""
        if len(self.watermarks) <= n_clusters:
            return self.watermarks

        # Group similar contexts; the "quantum clustering" here is just
        # nearest-seed assignment on the stored vectors
        seeds = self.watermarks[:n_clusters]
        clusters = [[] for _ in seeds]
        for mark in self.watermarks:
            best = max(range(n_clusters),
                       key=lambda i: self.compute_resonance(mark['vector'], seeds[i]))
            clusters[best].append(mark)

        # A "superposition" per cluster: the mean vector of its members
        compressed = []
        for cluster in clusters:
            if not cluster:
                continue
            mean_vec = np.mean([m['vector'] for m in cluster], axis=0)
            compressed.append({
                'vector': mean_vec,
                'interference': np.zeros_like(mean_vec),
                'timestamp': max(m['timestamp'] for m in cluster),
                'context_hash': self.hash_context(
                    tuple(m['context_hash'] for m in cluster)),
            })

        # Update watermarks with the compressed version
        self.watermarks = compressed
        return compressed


class VectorSpace:
    def __init__(self, dimensions):
        self.dimensions = dimensions
        self.basis_vectors = self.initialize_basis()

    def embed(self, data):
        """Embeds data into the vector space. A real system would use a
        learned embedding model; this toy version hashes the data into a
        deterministic pseudo-random unit vector."""
        seed = int.from_bytes(
            hashlib.sha256(str(data).encode()).digest()[:8], 'big')
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(self.dimensions)
        return v / np.linalg.norm(v)

    def initialize_basis(self):
        """Creates basis vectors for the semantic space (identity basis here)."""
        return np.eye(self.dimensions)
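A toy run of the sketch above, with the stub helpers filled in as simple cosine-similarity stand-ins; purely illustrative:

wm = ContextualWatermark()
wm.create_watermark("user asked about the login flow")
wm.create_watermark("refactored the db models")
top = wm.retrieve_context("how does login touch the db?", depth=1)
print(top[0]['context_hash'][:12])  # hash of the closest stored context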

The agent mode of AI does sometimes need prompting to look at the folder structure.

This is especially the case in big projects or big conversations where the AI may assume it has all the context it needs from the previous conversation, so it doesn’t want to use a tool call to look up the project structure again or look inside any files that it feels it already knows about.

As others have said, we are working hard on this, as context is the key to really good responses, even from small models. But it’s a tough one to perfect right now.

As time goes on I’m sure this will get better. But the suggestions above are the best workarounds for now!

Continuing the discussion from “I see the issue” 🤬:

I don’t know how hard it would be to implement, but possibly an “acquired context” button you could press that shows all of the files, folders, snippets, docs, etc. that you have provided for context, and the weighting each has in the upcoming prompt.

e.g. you press “acquired context” and it shows:
file3.md | uploaded 1 prompt ago | 0.95 weight
file1.js | uploaded 2 prompts ago | 0.75 weight
file2.pdf | uploaded 10 prompts ago | 0.1 weight

This way we can see everything we have provided, what it is still taking into consideration, and whether we need to provide the context files again to essentially wake it up and use the most recent text, or at least minimise the amount of hallucination.
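To illustrate, the weighting I’m imagining could be as simple as an exponential decay over prompt age. This is just a guess at the shape, not how Cursor actually works behind the scenes:

def context_weight(prompts_ago, half_life=3.0):
    """Toy recency weight: halves every `half_life` prompts.
    Illustrative only; Cursor's real weighting is not public."""
    return 0.5 ** (prompts_ago / half_life)

for name, age in [("file3.md", 1), ("file1.js", 2), ("file2.pdf", 10)]:
    print(f"{name} | uploaded {age} prompt(s) ago | {context_weight(age):.2f} weight")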

It seems like the Cursor indexing does a similar thing when you provide full codebase context.

I might not understand LLMs, but this is how it feels in use: if you provide a context file, it seems to become less and less accurate, or less used, over succeeding prompts.

Unfortunately, how the context works behind the scenes is not quite that simple, as it changes between every prompt you ask. But I agree that more visibility would not be a bad thing here!

I made a similar complaint a while ago, but I was advised to write “better prompts”, even though I knew the issue is way deeper than that. Sometimes you give explicit instructions for it to follow, but it still won’t adhere. When you ask it why, it says “I’m sorry, I should’ve followed…” Cursor is simply not good enough for complex projects yet, and I appreciate that the developers are working towards making it better.