I see the issue 🤬

No you don't, Cursor AI. I am so sick of seeing this "I see the issue" when it is not actually addressing the issue at all.

I think the biggest issue is context, especially in Composer. In Composer there is no codebase button, but Composer really should have full context all the time, with a toggle for the less common use case of needing limited context.

Chat is where I think limited context is more appropriate.

Choosing the right context is a hard problem to solve. I'm certain the Cursor team are working on this. But until then, if you're aware of what files are pertinent, especially after a long chat/composer session, I find it's useful to start a new chat/composer and directly @ those files.

This can cut out a huge amount of noise from longer chat sessions, and often results in the LLM actually being able to identify the issue.

Yes, I do currently employ the tactics you have recommended, and they are good enough workarounds for the problem, but that's what they are: workarounds.
I love Cursor, don't get me wrong, but it's got a while to go yet with the application, let alone the supporting LLMs.

Which mode are you using? Normal mode has @codebase; Agent mode doesn't have @codebase.

It's a new product in a rapidly developing space. They're shipping fast and listening to user feedback. Give them time :)

Ah, I'm using Agent mode all the time, which is why. But sometimes it feels like it is lacking context. It greps stuff, but not all the time. Thank you.

Totally understand. I'm not throwing them under the bus; I'm very happy, and the updates do come regularly.

You have to give it enough context.

I find I regularly forget that I have an @[file] in my agentic chat window… is this forcing the agent to constantly assume that the attached @ link is still relevant, even if I have changed my context of thought?

What would be nifty is a little "context bucket" (or several of them).

Like TAGs for context – like…

For example - what if we had a context versioning system - whereby within the scope of a project we have an OSI-style stack of contexts:

login
user
db
front_end
back_end
etc…

as parts of the system/stack – and when we call a stack context '&back_end' → it sets auto context focus on all the components and files associated with &back_end's context…

Then when we document the system we can tell the bot "give me a detailed .rmd for &back_end and how &user flows through the back end (showing how, say, a user's request to make a post traces through the system to populate a DB entry and display to &front_end)…"

(Thinking out my e_dibles here… so humor me.)
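
To make the '&tag' bucket idea a bit more concrete, here is a minimal sketch of the registry that could sit behind it. Everything in it is hypothetical (the ContextBucketRegistry name, the define/resolve methods, the example glob patterns); nothing like this exists in Cursor today, it's just one way a tag could expand into a concrete, reviewable file list before anything reaches the model:

from dataclasses import dataclass, field
from glob import glob


@dataclass
class ContextBucketRegistry:
    """Hypothetical registry mapping '&tag' names to the files they should pull in."""
    buckets: dict = field(default_factory=dict)

    def define(self, tag, patterns):
        # e.g. define("back_end", ["api/**/*.py", "db/**/*.sql"])
        self.buckets[tag] = patterns

    def resolve(self, tag):
        """Expand a '&tag' into the concrete file list to attach as context."""
        files = []
        for pattern in self.buckets.get(tag, []):
            files.extend(glob(pattern, recursive=True))
        return sorted(set(files))


# Usage: a prompt mentioning "&back_end" would be expanded to these files first.
registry = ContextBucketRegistry()
registry.define("back_end", ["api/**/*.py", "db/**/*.sql"])
registry.define("user", ["api/users/*.py", "front_end/src/user/**/*.ts"])
print(registry.resolve("back_end"))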

I am thinking about holographic symbols for context / even code functions as a 'QR_Code' for context/persona/archetype/perspective.


Based on the idea of Postgres watermarking for sync rep, as applied to the idea of vector context windows and agentic behavior:

(came from this read:


import hashlib
import time

import numpy as np


class ContextualWatermark:
    def __init__(self):
        self.vector_space = VectorSpace(dimensions=512)  # Embedding space
        self.context_buffer = []
        self.watermarks = []

    def create_watermark(self, context_state):
        """Creates a quantum-inspired watermark for the current context."""
        # Embed current context into vector space
        context_vector = self.vector_space.embed(context_state)

        # Create interference pattern with previous contexts
        interference = self.compute_interference(context_vector)

        # Generate holographic watermark
        watermark = {
            'vector': context_vector,
            'interference': interference,
            'timestamp': time.time(),
            'context_hash': self.hash_context(context_state),
        }

        self.watermarks.append(watermark)
        return watermark

    def hash_context(self, context_state):
        """Stable hash of the raw context state."""
        return hashlib.sha256(str(context_state).encode()).hexdigest()

    def quantum_interference(self, vector, other):
        """Placeholder 'interference': element-wise product of two context vectors."""
        return vector * other

    def compute_interference(self, vector):
        """Computes quantum-like interference with existing contexts."""
        interference = np.zeros_like(vector)
        for past_mark in self.watermarks[-5:]:  # Consider recent history only
            # Accumulate the interference pattern with each recent watermark
            interference += self.quantum_interference(vector, past_mark['vector'])
        return interference

    def compute_resonance(self, query_embedding, mark):
        """Placeholder 'resonance': cosine similarity between query and stored vector."""
        stored = mark['vector']
        return float(np.dot(query_embedding, stored) /
                     (np.linalg.norm(query_embedding) * np.linalg.norm(stored)))

    def reconstruct_context(self, relevant_marks):
        """Returns the matched watermarks, strongest resonance first."""
        return [mark for _, mark in relevant_marks]

    def retrieve_context(self, query_vector, depth=3):
        """Retrieves relevant past context using holographic principles."""
        # Project query into vector space
        query_embedding = self.vector_space.embed(query_vector)

        # Find resonating watermarks
        resonances = []
        for mark in self.watermarks:
            # Compute resonance between the query and each stored watermark
            resonance = self.compute_resonance(query_embedding, mark)
            resonances.append((resonance, mark))

        # Sort by resonance strength and return top matches
        relevant_marks = sorted(resonances, key=lambda x: x[0], reverse=True)[:depth]
        return self.reconstruct_context(relevant_marks)

    def quantum_cluster(self, watermarks, max_clusters=4):
        """Placeholder clustering: greedily group marks by centroid similarity."""
        clusters = []
        for mark in watermarks:
            placed = False
            for cluster in clusters:
                centroid = np.mean([m['vector'] for m in cluster], axis=0)
                if np.dot(mark['vector'], centroid) > 0.5:  # crude similarity threshold
                    cluster.append(mark)
                    placed = True
                    break
            if not placed and len(clusters) < max_clusters:
                clusters.append([mark])
            elif not placed:
                clusters[0].append(mark)
        return clusters

    def create_superposition(self, cluster):
        """Placeholder 'superposition': average the cluster's vectors, keep newest metadata."""
        return {
            'vector': np.mean([m['vector'] for m in cluster], axis=0),
            'interference': np.sum([m['interference'] for m in cluster], axis=0),
            'timestamp': max(m['timestamp'] for m in cluster),
            'context_hash': cluster[-1]['context_hash'],
        }

    def compress_history(self):
        """Compresses history while preserving retrievability."""
        # Group similar contexts using the placeholder clustering
        clusters = self.quantum_cluster(self.watermarks)

        # Create one superposition state per cluster
        compressed = [self.create_superposition(cluster) for cluster in clusters]

        # Update watermarks with the compressed version
        self.watermarks = compressed
        return compressed


class VectorSpace:
    def __init__(self, dimensions):
        self.dimensions = dimensions
        self.basis_vectors = self.initialize_basis()

    def embed(self, data):
        """Embeds data into vector space.

        Placeholder: a real system would use a learned embedding model; here we
        derive a deterministic pseudo-random unit vector from a hash of the input.
        """
        seed = int.from_bytes(hashlib.sha256(str(data).encode()).digest()[:8], 'big')
        rng = np.random.default_rng(seed)
        vector = rng.standard_normal(self.dimensions)
        return vector / np.linalg.norm(vector)

    def initialize_basis(self):
        """Creates basis vectors for the semantic space (placeholder: standard basis)."""
        return np.eye(self.dimensions)
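
Purely for illustration, using the sketch above might look like this. It relies entirely on the placeholder embedding/resonance stand-ins, so the output is only as meaningful as those:

# Illustrative only: exercises the placeholder implementations above.
wm = ContextualWatermark()
wm.create_watermark("user opens the login page")
wm.create_watermark("back end validates credentials against the db")
wm.create_watermark("front end renders the user dashboard")

# Pull back the contexts that "resonate" most with a new query.
for mark in wm.retrieve_context("how does login reach the database?", depth=2):
    print(mark['context_hash'][:12], mark['timestamp'])

# Periodically fold older watermarks into compressed superposition states.
wm.compress_history()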