That’s super easy: just type @ and the file name (I use the Copy Relative Path extension), or you can press the + sign at the top of the chat.
It’s convenient when you have the file open already.
We are talking about different things. I meant Cursor has had the ability to /Reference open editors.
If Long Context is gone I’ll have to search for an alternative that has the same feature.
If we assume cost is the problem: it’d be nice to have the option to pay for this.
Thought this was an experimental/beta feature to begin with?
Possible they are removing it to refine and re-add later I suppose.
To be fair, was long context an experimental/beta feature? If so, I’m no longer mad. I’m just disappointed.
What is long context mode? I’ve used Cursor for a long time, but I never seemed to see this feature.
What exactly is “long context,” and what does it mean to use it with your own keys? I’ve been using Cursor for a month now, but I’m not sure what that feature was. Can you please explain? I.e., loading up a big file of code for reference, or what? And how was it useful? This would help my learning progression, and probably many others’ too. Please share.
The problem is that we have no way of knowing how much of the file or context will actually be sent to the model.
Yep, and they won’t respond to questions regarding the removal of the long context models. Multiple threads over the last week are being ignored.
Thanks for sharing your thoughts, and apologies for the radio silence! I wanted to explain our reasoning behind the 0.43 feature deprecations. We love shipping early experiments to get feedback (finding what you all love is an important part of improving Cursor!), but the maintenance overhead from some of these experimental features can slow down our ability to move the product forward.
Also definitely hear you all on the communication front - moving forward, we’ll clearly document deprecations in the changelog so users can choose to stay on earlier versions if needed. Really appreciate your feedback as we work to improve here!
I’m glad you responded.
I’m also very stuck in my Cursor usage without the long context chat, which was *the feature* that brought me into the fold.
I literally refactored my repos around a code and documentation structure that made it easy to prompt with long context and get very high-quality codegen from claude-3.5-sonnet-200k.
I’m flipping back to the Zed editor for now, which is the next-best prompt-composer experience without this long context option.
I’ll be able to come back to Cursor when the long-context feature returns, even if that requires an additional premium price point or a subscription / BYO API key model.
Thank you for taking time to respond.
we’ll clearly document deprecations in the changelog so users can choose to stay on earlier versions if needed
This! Please!
The ‘Agent’ feature is completely useless because RAG is not reliable. For implementing any new feature that requires more than 3,000 tokens, it completely falls apart. Long context was a godsend; it made complex tasks across codebases much easier to accomplish. Not happy about its disappearance either. Big step down for Cursor. And the lack of o1 integration in Agent mode is a huge misstep, too.
+1 to all the above. For me, most of long context mode’s value was the assurance that the totality of what I referenced would be passed to the LLM, vs. the uncertainty of leaving excerpting up to the retriever / chance.
Exactly. @rishabhy is long context going to return?
I was in the process of doing the same: using long context to generate documentation files, so that RAG for normal requests has system awareness. Now I can’t generate the docs very well.
Maybe I can run an older Cursor build on another Mac profile…
Thanks for the tip on zed editor!
I don’t understand why they’d ditch long context when one of the most frequent complaints is Cursor being bad at large projects, and its weakest aspect vs. Windsurf is context management.
Looking at the posts from many users in this thread, it seems that if the experiment was to see whether users found this feature useful, then it passed.
If the experiment was something else, could you share more info, please?
The removal of the feature moved the product backwards.
So saying that maintaining the feature would have slowed down the ability to move the product forward seems odd.
I think it’s similar to saying, ‘Having headlights on the car created maintenance overhead that reduced our ability to move forward with safety features, so we’ve removed the headlights to accelerate our safety innovation.’