We also buy extra packs for more requests.
Thank you for acknowledging the situation. I understand that removing the long context window models is an intentional decision that reduces overhead.
I purchased multiple yearly subscriptions with the intention of supporting Cursor on one hand, and having the long context models handle my use cases in Cursor on the other.
I am now facing the reality of either downgrading (and losing the Composer agent) or staying up to date without long context, a dilemma that makes me deeply regret committing to Cursor yearly with multiple subscriptions.
To this day, this is the most I have invested in a SaaS subscription, and while I still like Cursor and the team behind it, I would appreciate better communication on features that are in the critical path of users. Thank you, I appreciate the hard work, and I hope the ability to fully utilise the premium models will eventually return.
Sitting also on a yearly subscription until May with a large project, and facing some regret now. For me, long context mode was really worth a lot…
Need long context back
I think I've found a solution:
- Install "llacoste2000.llm-copy-files" (search the VSCode marketplace), or perhaps there is a better extension. This one copies all regular open files (tabs). It doesn't copy diff editors, but neither did Cursor (when using /reference open editors). There is also an option to copy to a new file (which you can then edit before pasting into an LLM).
- (Optional) Create and pin a "Custom instructions" file; add anything to it and it will always be copied together with your other files.
- Paste into an LLM or assistant of your choice: for example Cline with your own API key, or Genie (a VSCode extension) with your own API key (in both of these you can set unlimited tokens, IIRC), ChatGPT, or any API dashboard.
- Paste the response from the LLM into either the Composer agent (so it can apply the changes), or, if it's Cline, it has its own Apply.
This essentially replicates Long Context Mode. The process stays relatively simple (with shortcuts you can quickly switch to an LLM and back), gives more flexibility (you can use any custom instructions), and more control: you can see exactly what you paste into the LLM. Apply should probably also work (although I haven't tried).
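If you'd rather not depend on an extension at all, the copy step above can also be scripted. Here is a minimal sketch in Python (the script name, file paths, and the instructions filename are my own placeholders, not part of any extension mentioned in this thread): it concatenates the files you pass on the command line, prefixing each with a header naming its path so the LLM can refer back to specific files, and optionally prepends a custom-instructions file.

```python
# bundle_files.py - concatenate files into one LLM-ready prompt.
# All names here are illustrative placeholders; pass your own paths.
import sys
from pathlib import Path

def bundle(paths, instructions=None):
    """Return one string: optional instructions text first, then each
    file wrapped in a header naming its path."""
    parts = []
    if instructions is not None:
        parts.append(Path(instructions).read_text(encoding="utf-8"))
    for p in paths:
        p = Path(p)
        parts.append(f"--- {p} ---\n{p.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # Usage: python bundle_files.py src/a.py src/b.py | pbcopy
    sys.stdout.write(bundle(sys.argv[1:]))
```

On macOS you can pipe the output into `pbcopy` (or `xclip` on Linux) and paste it straight into the LLM of your choice, same as with the extension.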
I think this is a win-win situation. Cursor can focus on advancing their more popular features, and we can still get the functionality of Long Context with a few adjustments.
I'm still only beginning to use this alternative, but I can see it being more efficient and less buggy than the Long Context Chat was, where you had to constantly ensure all files got attached, remember to switch to it from Normal Chat each time, were mostly limited to OpenRouter when on your own API key, and had to juggle between enabling and disabling the "Use your own API" toggle, etc.
Possible improvements:
- If you can find an extension which also copies diff editors and otherwise works like "llacoste2000.llm-copy-files", please post it here.
- If you find the best way to apply the output from an LLM, I would appreciate it if you shared your experience. The Apply feature in Cursor chat was often broken (it didn't detect files correctly, for example), so perhaps this will give more options and flexibility.
Update on extensions for copying open files:
I've found an extension that copies diff editors as well:
copyai.copyai
However, I had to modify its source code and install it from a folder, because it copied the diff editor's left (original) content instead of the content on the right (which is what you usually need); I also removed its logic that strips comments and new lines. I can't post the modified code, as I don't want to be responsible for distributing extensions I haven't vetted, but this is the closest thing I could find, and after those modifications it works perfectly for my needs.
Update on applying the changes:
I used the outlined steps and then used the o1 model to create instructions for the Composer agent. I then went back to Cursor and pasted the instructions I received from o1, and the Composer agent applied them almost flawlessly. This was just a single experiment, but it seems to work better and more efficiently than my previous workflow with the Long Context Chat.
That's a good idea, but it seems like this is essentially just using Cline or another plugin to do the editing, not Cursor; Cursor is just doing the apply. I guess that works because they're using the same underlying model.
But this is definitely a workaround.
All right, I have to say the Composer agent is pretty good for my doc generation use case.
Create an .md file, and ask Composer to write the docs you want, searching the code base for context. It can perform multiple searches to build its context.
Having a long context call as the final generation step by the agent would be ideal. The agent could decide whether it's needed, allowing them to be efficient with their use of the expensive API.
I've been mourning the sudden departure of long context ever since it was yanked away like a dirty bandaid. It felt like I lost the best rubber duck I ever had, not to mention instantly regretting the relatively large amount of money it made me invest in Cursor.
But on the other hand, I too have been impressed by the Composer agent ever since. What it lacks in depth and genuine large-ness, it gains in eagerness (for lack of a better word) and the ability to gather context from multiple searches.
I fully agree, a combination of agents doing their best and long context guiding the solution would bring this home.
Oh man… I just pulled the trigger on a yearly subscription, only to update and see that long context is gone, which was my main reason to keep the subscription.
I was about to get Cursor, but now I will not, knowing that long context has been removed. I will look for an alternative to Cursor.
Seriously about to cancel my sub and switch to the Zed editor for my LLM pairing use cases, and back to VSCode in Codespaces for everything else.
This needs to be addressed, or they will lose market share to someone who does re-implement that UX.
Let's be real here: the main issue has to be that Long-Context chat is just too expensive for them to maintain. Maybe they need to introduce a new, higher tier of subscription to accommodate that fact.
I just started using Cursor a few days ago, and today my long-context option is no longer there, which has severely degraded my experience.
I will not continue using Cursor unless long context chat remains part of the product offering, since its GPT-4o context window is too constrained at only 10k tokens.
I've realized just how much I miss long context mode and have decided to revert to version 42.5. It's a real shame they removed this feature.
From the beginning, Cursor's strength was its exceptional ability to add context to LLMs in the best way possible. However, with the latest update, this functionality has only gotten worse.
For my use cases, there's no real replacement for this feature. I genuinely hope they introduce a viable alternative for handling larger contexts, as nothing works as effectively beyond version 42.5.
If Cursor doesn't provide a proper solution and other tools do, I'll have to consider switching (my subscription is still running for a few months), even if Cursor excels in other areas. I was a happy customer from the start, at least until now.
But hopefully a lot can change within a few months.
bump. please add this back
After a month of trying to make the latest Cursor work with my workflow, today I gave up and downgraded to v0.42.5 with long context.
If you want to do the same here is how: GitHub - peter-mach/aiboost.dev-long-context-cursor-0.42.5
This is so bad; long context is the reason why I subscribed. Thanks for that, I will cancel my subscription.