The update frequency in-app got much better, thanks!
The update UI is still ass, though.
Exactly. Agent view is completely useless. Stop going in that direction; it’s going to make us leave Cursor. The workflow in 1.6 was almost perfect. You need the file view, you need devcontainers, you need the terminal.
You need to press the Editor button or buy a bigger display. With a wide screen it is all there, just arranged differently.
I can no longer use the plugin to run unit tests in Java; the 2.0 update broke it.
Also, I can no longer put summarized past chats in the agent context. This is sad.
And, once again, I’m not able to disable this TODO feature.
Stop removing things that work and are useful. This does not make sense
D: It’s gone for me

Is that screenshot after you made an initial request? I think the context icon only appears after a request has been made in the chat.
Wait, it appears for me if I make the chat window wider; I guess the threshold for when it appears just got larger than the width I used to have. I am saved!
Okay, it seems to disappear if you make the box narrow enough. And when it does, there is no way to get it open other than widening the chat.
That’s annoying, especially when there appears to be enough room. The Cursor team should enforce a minimum chat width before it starts hiding critical components.
Why would I have, say, 3 different models all doing the same thing? Is that not just an exercise in token burn? Can someone explain how this works with git worktrees or cloud, or what the actual use case is for having agents work in parallel?
That, or add a menu button to the side that opens a menu with the hidden items. That is a fairly standard way to handle this situation in other programs.
Filed as well. Thanks!
Is the @-web feature deprecated? How is one to have a model search for context online in Cursor 2.0?
Cursor 2.0 is the new Windows ME/Vista/8.
People are downgrading and getting frustrated.
Please move to Cursor 3.0 quickly, based on 1.7. No fancy stuff, just fix what was wrong in the product that people were learning to love, and make it more accurate.
I wouldn’t bother; Rules aren’t so much “rules” as they are “things I’d like to pretend the AI is respecting, but that actually aren’t being followed at all”.
Want me to prove it?
Add a rule that says “never use personal pronouns”. Feel free to make the rule as specific or general as you like. You can even ask the AI to help write the rule. And guess what?
AI will never respect the rule 100% of the time.
I don’t know about you, but a rule is always respected. Not some of the time: all of the time.
If you can’t follow a simple rule all of the time, why should I trust you to follow a complex rule some of the time?
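For anyone who wants to run the experiment themselves, here is a minimal sketch of what that rule could look like as a project rule file, saved as something like `.cursor/rules/no-personal-pronouns.mdc` (the filename and wording are my own; the frontmatter fields follow Cursor’s documented rules format as far as I understand it):

```
---
description: Forbid personal pronouns in chat and agent responses
alwaysApply: true
---

- Never use personal pronouns (I, we, you, me, us, he, she, they) in any response.
- Phrase every explanation impersonally, e.g. "The function now returns null"
  instead of "I changed the function to return null".
```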
**Amazing!**
Honestly, for my use cases the new in-house Composer model is amazing!
The fact that it runs so fast makes me a lot less distracted and much more productive.
That alone is already worth a lot of money. When it comes to performance, I am getting the same or better quality than the other top-of-the-line models.
I do think the agent view is useless for most people, and the multi-model function probably exists just to get people to burn more money.
In my opinion these are extra functions for a small part of the users.
Just keep focusing on your own in-house model, making it faster, better, and more cost efficient. In my opinion, that’s really the only thing Cursor should do in order to win!
Good luck to the team!
Jonas
I don’t think that’s a realistic expectation even for humans. The rules aren’t for an LLM to adhere to all the time; they’re for providing a framework where I don’t have to type out every granular instruction for boilerplate behaviors like unit tests and build commands.
That being said, I’ve just created an /init command using Claude Code’s prompt for it back in 1.0.3, if anyone’s curious about a way to solve this.
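In case anyone wants to try the same thing, here is a rough sketch of the setup, assuming Cursor’s commands feature picks up markdown files from `.cursor/commands/` and exposes them as slash commands named after the file; the body below is only a placeholder outline, not Claude Code’s actual /init prompt:

```
<!-- .cursor/commands/init.md (hypothetical path; becomes /init in chat) -->
Analyze this repository and generate (or update) an AGENTS.md file that records:
- the build, test, and lint commands,
- the high-level architecture and the purpose of the key directories,
- any code style conventions the project already follows.
```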
Not realistic for humans? Poppycock. In fact, the person composing this message has written the entire thing without a single personal pronoun.
Information lends itself nicely to this directive, in fact. That’s a significant quality of information: its objectivity. Only with the introduction of personal bias, opinion, or subjectivity is a personal pronoun required.
Considering AI (insofar as a software engineer may interact with the technology) is constantly reporting about existing information related to finite DSLs…the presence of personal pronouns is remarkably superfluous to the exchange of information.
Not only that, the faux personification of a technology’s output is wasteful and harmful. There are plenty of searchable examples that point to this objective fact.
No more information need be shared about this shortcoming.
See? It is possible
BWAHAHAHA! Well, I don’t know either, lol. It should’ve been in English, lol. Wasted my credits!
They are LLMs, not real general intelligence. They have not been trained to speak fluently without using pronouns, and they prioritize valuable output while trying to adhere to the rules; a model will break a rule if it thinks following it degrades the output too much, and it will try to correct the intent of your request. That is their trained behavior, even if you don’t like it sometimes. I have made requests where I said something incorrect, but the model correctly ASSUMED what I was trying to say. Yes, there is a risk there, but they are trained to try to understand what you mean, not take what you say verbatim. It’s a double-edged sword, but it is what it is. Just be a lot clearer, and check their code. They will go against their assumptions about your intent if you are more clear and direct. This is where rules can be helpful.
Understand the tool and figure out how to get the most out of it, or just wait around, refraining from using it until it’s “perfect”. These models can obviously follow very complex rules, as long as those rules are in line with the samples they were trained on. Talking in a non-human way is something these models were heavily NOT trained on. So sure, asking one to behave the exact opposite of its training will cause it to make mistakes… or you could just have a model trained on communicating clearly, solving problems, and programming do exactly that: communicate clearly, solve your problems, and produce code.
Also I already debunked your weird litmus test: