Cursor v1.6 - Release Discussions

I switched from PowerShell to CMD, and with 1.6 it has been working great.

Add this to settings.json, at either the global or project level.

{
    "terminal.integrated.defaultProfile.windows": "Command Prompt",
    "terminal.integrated.profiles.windows": {
        "Command Prompt": {
            "path": "C:\\Windows\\System32\\cmd.exe",
            "args": []
        }
    }
}


Can I link the Cursor CLI in Android Studio?

Maybe I am missing a new, easy way to do this, but adding terminal text into chat context feels clunkier, and I am not seeing the add-to-chat buttons.
The agent (with both opus 4.1 and gpt-5) also just seems to jump to conclusions about what its commands do.
It feels like it's in flux, which is understandable, but it feels all-or-nothing as far as what the agent is able to do with the terminal right now.

I also don't love that the agent's commands sometimes open a new terminal and sometimes stay in the one it had been working in.

New terminal vacuums!
No colors.
No formatting.
No replaying.
No typing access.

The issue with the Windows terminal is still there. The only change is that before the update it was constantly adding "c" before the command, and now it's adding "qc".


2 posts were merged into an existing topic: MCP Elicitation with HTTP OAuth hangs with request timed out error

Why do different account settings vary?

cursor 1.6.26

Still no GitHub integration :sob:


I've noticed the same behavior: it will fail to actually edit a file but think it succeeded, requiring me to prompt it once more to try again since its edits did not work the first time.

Yes, you can use the Cursor CLI from the terminal directly in Android Studio :slight_smile:

For the agent run issues, could you share a request ID where you are having problems?

I also face the same problems with the terminal, many times every day. I have to manually remove agents from the terminal.

I've been building an MCP server in TypeScript for some time, and one of my tools shows the user download/view links that should be clickable in the agent tab. I achieved this with the following skeleton code, and it worked; the download link was clickable with gemini/gpt5/sonnet4.
Today I installed version 1.6.26, and the links are no longer clickable in any model.
How do you do it now? Is it a bug?
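(The original snippet didn't survive the paste, so here is a minimal sketch of the pattern I mean. The helper name and URL are illustrative, not from the MCP SDK; the key point is that the tool result is a plain text content item containing a markdown link, which the chat UI used to render as clickable.)

```typescript
// Shape of a tool result as the agent receives it: an array of content items.
type ToolResult = { content: Array<{ type: "text"; text: string }> };

// Hypothetical helper: wrap a label and URL in markdown link syntax.
// Chat UIs previously rendered this as a clickable link.
function makeDownloadLinkResult(label: string, url: string): ToolResult {
  return { content: [{ type: "text", text: `[${label}](${url})` }] };
}

const result = makeDownloadLinkResult(
  "Download report",
  "https://example.com/report.pdf",
);
console.log(result.content[0].text);
// → [Download report](https://example.com/report.pdf)
```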

@condor I have been having this issue a lot lately. I actually had it as far back as 1.4.6, which I’ve reverted to a few times since 1.5 was so unstable. I’ve had it with 1.5, and I think I’ve seen it a couple of times with 1.6.

This one is a really huge problem. I don't know what causes the agent not to save…it is not totally consistent. A lot of the time it does save; sometimes it does not, and once a file enters the unsaved state, it is highly likely that the agent will not save that particular file again down the road.

As previously mentioned, this REALLY screws the pooch on builds run by the agent, as the builds simply WILL NOT work, NOR can the agent resolve the issue on its own: it will look at the file, see its latest updates, and be perpetually confused as to why the build is failing. The dev is also not necessarily going to be readily aware this is the issue without noticing the pattern, finding the file(s) that were left unsaved, and saving them manually. The agent can really BURN tokens trying to resolve issues like this. I've stepped away to get a coffee or a snack, come back, and found the agent spinning its wheels for 5-10 minutes trying to resolve a build issue that was actually resolved 100 tool interactions prior.

I have seen somewhere that gpt-5-codex should be available (@danperks ?). I did not see it, tried the model reload button, and tried adding this model string and others; none of them work. Any comments?

Hey! It’s coming soon. Sorry for any confusion


@andrewh @condor It looks like the scrolling bugs are still present in 1.6. This has been an issue for a good while now. Panels with scroll "pop to the top" when you switch to them, and sometimes under other circumstances. This is mostly an issue in the agent…the agent chat scroll itself, as well as the prompt input. If you switch tabs, then go back to a previous tab, all the scrolls pop to the top. If you have longer chats, it can take a good while to scroll back down to the latest output.

With the prompt input, there is even quirkier behavior. If you are entering a prompt and then paste something in, while the viewport remains at the bottom of whatever you pasted, the scroll position actually moves back up to the top! This is even evident in the scrollbar position. What happens next is unpredictable…if you try to scroll, it will often start from that top scroll position, but not always. If you use the text cursor to move around, though, it starts from wherever the cursor is.

Since I'm on the subject of the prompt input…this really needs some love. It has poor keyboard navigation support. Heck, since the scrollbar thumbs disappear so quickly, it has poor navigation support in general. I often work with bigger prompts, and getting around them is a real chore with the way the prompt input is currently designed. If I have a longer prompt, usually the only way to get around in it is the up and down arrows…and only the arrows. Paging up or down does not work. Jumping up many lines at a time, or any of that, does not work. If I have a long prompt and need to go back to the top, I usually have to try to grab hold of the scroll thumb, which is itself an issue, because the DARN THING WANTS TO HIDE! So getting around the prompt input is a monster chore and time waster. The "pop to the top" bug makes it even more quirky and weird.

I've mentioned before that the terminal also seems to have the same scrolling bug, but what triggers it is different. I don't know exactly what does; however, if I run certain commands that produce a certain amount of scrolling, then try to interact with the terminal (i.e. to select text, add text to the chat, or even just scroll up a little), the terminal will pop to the top. I have a 5000-line scrollback history (I usually prefer 10k, but Cursor does not seem to handle that well), so when this happens, it's a real pain to get back down to the bottom.
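For anyone wanting to reproduce with the same scrollback depth: it's controlled by a stock VS Code setting that Cursor inherits, e.g. in settings.json:

```json
{
    "terminal.integrated.scrollback": 5000
}
```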

In a general sense, for things that you generally work with at the "bottom", the default scroll position, or the fallback scroll position, etc., should be the bottom, not the top. However, I think even doing that might be a problem, and I suspect something odd is occurring in the code that causes all of these panels to scroll back to, I assume, scroll position ZERO (0). That just shouldn't be happening. Ideally, the scroll position of any given panel would simply be preserved, and nothing would change it except the user's direct interactions.


@andrewh @condor
???


Ran into a frustrating little bug with the prompt editor. A feature request I've posted before (although possibly as a bug) was for the CONTEXT REFERENCES in a prompt to be included in the copied prompt: some kind of serialization of the necessary info, so that once pasted again, if in the same workspace, the contexts could be restored in the new prompt. I often copy prompts, because it's the only way to reference rules, and I have found that rules have to be concretely referenced if they are even going to have a chance. (There is another issue with Grok, which I'll get into below, with regards to rules.)

So I often have these prompts that have some unique details at the top, then common details at the bottom. So I copy and paste them, then reattach the context I need. Cursor 1.6 seems to have a bug when attaching context now, however, that is messing up parts of the prompt. See video:

With Grok, I am usually left wondering whether it is in fact following rules at all. VERY OFTEN, when I attach rules, it seems to do a search for them, and the search ALWAYS ends up dropping a one-liner tool response panel that says: Rule not found. The thinking blocks seem to indicate it knows that a rule was specified, but I honestly cannot tell if it is actually following the rules or not. Sometimes it is overtly clear that it is not. At other times, I honestly cannot tell if it is just coincidence that it SEEMS to do things that, at least mostly, follow my rules. A lot of the time, though, it seems like it's got some kind of historical reference from, perhaps, past chats, so it gets things that are often problematic correct.

A key example…committing. I have the agent commit after blocks of work. When the model IS following my rules, the process is pretty smooth, as my rules make sure the model and agent do everything required to commit successfully and within the requirements of my Lefthook configuration (which runs a bunch of pre-commit checks that can kick back errors). When the model is NOT following my committing rules, it will usually have problems. One, it will often commit much more arbitrary messages, when I have a very specific commit message format and structure I require in all commits. Two, when Lefthook kicks back errors (usually linting errors, but it can also be build errors or a few other things), the agent/model will ALWAYS forget to re-stage the fixes, which often then get left behind, uncommitted, while other changes do get committed. There are also some other things in my rules that keep the agent from doing any git operations under certain circumstances…such as if it identifies that I'm in the middle of a rebase (I'm a rebase fiend; I often go back in my active work histories and work on code in the rebase to keep certain changes in a certain order), it should NOT do any git work via the agent at all, etc.
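Side note for anyone hitting the same re-staging problem: Lefthook itself can re-stage files a hook fixes, which sidesteps the agent forgetting to do it. A minimal sketch, assuming an npm-based project with ESLint; the command and glob here are illustrative, but `stage_fixed` and `{staged_files}` are real Lefthook config features:

```yaml
# lefthook.yml — hypothetical pre-commit setup
pre-commit:
  commands:
    lint:
      glob: "*.{ts,tsx,js}"
      run: npx eslint --fix {staged_files}
      stage_fixed: true   # re-stage files the linter modified, so fixes land in the commit
```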

Sonnet has a very…RICH…integration with the agent. It gives useful feedback in many ways, so you know when it's searching the web, and that it's searching the web correctly, that it's referencing @Docs; between certain blocks of coding work it keeps you updated, and when it's trying to solve a problem, there is a bit of feedback that helps you understand the problem and what it's doing. I originally liked Grok Code's "JUST GIT ER DONE" approach, and it's great when things are going well. However, given the apparent lack of consistent rule application, and its sheer speed (which I do love), it is often hard to use just the feedback in the thought cycles to keep an eye on what it's doing. When Grok has trouble solving a problem, without any non-thinking feedback it is often hard to understand what is going on, and to figure out if its approach to solving the problem is even correct (often not, hence WHY it can't solve the problem). So far, 1.6 has been a significantly better release than 1.5. VERY THANKFUL FOR THAT, BTW! Vast improvement there. Now that Cursor is in a more stable place, I am really hoping you guys could put a bit of time toward deepening the integration between the agent and Grok Code. It does not seem to support @Docs or image context at all right now. Its @Web search capabilities seem…suspect, at best; often I get a web search results box that seems to have NOTHING to do with any actual web searches. It's more of an "I understand the user has this question or problem, and thus and such and something…" There is no indication the darn thing ACTUALLY searched the web, and often it won't be able to solve the problem at hand, because it doesn't have the relevant knowledge. In contrast, switching to Claude and asking the same thing: :collision: PERFECT SOLUTION.
Or at least a perfectly viable solution with correct application of…well, knowledge from @Web searches (which are clearly displayed), or knowledge from @Docs (clearly displayed), etc. Grok Code needs a deeper, tighter integration with the agent. Its speed is amazing, and generally speaking I'm satisfied with the quality of its code…but when it comes to solving problems, it is very difficult to make sure the model has the KNOWLEDGE it needs, from @Docs, @Web, images, etc. And lacking correct knowledge, the final outcomes are not as good as Sonnet's.

(FWIW, if I don’t give Sonnet all the necessary knowledge and context, its final outcomes are not that much better either!)

==============

Cursor version:

Version: 1.6.26
VSCode Version: 1.99.3
Commit: 6af2d906e8ca91654dd7c4224a73ef17900ad730
Date: 2025-09-16T17:12:31.697Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.5.0