Ran into a frustrating little bug with the prompt editor. A feature request I've posted before (possibly as a bug) was for the CONTEXT REFERENCES in a prompt to be included when the prompt is copied. Some kind of serialization of the necessary info, so that once pasted again, if in the same workspace, the contexts could be restored in the new prompt. I often copy prompts, because it's the only way to reference rules, and I have found that rules have to be concretely referenced if they are even going to have a chance. (There is another issue with Grok regarding rules that I'll get into below.)
So I often have these prompts that have some unique details at the top and common details at the bottom. I copy and paste them, then reattach the context I need. Cursor 1.6 seems to have a bug when attaching context now, however, that is messing up parts of the prompt. See video:
With Grok, I am usually left wondering if it is, in fact, following rules at all. VERY OFTEN, when I attach rules, it seems to do a search for them, and the search ALWAYS ends with a one-line tool response panel that says: Rule not found. The thinking blocks seem to indicate it knows that a rule was specified, but I honestly cannot tell if it is actually following the rules or not. Sometimes it is overtly clear that it is not. At other times, though, I honestly cannot tell if it is just coincidence that it SEEMS to do things that, at least mostly, follow my rules. A lot of the time, it seems like it has some kind of historical reference, perhaps just from past chats, through which it gets things right that are often problematic.
A key example: committing. I have the agent commit after blocks of work. When the model IS following my rules, the process is pretty smooth, as my rules make sure the model and agent do everything required to commit successfully and within the requirements of my Lefthook configuration (which runs a bunch of pre-commit checks that can kick back errors). When the model is NOT following my committing rules, it will usually have problems. One, it will often commit much more arbitrary messages, when I have a very specific commit message format and structure I require in all commits. Two, when Lefthook kicks back errors (usually linting errors, but they can also be build errors or a few other things), the agent/model will ALWAYS forget to re-stage the fixes, which often then get left behind, uncommitted, while other changes do get committed. There are also some other parts of my rules that keep the agent from doing any git operations under certain circumstances, such as when it identifies that I'm in the middle of a rebase (I'm a rebase fiend; I often go back in my active work histories and work on code during the rebase to keep certain changes in a certain order), in which case the agent should not do any git work at all.
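To illustrate the re-staging problem concretely, here is a minimal shell sketch. It does not use Lefthook or my actual rules; it just builds a throwaway repo and simulates the failure mode: the agent edits a file AFTER it was staged (like fixing a lint error a hook kicked back) and commits without re-staging, so the fix is left behind until an explicit `git add` retries it. All paths and messages are hypothetical.

```shell
#!/bin/sh
# Hypothetical demo repo: shows why post-hook fixes must be re-staged.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email agent@example.com
git config user.name agent

echo "bad" > file.txt            # original (lint-failing) content
git add file.txt                 # staged snapshot = "bad"
echo "good" > file.txt           # "fix" applied to working tree only

git commit -qm "first"           # commits the STAGED "bad" version
git show HEAD:file.txt           # prints "bad" -- the fix was left behind

git add file.txt                 # re-stage the fix (the step the agent forgets)
git commit -qm "second"
git show HEAD:file.txt           # prints "good" -- fix finally committed
echo OK
```

The point: `git commit` records the index, not the working tree, so any fix made after staging silently stays uncommitted unless it is re-added.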
Sonnet has a very... RICH... integration with the agent. It gives useful feedback in many ways: you know when it's searching the web, and that it's searching the web correctly; you can see that it's referencing @Docs; between certain blocks of coding work it keeps you updated; and when it's trying to solve a problem, there is a bit of feedback that helps you understand the problem and what it's doing. I originally liked Grok Code's "JUST GIT ER DONE" approach, and it's great when things are going well. However, given the apparent lack of consistent rule application and its sheer speed (which I do love), it is often hard to rely on just the feedback in the thought cycles to keep an eye on what it's doing. When Grok has trouble solving a problem, without any non-thinking feedback it is often hard to understand what is going on and figure out whether its approach is even correct (often it's not, hence WHY it can't solve the problem). So far, 1.6 has been a significantly better release than 1.5. VERY THANKFUL FOR THAT, BTW! Vast improvement there. Now that Cursor is in a more stable place, I am really hoping you guys could put a bit of time towards deepening the integration between the agent and Grok Code. It does not seem to support @Docs or image context at all right now. Its @Web search capabilities seem... suspect, at best. Often I get a web search results box that seems to have NOTHING to do with any actual web search; it reads more like "I understand the user has this question or problem, and thus and such and something..." There is no indication the darn thing ACTUALLY searched the web, and often it won't be able to solve the problem at hand because it doesn't have the relevant knowledge. In contrast, switching to Claude and asking the same thing:
PERFECT SOLUTION. Or at least, a perfectly viable solution with correct application of... well, knowledge from @Web searches (which are clearly displayed), or knowledge from @Docs (clearly displayed), etc. Grok Code needs a deeper, tighter integration with the agent. Its speed is amazing, and generally speaking I'm satisfied with the quality of its code... but when it comes to solving problems, it is very difficult to make sure the model has the KNOWLEDGE it needs from @Docs, @Web, images, etc. Lacking correct knowledge, its final outcomes are not as good as Sonnet's.
(FWIW, if I don't give Sonnet all the necessary knowledge and context, its final outcomes are not that much better either!)
==============
Cursor version:
Version: 1.6.26
VSCode Version: 1.99.3
Commit: 6af2d906e8ca91654dd7c4224a73ef17900ad730
Date: 2025-09-16T17:12:31.697Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.5.0