Feature request for product/service
AI Models
Describe the request
I am not sure if Feature Requests is really the best place for this, as it's standard functionality for the Claude models. I started out using Claude, and overall I really like it, except for what now feels like excruciatingly slow performance compared to Grok Code. I use Grok Code for most things. It's not a perfect model, but they all have issues, and I think that's more the general nature of LLMs than anything else right now, which will hopefully improve as innovation continues in the space.
Grok Code, for me, with proper rules and other guard rails, works quite well for most tasks. However, it fails rather miserably at a number of things that Sonnet/Opus handle superbly well, and GPT-5 shares the same gaps:
- No support for the Cursor Indexed Documentation feature
- No support for proper web search functionality
- No support for image context attachments
I became quite used to these with Sonnet. Any time I need them, I switch from Grok Code back to Sonnet, then once the necessary research-level prompting is done, I switch back to Grok Code. The MODEL SWITCHING, however, does not seem to be optimal, and even though I suspect the same context is included regardless of the model chosen, cross-model usage in a single chat doesn't seem to lead to the best results all the time.
I have in the past relied VERY HEAVILY on Cursor's indexed documentation functionality. I find it is not only much faster than web searches (which also seem very rudimentary in Grok Code and GPT-5 compared to Sonnet), but also much more effective. Sonnet, when using @Docs, usually finds EXACTLY the right information to help it use Library X, Database Y, or CLI Tool Z optimally, and it generates great code or runs commands perfectly as a result.
When using Sonnet, its web-searching capabilities with the agent seem far more targeted, explicit, and refined than with Grok Code or GPT-5. It not only shows you explicitly that it is doing a web search, but it can react to the results and issue more refined searches until it either exhausts its ability to find what it needs through web search, or finds something very close to (or exactly) what it needs.
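The search-then-refine behavior described above is essentially a loop. A toy sketch of that loop, purely for illustration (all of the function names here are placeholders, not anything from Cursor's or Anthropic's actual agent):

```python
# Toy sketch of an agent's "search -> assess -> refine" cycle.
# web_search, assess, and refine are hypothetical stand-ins for
# whatever the real agent uses internally.
def iterative_search(query, web_search, assess, refine, max_rounds=5):
    """Keep refining the query until a result is judged good enough,
    or the round budget is exhausted (returns the last results)."""
    results = None
    for _ in range(max_rounds):
        results = web_search(query)
        if assess(results):              # close enough to what we need?
            return results
        query = refine(query, results)   # narrow the query and retry
    return results                       # best effort after the budget
```

The point is just that the agent decides, per round, whether to stop or reformulate, which is what makes Sonnet's searches feel "targeted" compared to a single fire-and-forget query.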
Grok Code (and GPT-5, although GPT's ridiculously lengthy and time-wasting thought cycles generally keep me from using it) really suffers from not having the same capabilities. Docs is the most important for me, but sometimes it's not really a documentation issue so much as a troubleshooting issue, and being able to do deep, refined web searches like Claude does becomes helpful when you need to find how other developers solved Problem X.
I prefer to use Grok Code, as I am able to generate SO MUCH MORE output with it, thanks to its speed, than with Sonnet, even though in many ways Sonnet is a more refined LLM that produces better code (which, I guess, is why it is still the most expensive model). I would really like to see the same level and depth of agent integration with Grok Code and GPT-5 that Sonnet has, for web search and docs; docs would definitely have the priority if I had to choose. I suspect some of the web search capability might come down to Claude having had that ability for a while now, and it may just be a Sonnet thing. I think that the @Docs feature, though, is basically an MCP (??), and if that is the case then a similarly deep and effective integration, with the same ability to report on the relevant doc topics the model found to indicate it found something useful and is working with it (like Claude does), should be entirely possible.
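For what it's worth, the MCP speculation can be made concrete: the Model Context Protocol describes tools with a name, a description, and a JSON Schema for inputs, so a docs-search capability exposed that way would in principle be available to any model that can call tools. A minimal sketch of what such a tool descriptor might look like (the `search_docs` name and its parameters are hypothetical, not Cursor's actual @Docs implementation):

```python
# Hypothetical MCP-style tool descriptor for a docs-search tool,
# following the protocol's name / description / inputSchema convention.
# This is illustrative only -- NOT Cursor's real @Docs interface.
docs_tool = {
    "name": "search_docs",
    "description": (
        "Search the indexed documentation for a library, database, "
        "or CLI tool and return the most relevant sections."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "library": {
                "type": "string",
                "description": "Which indexed doc set to search",
            },
            "query": {
                "type": "string",
                "description": "Natural-language search query",
            },
        },
        "required": ["library", "query"],
    },
}
```

If @Docs really is surfaced this way, then the reporting behavior Sonnet shows (naming the doc topics it found) would just be the model echoing the tool results, which any tool-capable model should be able to do.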
The next bit is image context attachment support. Currently, it looks like GPT-5 and Grok Code either do not support images at all (Grok Code definitely does not) or the support is very rudimentary (I've had many issues getting GPT-5 to actually understand images I attach and use them effectively... perhaps that is a model issue, but when I give GPT-5 and Sonnet the same image and the same prompt, Sonnet's understanding of the image is VASTLY superior to anything I've seen from GPT-5 yet).
These missing features definitely hamper the ability to use these two new model integrations most effectively. Falling back to Sonnet does work, of course, but Sonnet has the performance issue vs. Grok Code (it was always one of the slowest models), and spanning context across more than one model seems to have its... quirks.
FWIW, I’d put deeper model integration after fundamental stability issues with Cursor, of course. But once the IDE is more stable and some of the recent issues are thoroughly resolved (which seems to be happening with 1.6), it would really be nice to see some QOL improvements for Grok Code and GPT-5.
SIDE NOTE: With GPT-5... is the way it "thinks" just the model itself? Or can its method of thinking (its depth, and the amount of effort and time it spends on it) be controlled by the API calls/system prompt? GPT-5 currently has the most egregious "thought" cycles of any of the models available within Cursor. Despite GPT-5 being faster than Sonnet, I feel I have wasted more time sitting and waiting for GPT-5 to get ANYTHING done than with any other model. For tasks that even Sonnet could finish in under a minute (sometimes under 30 seconds), GPT-5 will spend a minute or more, sometimes MANY MINUTES, just "thinking"... and its thought cycles are pitifully idiotic a lot of the time.

I thought I'd read on the GPT-5 documentation pages that the depth of thinking could be controlled via the API, and if that is so, it would be great to see two things: lighter-thinking versions of the model, and NON-thinking versions. A lot of the time, you JUST DON'T need all the thinking to get a whole LOT of tasks done perfectly well. Grok Code Fast demonstrates this, with its thinking cycles mostly being 1s or less, with occasional stints up to 2-3 seconds and even rarer 5-second ones. I never feel like I am wasting my time with Grok Code... it gets the work done. I always feel like I am wasting huge amounts of time with GPT-5, as I spend far more time watching it "think" than DO... and it feels so unnecessary.
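On the side-note question: as far as I can tell, yes, OpenAI's API does expose a reasoning-effort setting for GPT-5, so how long the model "thinks" is at least partly under the caller's control rather than baked into the model. A sketch of what a low-effort request payload might look like for the Responses API (the exact parameter shape is my reading of the docs, so treat it as an assumption and verify against the current API reference):

```python
# Sketch: asking GPT-5 for minimal reasoning effort. The
# `reasoning.effort` option is documented by OpenAI with values
# like "minimal" / "low" / "medium" / "high" -- verify before use.
payload = {
    "model": "gpt-5",
    "reasoning": {"effort": "minimal"},  # spend little time "thinking"
    "input": "Rename the variable `tmp` to `buffer` across this file.",
}

# With the official SDK this would be sent roughly as:
#   from openai import OpenAI
#   response = OpenAI().responses.create(**payload)
# (left commented out so the sketch runs without an API key)
```

If Cursor passed something like this through per-model, a "light-thinking GPT-5" option in the model picker would presumably just be the same model with a lower effort setting.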