They have the @workspace agent, which in some ways is more appealing than your solution of @'ing every file we need - Copilot Chat figures out what it needs and fetches it for you.
They’re adding docs support shortly.
They have gpt4.
They have inline code changes and lint fixes via AI.
AI commit messages.
AI pr messages and descriptions.
Is it possible to keep up momentum here? Are there many other tricks in your bag?
Cursor still has the edge for me, and I’m a happy paying customer, but I worry a bit that Copilot will catch up soon, and I get Copilot for free as an OS maintainer.
It’s a good point. I did know about the ctrl+enter search codebase feature. I tend to use it for things like CI or linting setup issues, or something that’s affecting the entire codebase.
I didn’t think to use it like this, and I wasn’t aware of the Reasoning Step. The new docs look great. I’d love to see some more advanced information on there, like what the Reasoning Step actually does, as it’s probably not immediately obvious to most users. It makes sense now that you’ve explained it.
In hindsight you’ve given us the best of both worlds - we can let the AI try to gather the required context, or we can add it manually. I know LLMs struggle at higher context lengths, so I imagine the manual @ option will probably produce better results at times.
Cursor is a fantastic product. I’m rooting for you and I’m excited to see more
This is an interesting topic for me… I just cancelled Copilot because it lacks context. It seems we’re in the 2nd wave now, where it’s all about context, so if GitHub can do that, and do it better to the point it’s worth paying for, I’ll come back.
Debatable though isn’t it?
This is only one example.
It’s hard to make out the screenshot, but even for this example, some might say the Copilot version is simpler because it separates out the populations variable that gets passed into the function. I might actually prefer that. “Simpler” is subjective. The Cursor version is shorter, but not necessarily simpler.
This is probably not a great example either way, to be honest, as they’re both using GPT-4 and being sent the same context. I assume Copilot Chat is probably a smaller model based on GPT-4, tuned to deal with code specifically, whereas Cursor uses the full GPT-4 model, as they don’t have the same kind of access to it as MS does - but correct me if I’m wrong.
The main difference between the products at this point, as far as I can tell, will be:
the additional features each brings - both have various features the other lacks, and these change over time.
the way they handle context - which is best at sending the correct context to GPT-4.
As an open source contributor (getting Copilot for free), Cursor at $20/month for an inferior version of VSCode (speaking of it purely as an editor, outside the added AI integrations), plus a subjectively “slightly better” speed and answer quality, is hard to swallow.
I’m on a yearly subscription of Cursor and I’m also an open source contributor who gets free Copilot. I don’t want to debate, as it’s pointless; what I want to share is just that, in my opinion, Cursor’s output is better in my case.
Have you seen the recently uploaded YouTube video from Visual Studio Code? Unlike the demo video, it performs significantly worse in real use and doesn’t provide the same experience as Cursor. This direction for VS Code and Copilot is competition and a threat to the Cursor team, but I think it’s good competition.
However, both of these rely heavily on the GPT model, so I think the strategy going forward will depend on the evolution of OpenAI’s GPT and other LLMs.
I was using Cursor for quite a few months, but I’m actually finding Copilot Chat to be very good.
In some ways better than Cursor, in others not as good.
I mentioned that I get Copilot for free, but I wasn’t personally paying for Cursor either, so money isn’t really a factor here, and I’m preferring Copilot currently.
I agree competition is good.
I wondered if the Cursor team feels cheated that many of their features made their way into Copilot, but on the flip side, 90% of their product was built by MS and the open source community, so it doesn’t seem unfair from that perspective.
I generally find Cursor superior to GitHub Copilot in most cases. The @chat and /edit functions in Cursor are incredibly useful.
I’ve noticed that inline suggestions from Copilot sometimes have indentation issues. In Cursor, though, I often get unnecessary “```” at the end of suggestions.
Despite this, I still prefer Cursor for most tasks.
The main issue, though, is the limited number of fast requests on Cursor, and the lack of choice between fast and slow requests. For simpler tasks like docstrings or minor changes, I don’t always want to use a fast request. Since Cursor doesn’t give me the option to choose, I often find myself switching to Copilot or ChatGPT for these smaller tasks. Not being able to use the Cmd+I command for Copilot in Cursor, and having to switch to VSCode instead, can be frustrating.
Cursor is great, but the limited number of requests, and especially the forced use of fast requests, can be really annoying.
Anyway, Cursor is a beast on its own. If you just put some care into what it understands and what makes it hallucinate, it comes up with some brilliant solutions (at least in Python).
Copilot autocomplete, on the other hand, is neat (sometimes, at least).
It’s hit and miss for both Copilot and Cursor. If the commit is small, simple, and clear, it works fine for Copilot in my experience - something like “added styling to the login button”.
Cursor uses two phases: in the first phase, a model analyzes your question and enriches it with the necessary context; in the second phase, it passes that to another model.
As a result, the responses from Cursor and Copilot are different.
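A minimal sketch of what such a two-phase pipeline could look like - to be clear, the function names, the keyword-matching heuristic, and the prompt format here are all my own illustrative assumptions, not Cursor’s actual internals:

```python
# Hypothetical two-phase chat pipeline: phase 1 enriches the question with
# relevant context from the codebase, phase 2 builds the final prompt that
# would be handed to the answering model. Purely a sketch, not real internals.

def enrich_question(question: str, codebase: dict[str, str]) -> list[str]:
    """Phase 1: pick files whose contents mention words from the question."""
    words = {w.lower() for w in question.split() if len(w) > 3}
    return [
        path for path, text in codebase.items()
        if any(w in text.lower() for w in words)
    ]

def build_prompt(question: str, codebase: dict[str, str]) -> str:
    """Phase 2: combine the enriched context with the question into one prompt."""
    relevant = enrich_question(question, codebase)
    context = "\n\n".join(f"# {path}\n{codebase[path]}" for path in relevant)
    return f"{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    repo = {
        "auth.py": "def login(user): ...",
        "billing.py": "def charge(card): ...",
    }
    # Only auth.py mentions "login", so only it ends up in the prompt.
    print(build_prompt("Why does login fail?", repo))
```

The point is that the first model (or heuristic) decides *what* context the second model sees - so two products with the same underlying GPT-4 can still give different answers.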