Cursor vs. Cody vs. Copilot
Hi, all. An associate and I are working on Flutter projects. I use VSC + Copilot (for autocomplete) and Cody (for chat and its powerful context support like understanding the relationships between multiple projects in a workspace).
I’ve used Copilot since release, and Cody since December 2023. Copilot is solid, if underpowered. Cody is amazing and my standard for what an AI assistant should be.
My coding buddy is trying Cursor. I gave it a shot back in March and didn’t see any obvious advantages over Cody. I’m trying it again… and still don’t see the advantages.
Copilot is weak, but I like its sparse feature set: its autosuggestions are unobtrusive and less fiddly than Cody's. (Cody has piled on so many editor features that it almost goes overboard; it can feel unwieldy.)
Overall, Cody is the best tool I’ve used for codebase comprehension (again: it understands relationships between projects in a single workspace - I can’t overstate the value of this).
After a couple of days with Cursor, I just don't see what the big deal is. I came here and read reviews. One that drew my eye was a "Cursor is 100x Cody" post. It made no sense: the person couldn't get Cody running after a week of trying, which I file under "user error." If you can't get an extension running, you have problems an AI assistant isn't going to solve.
Features like “tab” autosuggestions are just basic expected features of an AI assistant. Head to cursor.sh’s “Features” page (Features | Cursor - The AI-first Code Editor), and “Tab” is at the top… as though it’s unique to Cursor. Or special. Or better in Cursor. Is it?
Right now I don’t see Cursor’s advantage, especially at 2x the price. And having to commit myself to a VSC fork feels like vendor lock-in. Will the Cursor team keep up? Is it abandonware in the making? (“Trust me, bro” isn’t good enough where this is concerned.)
So… Cursor… I don’t get it. I could use feedback from Cody → Cursor converts (not so interested in Copilot users, as Copilot is more hype than utility). Not trial users or free tier. I mean those of you who use Cursor for production code and have experience with multiple AI assistants.
I had Perplexity generate a feature matrix. Please suggest corrections where this matrix is inaccurate. The point is to lay out advantages/disadvantages at a glance, and to help me see whether I've missed anything while testing Cursor:
| Feature | Sourcegraph Cody | Cursor |
| --- | --- | --- |
| Chat Interface | Yes | Yes |
| Code Autocomplete | Yes | Yes |
| Multi-line Completion | Yes | Yes |
| Natural Language Edits | Yes | Yes |
| Tab Autosuggestion | Yes | Yes |
| Codebase Understanding | Yes | Yes |
| Error Correction | Yes | Yes |
| Documentation Generation | Yes | Yes |
| Integration with IDEs | VS Code, JetBrains, Neovim | Standalone (VS Code fork) |
| Custom Commands | Yes | Limited (based on available info) |
| Debug Code Assistance | Yes | Yes |
| Privacy Mode | Yes | Yes |
| Custom API Key Support | Yes | Yes |
| LLM Support | Multiple (Claude 3.5, Claude Opus, GPT-4o, Gemini, etc.) | Limited (GPT-3.5, GPT-4) |
| Web Interface | Yes | No |
| Multi-repo Context | Yes | Limited (based on available info) |
| Ollama Support | Yes (experimental) | No (based on available info) |
| Pricing (Pro tier) | $9/user/month | $20/user/month |