This week I tried several other IDEs, just to check if I might be missing something.
Of course, every IDE has its pros and cons: some have features I wish Cursor would copy, and pricing models vary a lot.
But one thing stood out consistently across the board:
Cursor is simply faster.
I’m not talking about just one model, whether it’s from Anthropic, OpenAI, or others.
The exact same action, the same code, the same request:
- In Cursor → runs fast.
- In other IDEs → noticeably slower.
Some are twice as slow, and some… much, much slower.
There’s even one IDE that “wins” the race for slowness by a huge margin.
I won’t name names, but let’s just say it belongs to the biggest and wealthiest company, the one that actually hosts the models themselves, and therefore should have been the fastest of all.
That makes it even more surprising.
What really fascinated me was this realization:
I used to think GPT-5 was slow.
But when I tried it in other IDEs, I suddenly discovered that it’s actually relatively fast; it was the platform making the difference.
If someone got different results, and I just tested it wrong, I’d really like to know.
The part I don’t really understand is how this is even possible.
They’re all connected to the same API, and I don’t believe the companies are giving Cursor faster access.
Or are they?
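One way to check would be to time the same request outside any IDE and compare. A minimal sketch of such a measurement harness is below; the function names and the simulated stream are my own for illustration, and in a real test you would replace `fake_stream` with the token stream from each provider's streaming API client.

```python
import time
from typing import Iterable, Tuple

def time_to_first_token(stream: Iterable[str]) -> Tuple[float, float]:
    """Consume a token stream and measure its latency.

    Returns (seconds until the first token arrives,
             seconds until the stream is exhausted).
    """
    start = time.perf_counter()
    first = None
    for _ in stream:
        if first is None:
            first = time.perf_counter() - start
    if first is None:
        raise ValueError("stream produced no tokens")
    total = time.perf_counter() - start
    return first, total

# Simulated stream standing in for a model's streaming response;
# each token takes `delay` seconds to arrive.
def fake_stream(delay: float, n_tokens: int):
    for _ in range(n_tokens):
        time.sleep(delay)
        yield "tok"

ttft, total = time_to_first_token(fake_stream(0.01, 5))
```

Running the same prompt through this harness against the raw API, and then timing the identical action inside each IDE, would separate model latency from whatever the platform adds on top.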