As an avid user of cursor.ai and someone who closely follows developments in AI and natural language processing, I believe adding GPT-4o would significantly enhance the capabilities and user experience of cursor’s service.
GPT-4o represents a major leap forward in language model performance. Its advanced architecture and training on a vast corpus of data enable it to engage in open-ended conversation, answer follow-up questions, and assist with an impressive range of writing and analytical tasks. Cursor.ai has always been at the forefront of providing cutting-edge AI tools, and I think GPT-4o would be a natural and exciting addition to your offerings.
Some key benefits of GPT-4o that I believe would bring great value to cursor.ai users include:
Improved language understanding and generation across a wide range of domains
Enhanced ability to maintain coherence and context over long conversations and complex, multi-step tasks
More nuanced and insightful responses that draw upon the vast knowledge the model has acquired
Stronger performance on analytical and reasoning tasks like strategic planning, code explanation, and mathematical problem-solving
I know that incorporating a powerful new model like GPT-4o is no small undertaking, but I’m confident it would pay dividends in terms of increased user engagement, expanded use cases, and positioning cursor.ai as the go-to platform for the most advanced publicly available language model.
Please consider this request to introduce GPT-4o, and let me and the rest of the cursor.ai community know if there are any plans to add this impressive model to the platform. I’m excited to see cursor.ai continue to evolve and push the boundaries of what’s possible with language AI.
Opus is much slower and very expensive! It's encouraging to see the high scores GPT-4o posted on coding benchmarks, so it would be very interesting to try it.
Via the API, I have seen GPT-4o maintain coherence over long contexts, up to around 50k tokens; a quick test along the lines of the sketch below is enough to reproduce this.
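For anyone who wants to check this themselves, here is a minimal sketch (not Cursor's integration, just a plain API call) using the standard openai Python client; `long_context.txt` is a hypothetical file standing in for a ~50k-token document.

```python
# Minimal sketch: probing long-context coherence with gpt-4o via the OpenAI API.
# "long_context.txt" is a hypothetical stand-in for a ~50k-token document.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("long_context.txt", "r", encoding="utf-8") as f:
    long_document = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You answer questions about the provided document."},
        {"role": "user", "content": long_document},
        {"role": "user", "content": "Summarize the key decisions made in the final section."},
    ],
)

# If the answer correctly references material from late in the document,
# the model is holding the long context together.
print(response.choices[0].message.content)
```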
Sad but almost expected somehow… Given its smaller size, there are probably trade-offs in performance when handling challenging, “unseen” cases where the model needs to generalize.
I think the most interesting aspects of this model are not its performance but its price, speed, and real-time multimodal capabilities.