Feedback on User Experience with Different Models in Cursor Chat Output

Why does streaming output in Cursor Chat feel so smooth with the GPT-4o model, while the latest Claude model feels very laggy, especially when streaming text? Code output is also choppy. This issue has been reported many times; can it be optimized? GitHub Copilot now also supports the latest Claude model, and its code and text output isn't laggy. Could the developers please look into this user-experience issue? Thank you very much!

Is there no response from the developers?

Has anyone else had the same experience?