I often use chat to plan mechanisms and architecture for my system. I'd like to run the same question through different models and compare the results to find the best solution among them. Currently, I have to create a new chat for every model and repeat the same question.
Cursor isn't really built for doing this at any kind of scale, so there's no great way to do it other than manually!
Consider using OpenRouter for this workflow: Add credits to your account, test your prompt across multiple AI models, then implement the most effective response in Cursor. This lets you compare outputs and only bring the optimal solution into your environment.
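If you go that route, a small script can save the copy-pasting. Here's a rough sketch that sends one prompt to a few models through OpenRouter's OpenAI-compatible endpoint and prints the replies side by side; the model IDs, prompt, and the OPENROUTER_API_KEY environment variable are just placeholders to adapt to your own setup.

```python
# Rough sketch: send the same prompt to several models via OpenRouter's
# OpenAI-compatible endpoint and print the answers for comparison.
# Model IDs and the env var name below are placeholders -- check
# OpenRouter's model list and your own key configuration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

PROMPT = "Propose an architecture for a job-scheduling service."
MODELS = [
    "openai/gpt-4o",               # example model IDs, adjust to taste
    "anthropic/claude-3.5-sonnet",
    "google/gemini-pro-1.5",
]

for model in MODELS:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"===== {model} =====")
    print(resp.choices[0].message.content)
```

From there you can eyeball the outputs, pick the strongest plan, and paste only that one into your Cursor chat.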
Thanks for the suggestion, but I'd also need to manually feed the current codebase context into the chat, so sticking with Cursor chat is more convenient, tbh.