AIs critique each other

Would love to have DeepSeek, ChatGPT, and Claude argue over and critique each other's generated code, with a human picking the best answer if the AIs don't agree.

I’ve done a bunch of tests with o1 and r1 picking each other’s output apart, and I really like the outcomes. An extremely useful tactic for sure.

I take it Cursor does not allow me to swap between models?

Hey, while you can’t necessarily get them to compete directly, you can switch models within the same conversation.

Hypothetically, you could set up a situational test where you first get Model A to respond, then switch to Model B and say: “The attached conversation history is from a chat between me, the user, and Model A. Evaluate its answer, and correct any errors.”
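If you’d rather automate that round-trip outside Cursor, a minimal sketch of the same critique prompt might look something like this. It assumes OpenAI-compatible endpoints; the model names, the DeepSeek base URL, and the environment variables are placeholders to swap for your own setup, not anything Cursor itself provides:

```python
# Minimal sketch of a two-model critique loop, run outside Cursor.
# Assumes both providers expose OpenAI-compatible APIs; model names
# and env var names are placeholders.
import os
from openai import OpenAI

model_a = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # e.g. o1
model_b = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # e.g. r1 via DeepSeek's API
)

task = "Write a Python function that merges two sorted lists."

# Step 1: Model A produces the first answer.
draft = model_a.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": task}],
).choices[0].message.content

# Step 2: Model B critiques it, mirroring the prompt described above.
critique = model_b.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{
        "role": "user",
        "content": (
            "The attached conversation history is from a chat between me, "
            f"the user, and Model A.\n\nTask: {task}\n\n"
            f"Model A's answer:\n{draft}\n\n"
            "Evaluate its answer, and correct any errors."
        ),
    }],
).choices[0].message.content

# Step 3: the human picks the winner (or feeds the critique back to A).
print("--- Model A draft ---\n", draft)
print("--- Model B critique ---\n", critique)
```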

Might not be exactly what you are hoping for, but food for thought on what is possible inside Cursor.