How can I compare multiple models on the same task and have them review each other’s code automatically?

Hi everyone,

I’m experimenting with giving the same coding task to several AI models, and I’d like some advice on how to make this workflow smarter and more automatic.

What I’m doing now (manually):

  1. I give the same task to 3 different models.

  2. Each model writes its own solution (its own code).

  3. Then I copy the code from one model and ask the other models to review it and tell me honestly:

    • What is good or bad about this implementation?

    • Which solution is better and why?

    • How would they improve the code?

I always tell them this is not a competition – we’re all working on the same project and I just want the best result and to learn.
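To make it concrete, here is roughly the loop I’m doing by hand, written as a sketch. The model names are made up, and `call_model` is just a placeholder where a real script would call each provider’s API — I’m only showing the fan-out and cross-review structure, not a working integration:

```python
# Sketch of the manual loop: fan one task out to several models,
# then have each model review every *other* model's solution.
# Model names are hypothetical; call_model is a stub standing in
# for a real API call.

MODELS = ["model-a", "model-b", "model-c"]  # hypothetical names

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real API call; returns a canned response."""
    return f"[{model}] response to: {prompt[:40]}"

def fan_out(task: str) -> dict[str, str]:
    """Steps 1-2: give the same task to every model, collect each solution."""
    return {m: call_model(m, task) for m in MODELS}

def cross_review(solutions: dict[str, str]) -> dict[tuple[str, str], str]:
    """Step 3: each model honestly reviews the other models' code."""
    reviews = {}
    for reviewer in solutions:
        for author, code in solutions.items():
            if author == reviewer:
                continue  # skip self-review
            prompt = (
                "This is not a competition; we all want the best result.\n"
                f"Review this solution honestly:\n{code}\n"
                "What is good or bad? How would you improve it?"
            )
            reviews[(reviewer, author)] = call_model(reviewer, prompt)
    return reviews

solutions = fan_out("Implement task X")
reviews = cross_review(solutions)
print(len(reviews))  # 3 models each reviewing 2 others -> 6 reviews
```

This is exactly what I’d love to stop doing by copy-paste: run `fan_out` once, then `cross_review` over the results.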

What I want to know:

  1. Can this process be automated in Cursor?

    For example:

    • Run the same prompt on multiple models.

    • Collect all their code.

    • Then automatically ask each model to review the other models’ solutions (code review + comparison), without me copying and pasting everything by hand.

  2. Is there any feature, extension, or API that helps with this kind of multi-model workflow?

  3. Where can I learn more about advanced Cursor workflows or prompt engineering that can help me become a better, smarter programmer using these tools?
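For the comparison part of question 1 specifically, the way I picture it is bundling all the collected solutions into one “judge” prompt so a single model can say which is better and why. A minimal sketch — the model names and prompt wording here are just illustrative assumptions, not anything Cursor provides:

```python
# Sketch of the comparison step: assemble every model's solution
# into a single prompt for one reviewing model to judge.
# Names and wording are illustrative assumptions.

def build_comparison_prompt(solutions: dict[str, str]) -> str:
    parts = [
        "We're all working on the same project; this is not a competition.",
        "Compare the solutions below and tell me which is better and why:",
    ]
    for name, code in sorted(solutions.items()):
        parts.append(f"--- solution from {name} ---\n{code}")
    return "\n\n".join(parts)

prompt = build_comparison_prompt({
    "model-a": "def add(a, b): return a + b",
    "model-b": "add = lambda a, b: a + b",
})
print(prompt)
```

The returned string would then be sent to each model in turn, so every model gets to weigh in on the full set of solutions.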

Thanks a lot for any guidance!

Sam