Lately, I’ve been experimenting with an interesting approach that I thought might be worth sharing to see if anyone else has tried something similar or noticed any differences.
Here’s what I’ve been doing:
I start by prompting DeepSeek R1 and letting it complete its full reasoning process. Once it finishes, I stop the generation, copy its entire output, return to the original prompt, paste the `<think>…</think>` block in right after the prompt, and switch to a Gemini model. This way, the Gemini model gets the entire, longer context (including DeepSeek R1's detailed reasoning plus whatever context I originally gave it), letting it work from a richer, more comprehensive input.
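If you're driving both models through their APIs rather than a chat UI, the hand-off above can be sketched as a small helper. This is just a minimal sketch of the prompt-stitching step; `splice_reasoning` is a hypothetical name, and the prompt/trace strings here are made-up examples, not real R1 output. Sending the combined string to Gemini would then be a normal API call.

```python
def splice_reasoning(original_prompt: str, reasoning: str) -> str:
    """Append one model's reasoning trace to the original prompt,
    wrapped in <think> tags, so a second model sees both."""
    return f"{original_prompt}\n\n<think>\n{reasoning.strip()}\n</think>\n"


# Hypothetical example inputs (a real trace would be much longer).
prompt = "Prove that the sum of two even integers is even."
trace = "Let a = 2m and b = 2n. Then a + b = 2(m + n), which is even."

combined = splice_reasoning(prompt, trace)
print(combined)
```

From there, `combined` becomes the user message you send to the Gemini model, so it continues from the finished reasoning instead of starting cold.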
Has anyone else experimented with this?