It’s obvious by now that we need bigger output windows, or at least an option to hide tokens and not count them as output. Models like R1 get stuck mid-reply; this should be an easy update.
R1 is actually timing out. We don’t limit how many tokens it can output, but we are hoping to increase the timeout very soon!