OpenAI's GPT-5.2 has been released, so why is its context window still limited to 272k tokens, given that OpenAI has improved the model's behavior over longer contexts?
I think a reasonable solution would be to simply make the remaining tokens available in MAX mode, given that most OpenAI models (with rare exceptions) show no difference in capabilities when MAX mode is used.
Lack of MRCRv2 results for >272k tokens
I noticed that OpenAI has not published MRCRv2 results for the model at contexts above 272k tokens, but I suspect there is no drastic degradation there. If I'm wrong, please correct me; otherwise, I think the choice of whether or not to use the extended context window could simply be left to the user.

