Feature request for product/service
AI Models
Describe the request
According to benchmarks, GPT 5.5 performs significantly better than Opus 4.7 on long-context tasks.
However, while Opus's context window has been increased to 300K, GPT 5.5 remains at 278K tokens.
It would be great to increase it to at least 300K, or even higher, since GPT 5.5 still performs reasonably well up to 500K according to benchmarks ("only" a 15-35% drop from 512K to 1M tokens, depending on the task).
Operating System (if it applies)
Windows 10/11