Given the new 128k, 200k, 500k, etc. context-window options, alongside the default models (which can be faster or better for some use cases but have smaller contexts), it would be good to have an option we can turn on that lets Cursor auto-choose the best model and context length on the fly for every request, rather than making the user choose the context size.
Especially since it's Cursor internally that knows how many tokens each query uses, and it doesn't really surface that info to the user, isn't it better for Cursor to decide when to use long context instead of leaving the user guessing?
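To illustrate the kind of routing I mean, here is a minimal sketch. Everything in it is hypothetical: the model names, the context limits, the headroom factor, and the token estimate are all made up for illustration, not Cursor's actual internals.

```python
# Hypothetical sketch of auto-routing a request to the smallest
# context window that fits. Model names and limits are illustrative.

def count_tokens(text: str) -> int:
    # Rough approximation: ~4 characters per token.
    return max(1, len(text) // 4)

def pick_model(prompt_tokens: int) -> str:
    # Try tiers from smallest (fastest/cheapest) to largest,
    # keeping 20% headroom for the model's response.
    tiers = [
        (128_000, "default-128k"),
        (200_000, "long-context-200k"),
        (500_000, "long-context-500k"),
    ]
    for limit, model in tiers:
        if prompt_tokens < limit * 0.8:
            return model
    raise ValueError("Request exceeds every available context window")

print(pick_model(count_tokens("x" * 40_000)))  # small prompt -> default-128k
```

The point is just that this decision needs the token count as input, which Cursor already has and the user doesn't.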