If there were 3 pricing strategies, which would you choose?

Strategy 1:
The current pricing model: some models offer max context, while others offer standard context with context compression.

Strategy 2:
All models get full context without compression, and the chat window shows a progress bar so users can monitor context usage and decide when to start a new chat.
However, different models consume different numbers of quick requests depending on their API costs.

This approach is more transparent, though some models are more expensive: a single conversation with certain models may consume multiple quick requests.
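A minimal sketch of how Strategy 2 might work, assuming quick-request consumption scales with a model's relative API cost; the model names, prices, and functions below are hypothetical illustrations, not actual figures or product behavior.

```python
# Hypothetical sketch: quick-request cost scaled by a model's relative API price.
# All model names and prices are made-up examples.

BASE_API_COST = 0.04  # assumed API cost (USD) that maps to one quick request

MODEL_API_COST = {
    "standard-model": 0.04,   # ~1 quick request per conversation turn
    "premium-model": 0.12,    # ~3 quick requests per conversation turn
}

def quick_requests_used(model: str, turns: int) -> int:
    """Estimate quick requests consumed by a conversation with `turns` turns."""
    multiplier = MODEL_API_COST[model] / BASE_API_COST
    return round(turns * multiplier)

def context_usage_fraction(tokens_used: int, context_window: int) -> float:
    """Fraction the progress bar would show: how full the context window is."""
    return min(tokens_used / context_window, 1.0)

print(quick_requests_used("premium-model", turns=5))       # 15
print(f"{context_usage_fraction(96_000, 128_000):.0%}")    # 75%
```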

Strategy 3:
Keep the current pricing model, but change the max-context models to charge a one-time fee instead of the current per-tool-call pricing.
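To make the Strategy 3 trade-off concrete, here is a small sketch comparing per-tool-call pricing with a flat one-time fee; the prices and call counts are assumed for illustration only.

```python
# Hypothetical comparison: per-tool-call pricing vs. a flat one-time fee
# for a max-context session. All numbers are made-up examples.

PER_TOOL_CALL_PRICE = 0.05   # assumed charge per tool call today
ONE_TIME_FEE = 0.40          # assumed flat fee under Strategy 3

def session_cost_per_tool_call(tool_calls: int) -> float:
    """Cost of a session under the current per-tool-call pricing."""
    return tool_calls * PER_TOOL_CALL_PRICE

# Short sessions favor per-tool-call pricing; long, tool-heavy sessions favor the flat fee.
print(f"{session_cost_per_tool_call(3):.2f}")   # 0.15 < 0.40 -> per-call cheaper
print(f"{session_cost_per_tool_call(20):.2f}")  # 1.00 > 0.40 -> flat fee cheaper
```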