Clarifying June 16 Pro Changes

Thanks for the updates! Allow me to ask some questions.

For example, Pro includes over $20 of agent model inference at API prices per month

What does this mean? Could it be written in simpler wording?
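To make the question concrete, here is how I currently read that sentence, sketched with made-up numbers (the per-token prices and the per-request token counts are my own assumptions, not actual Cursor or provider pricing):

```python
# Hypothetical illustration of how I read "over $20 of agent model inference
# at API prices per month". All prices and token counts below are made up.
price_per_mtok_input = 3.00    # assumed $ per 1M input tokens
price_per_mtok_output = 15.00  # assumed $ per 1M output tokens
monthly_budget_usd = 20.00

# Suppose a typical agent request uses ~40k input and ~2k output tokens.
request_cost = (40_000 / 1_000_000) * price_per_mtok_input \
             + (2_000 / 1_000_000) * price_per_mtok_output
print(f"~${request_cost:.2f} per request, "
      f"~{monthly_budget_usd / request_cost:.0f} requests from the $20 budget")
```

Is this roughly the intended meaning, i.e. a monthly dollar budget that agent requests draw down at API rates?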

We lifted all limits on the number of tool calls per agent request.

So there’s no longer an internal limit of 200 tool calls?

All individual plans include: Access to Background Agents

(… a few sections in between)

Background Agents are charged at API pricing and aren’t included in any plan for the time being.

What does this mean? If it only means “All individual plans include the ability to PAY extra for Background Agents”, could that be stated at the top?

Max Mode is included in all plans. Turning it on will pay down your agent limits slightly faster, since the API cost of your requests will go up a bit. The difference should only be noticeable for 1M token context window models.

What does this mean, given that only Gemini and GPT 4.1 have access to a 1M context window?

Does it mean that for other models, like Claude 4 Sonnet, there should be no noticeable difference when using Max Mode?

Does it mean there is no noticeable difference between two requests (one Max, one non-Max) of the same prompt length? If so, why is it different for Gemini and GPT 4.1?

Or does it mean there is no noticeable difference between two requests with different prompt lengths, unless they get close to 1M? (e.g. one at 50k context length and one at 200k context length)
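To make those last two questions concrete, here is the arithmetic I have in mind, assuming cost scales roughly linearly with context tokens and using a hypothetical per-token price (not an actual provider rate):

```python
# Sketch of the arithmetic behind my Max Mode question. I'm assuming cost is
# roughly linear in context tokens; the per-token price below is hypothetical.
price_per_mtok = 3.00  # assumed $ per 1M input tokens

def request_cost(context_tokens: int) -> float:
    """Approximate input-side cost of a single request, in dollars."""
    return context_tokens / 1_000_000 * price_per_mtok

# Same prompt length, Max vs non-Max: identical cost under this model.
print(f"50k vs 50k:   ${request_cost(50_000):.2f} vs ${request_cost(50_000):.2f}")

# Different prompt lengths, both far below 1M: small absolute difference.
print(f"50k vs 200k:  ${request_cost(50_000):.2f} vs ${request_cost(200_000):.2f}")

# Near the 1M window the absolute cost per request is much larger, which is
# where I'd expect Max Mode on a 1M-context model to become noticeable.
print(f"900k context: ${request_cost(900_000):.2f}")
```

Is this the right mental model, or does Max Mode change the cost in some other way beyond the amount of context sent?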

Thanks a lot!
