Request for Access to GPT-4o Long Output Model

Hello everyone,

I recently came across an experimental version of GPT-4o from OpenAI called GPT-4o Long Output. This model can generate up to 64K output tokens per request, which is significantly more than the standard models allow. I believe this could open up exciting new use cases for longer and more detailed completions.

Details about GPT-4o Long Output:

  • Model Name: gpt-4o-64k-output-alpha
  • Token Limit: Up to 64K output tokens per request
  • Pricing:
    • Input Usage: $6.00 per 1M tokens
    • Output Usage: $18.00 per 1M tokens

Link: https://openai.com/gpt-4o-long-output/
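
If access is granted, usage would presumably look like an ordinary Chat Completions call with the alpha model name and a large max_tokens value. The sketch below is an assumption based on the standard openai Python SDK; the model name comes from the announcement above, and the actual request shape for the alpha may differ.

```python
# Sketch only: assumes gpt-4o-64k-output-alpha is exposed through the
# standard Chat Completions endpoint once access is granted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-64k-output-alpha",  # model name from the announcement above
    max_tokens=64000,                 # assumed: up to 64K output tokens per request
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a long, detailed report on ..."},
    ],
)

print(response.choices[0].message.content)
print(response.usage.completion_tokens, "output tokens generated")
```

For budgeting: at the listed output price of $18.00 per 1M tokens, a full 64K-token completion would cost roughly 64,000 × $18 / 1,000,000 ≈ $1.15, plus input usage.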


Just adding GPT-4o pricing for comparison in case it saves anyone some time; I found myself researching this while trying to educate myself on the topic.

I couldn’t find specific official docs on the maximum output tokens; the models page just lists a 128,000-token ‘context window’. Various unofficial, anecdotal posts I have come across suggest the output limit is 4,096.

Details about GPT-4o:

  • Model Name: gpt-4o
  • Token Limit: ?? output tokens per request
  • Pricing:
    • Input Usage: $5.00 per 1M tokens
    • Output Usage: $15.00 per 1M tokens

Link: https://platform.openai.com/docs/models/gpt-4o
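
One quick, unofficial way to check the output cap empirically is to request an obviously oversized max_tokens and read the API's rejection message. The snippet below is a sketch using the standard openai Python SDK; the exact error wording is not guaranteed, and the 4,096 figure above remains anecdotal.

```python
# Sketch: probe the per-request output cap for gpt-4o by requesting an
# oversized max_tokens and inspecting the 400 error the API returns.
from openai import OpenAI, BadRequestError

client = OpenAI()

try:
    client.chat.completions.create(
        model="gpt-4o",
        max_tokens=128000,  # deliberately larger than any plausible output cap
        messages=[{"role": "user", "content": "Hi"}],
    )
except BadRequestError as exc:
    # The error text typically states the maximum supported completion length.
    print(exc)
```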

OpenAI Pricing Page:

https://openai.com/api/pricing
