Why is gpt-4o considered a "fast" request?

Isn't gpt-4o now cheaper than the models used for the gpt-4 "slow" requests?