My experience with slow requests over the past 20 days has been a real struggle. Since no one here was answering, I ended up clarifying this with the AI itself.
I will leave it here with a screenshot so you can read it yourself.
If not, can you elaborate on the difference between fast request output and slow request output? FYI, the difference is huge. So what is your explanation for that?
Hey, slow requests are always executed with premium models, just like fast requests! There should be no difference in performance or output between fast and slow requests.
For example, here’s how the Claude model responds to me, even though I still have fast requests available: