o1-preview pricing is expensive

What is the rationale for not giving us a couple of free inferences per day on the paid tier, similar to o1-mini? o1-mini comes with 10 free calls per day, and o1-preview is 5x more expensive, so the equivalent allowance would be 10/5 = 2 free calls per day.

Currently you would need roughly 26k input tokens per call to justify the $0.40 per call (ignoring output tokens).
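For reference, here is the arithmetic behind those figures as a small Python sketch. The per-million-token input rates are my assumption ($15/M for o1-preview, $3/M for o1-mini), not an official quote:

```python
# Back-of-the-envelope numbers behind the post (input pricing assumed, not official):
O1_PREVIEW_PER_TOKEN = 15 / 1_000_000   # assumed $ per input token for o1-preview
O1_MINI_PER_TOKEN = 3 / 1_000_000       # assumed $ per input token for o1-mini

# If o1-mini gets 10 free calls/day, the cost-equivalent number for o1-preview:
free_mini_calls = 10
ratio = O1_PREVIEW_PER_TOKEN / O1_MINI_PER_TOKEN      # 5x more expensive
print(free_mini_calls / ratio)                        # -> 2.0 free calls per day

# Input tokens needed per call before it costs $0.40 (output tokens ignored):
per_call_budget = 0.40
print(per_call_budget / O1_PREVIEW_PER_TOKEN)         # -> ~26,667 tokens, i.e. ~26k
```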

The cheapest way to access this model is with GitHub Copilot: you get 10 calls per day. For that alone, it might be worth paying the $10/month. You can install Copilot in Cursor.
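To put that in per-call terms, a rough comparison (assuming the 10 calls/day limit is actually used every day of a 30-day month):

```python
# Rough per-call cost of o1-preview access via GitHub Copilot, given the limits above.
monthly_fee = 10.00        # $/month for Copilot
calls_per_day = 10         # stated daily o1-preview limit
days = 30                  # assumed billing-month length

per_call = monthly_fee / (calls_per_day * days)
print(f"${per_call:.3f} per call")   # -> ~$0.033, far below the $0.40 figure above
```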

Nice one, thanks. Does it integrate into the Cursor chat too?

No, it only works in the latest version of VS Code.