Since the model is so much cheaper, could we have it available in the long context mode? That would be awesome
yea, please add longer context for input and also for output. It writes like half a function and then cuts out midway on something normal; needs extended output
yes
Seems like the thing to do. Please make it available to those who provide their own API key too
What's the current context length? I feel it's too short
Since gpt4o is cheaper than sonnet, and sonnet is provided with 10x access each day for pro and business,
It has a hard stop at 10k
Well, I hope the embedded Cursor usage is top tier
The 4 Turbo and 4o context length is 20k in chat for now (might change with 4o's speed)!
What about inline edits (CTRL+K)? Also, how much of that 20k is input vs. output? Thanks
So you updated it from 10k to 20k? Huge thanks! Maybe add a context % icon/bar somewhere so the user knows when the context window is filled?
+1 to the question about inline edits
Care to answer?