Stop limiting output tokens for my own API key

Currently Cursor limits output tokens to around 4,000, and that seems to be the cause of duplicated code when applying edits to large files. I understand you might be doing this to cut costs, but can we get the maximum model output when using our own API key?
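For reference, here's a minimal sketch of the kind of cap I mean, using the OpenAI Python SDK directly (this is not Cursor's actual code; the model name and prompt are placeholders). With a hard `max_tokens` cap around 4,000, a long rewrite gets cut off and the reply finishes with reason `"length"`:

```python
# Illustrative only: how an output cap is typically passed to a chat-completions API.
# A cap around 4000 tokens truncates long edits, which matches the behavior above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "user", "content": "Rewrite this large file with the requested edits..."},
    ],
    max_tokens=4096,  # a cap like this limits how much code can come back in one reply
)

print(response.choices[0].finish_reason)       # "length" means the reply was cut off by the cap
print(response.choices[0].message.content)
```

When using our own keys we pay for the output tokens anyway, so raising this cap to the model's maximum shouldn't add cost on Cursor's side.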


I'm wondering if it also cuts output in the long context chat.