Does the experimental "long context" option make the coding results better?

Did you check this option?
I suppose Cursor uses a smaller context window than the regular Claude chat or API requests, so this option should probably improve the coding experience.
What do you think? What is your experience?

It generally works better for me than Normal Chat for larger files, or when I attach many files as context, and I don't have to worry about hitting the limit. For simple tasks with a small context, though, I suppose it doesn't matter.

I wish it were possible to switch to Long Context Mode whenever the limit is reached in Normal Chat; that would save some usage-based requests. As it is, despite mostly using Long Context, I usually just overpay around $5 a month for additional requests (when I use more than 10 Long Context requests per day).

If you're using your own API key, then I suppose it makes sense to always use Long Context Chat: you'll spend the same number of tokens until you reach the limit, but if you do reach it, nothing will be excluded from the context (until you hit the LLM's own limit).