Nope, it's not increasing, still free.
Hey, I just checked, and it’s counted as a free request for me. Can you check your recent usage events?
My mistake! I think I am using Sonnet for the command line or other stuff. I use 4.1 now for all requests, and it is not deducting from my points.
I like that it asks and gives you a plan; other models modify things even when you ask them not to. I think 4.1 is a nice model. I have been using it and I think it's powerful, and it plans really well what it needs to do. There's no perfect model, but most new models are better than the previous ones, so little by little we will have even more powerful options.
I'm honestly impressed by the way it follows rules, like a puppy on a leash. It's like having a super-intelligent being that does exactly what you ask.
I keep complaining that Cursor limits the context window to 60,000 tokens, but yesterday, after a long conversation, I checked the token count of that conversation.
I could see there were 95 thousand tokens in the conversation. So my question is: can it reach 1 million tokens, yes or no?
Or will it be like the other models, with only a rolling window of 60 thousand tokens once it reaches maximum capacity? If it's the former, congratulations: you've solved a problem people have been asking to be solved for ages.
GPT-4.1 currently has a context window of 128k tokens. We’re considering adding a Max mode as well!
Yes, but Gemini 2.5 has a million-token context window, and it doesn't activate in Cursor unless I use Max. Why does that happen?
4.1 was working really well. It follows instructions far better than Claude, and unlike Claude it wasn't randomly forgetting my rules files midway through a request.
It suddenly stopped being able to edit any code about an hour ago though. It now only gives me suggestions on what to do and each time I ask it to make an edit itself it says that it is unable to do so because of technical limitations.
What changed? Or did I hit a usage limit that I was unaware of?
I've got the same issue, so you're not alone. Sometimes a model modifies something and Cursor needs to "patch" it on their side; for now, open the related files and click Apply. In my experience GPT-4.1 is great while it's free, and I'm using it for details or when I don't need high intelligence. Gemini is on another level, and Claude is perfect for tooling but has a strict context limit. They really need to push the limits with Claude 4.
I’ve found that 4.1 picks up on language subtleties that indicate you want it to explain an answer vs. implement it
for example, if you prompt “I need to move @Component into a modal” it will often explain how, where a model like claude-3.5 will simply do it
on the other hand if you prompt “move @Component into a modal” it will do the changes
personally, I find direct instructions always work best
I have spent the last week using GPT-4.1 to debug months of Sonnet-3.7 code and to add enhancements. 4.1 is now by far my favorite model. I love how it behaves: cautious, adherent to all my rules, and excellent as a resource for outside knowledge. Looking forward to the next upgrade in this model series.