Is anyone seeing a distinct change in the r1 model vs a couple of days ago?
The reasoning aspect of the model has, for lack of a better word, become drastically dumber, and the results in turn are much worse.
Before it would reason with itself for a while before either timing out or implementing the changes.
Now it seems it's acting closer to the v3 model. I was having excellent success with r1, but as of yesterday and today it's visibly worse.
Anyone else experiencing or seeing a difference?
I’ve noticed this, especially since yesterday.
I'm not sure if it's because they changed API providers, something changed in the context handling, or something else.
In parallel, I'm trying alternatives by calling r1 directly through APIs from other providers outside of Cursor, and it responds better there. I really don't want to abandon Cursor, though, so I hope this gets fixed quickly.
As an additional note: integrating r1 into agent mode and architect/polyglot mode would also be important, since other alternatives have already implemented it. It would be good to have some kind of public roadmap, as well as more specific release notes about what changes in each version they ship daily.