Often it helps to write the plan into a .md file; AI can do that, and you can ask it for implementation steps. But otherwise I also agree with @mikes-bowden.
No major issues, minor hiccups only.
Honestly, Cursor doesn't really care what the community posts here anyway. You just need to look around the forum: the majority of users are dissatisfied with the recent developments.
In 0.47, there's now the addition that you have to pay more credits for an allegedly larger context window. For Sonnet 3.7 thinking, you can end up with 3 credits per request. Currently, you don't know what to expect when working with Cursor - you might be lucky and get good output, or you might be unlucky. But what you can be sure of is that you'll ultimately have significantly fewer than 500 fast requests per month.
I have to disagree. As I said, I also had issues, perhaps not all as severe as others', but issues nonetheless. Yes, when 3.7 was introduced, Cursor had to figure out what the model does, how, and why, and then adjust their internal handling, which led to v0.47.
While I'm also a customer and would sometimes like faster/better support, I have come to see in the forum that the Cursor team responds to issues where there is some info they can use to reproduce the issue or identify the cause. You can imagine that of tens of thousands of users, only a few report issues, and almost all of those who have no issues don't post their thanks in the forum.
The large context window isn't new, I think; it existed in 0.46. You would really need the large context only if your project has issues like overly large files, where a single code block (a function, etc.) is so large that it doesn't fit in 500 lines - which is not ideal even for human programmers. (I haven't used large context yet.)
In 0.47 the handling for 3.7 thinking is much better and more consistent. It does much better planning than other models and doesn't cause issues later. However, that only works with the right prompt: you should add that it must think in detail about the thing you want it to do, which is advice that the makers of 3.7 thinking (Anthropic) list on their documentation pages. With this, it's absolutely worth double the cost, as it saves more prompts later.
But 3.5 regular works just fine as well and costs 1 request, which is great, as it still beats other non-Claude models like GPT-4o.
For small checks or verification steps, I use a smaller model like -mini, which costs 1/3 of a request and often saves cost as well.
It's important to use the right processes and tools for the right purposes. Yes, knowing how to prompt is a must, as vibe coding can only get you a short distance before everything becomes a mess.
I'm interested; how do I do those things?