The idea that AI is “just another tool” and that the real difference is working together — author and AI as a pair, problem-solving, with both sides making mistakes and fixing them — is exactly the kind of thing that makes tools like this useful.
Try giving Opus-4.5 some better tools, like the AuraFriday suite, and watch it totally blow your mind. It's less "pair-programming now!" and more "watch it do your entire job, 10x better and 10x faster."
Hey, that’s an interesting take on how AI collaboration works.
If you’re interested in the practical side, like how to structure this “collaboration” in a real way, you can check out this thread: Invest in Cursor Rules: A Four-Level Maturity Framework
It breaks down how rules and context help the AI understand your working style better.
Absolute nonsense. Anyone who has a monorepo/brownfield project knows that without guidance, proper documentation, and a breakdown of tasks, there are NO AI models today that can do the work hands-off, unless it's bug finding and reporting.
This must be marketing for that specific product; it's most definitely not reality.
I think you have not used Opus-4.5-thinking, or perhaps not seriously prompted it. Besides this being my REPEATED experience this month, many other people post the same thing, and so does Anthropic - this isn't my opinion. It is established fact, and widespread group experience now.
There's a simple rule in life - when someone steers the conversation away from the topic and starts attacking something else (the person posting, or their motivation, or the like), that's a 99% certain indicator that they already know deep down that what they're saying isn't, or might not be, true, and they're clutching at straws to self-justify their own uncertainty. Psychology 101. "This must be marketing…" - on a free and open-source set of tools? That tripped my "he doesn't really know" alarm bells!! (I am the author of those tools, FYI.)
Before you smash reply in fury - here’s what you need to try:
- Pick something hard.
- Tell opus-4.5-thinking enough about what you want so it can totally solve it.
- Give it the tools to test that it has totally solved it.
- Submit.
Step 2 should take 30 minutes or more - if it takes less, you're not prompting it properly or thoroughly enough. You need to treat it exactly like a human expert contractor: tell it everything it needs to know.
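To make the steps above concrete, here's a minimal sketch of what a "contractor-style" brief might look like as a request payload. This is illustrative only: the model id, the `run_tests` tool schema, and the `build_request` helper are my own assumptions, not an official API; real usage would send the payload through the provider's SDK.

```python
# Hypothetical sketch: packaging a detailed task brief plus a
# self-verification tool for a coding model. Names are illustrative.

def build_request(task_brief: str, acceptance_tests: list[str]) -> dict:
    """Assemble a request: a thorough brief (step 2) plus a tool the
    model can call to verify its own work (step 3)."""
    # A tool definition lets the model run the test suite itself
    # instead of guessing whether its change worked.
    run_tests_tool = {
        "name": "run_tests",
        "description": "Run the project's test suite and return pass/fail output.",
        "input_schema": {
            "type": "object",
            "properties": {"test_command": {"type": "string"}},
            "required": ["test_command"],
        },
    }
    # Fold explicit acceptance criteria into the prompt, the same way
    # you'd hand a spec to a human contractor.
    prompt = (
        task_brief
        + "\n\nAcceptance criteria (all must pass before you finish):\n"
        + "\n".join(f"- {t}" for t in acceptance_tests)
    )
    return {
        "model": "claude-opus-4-5",  # illustrative model id
        "max_tokens": 64000,
        "tools": [run_tests_tool],
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request(
    "Refactor the billing module to remove the race condition in invoice "
    "numbering. Context: Postgres 15, sequence-based IDs, three worker processes.",
    ["pytest tests/billing -q passes",
     "no duplicate invoice numbers under the load test"],
)
print(payload["tools"][0]["name"])  # → run_tests
```

The point of the sketch is the shape, not the API: one message carrying everything the model needs to know, plus a mechanical way for it to check its own answer.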
P.S. I've used 1000M (1B) opus-4.5-thinking tokens in the last 30 days - it has literally done what you said it cannot, on several different monster projects.
The reason is that current-day LLMs do not have the reasoning capability required, so they will make mistakes you have to review. You cannot leave them unattended or even rely on test loops. They will never solve those problems.
I know because I'm dealing with a complex brownfield project. I've tried for 6 weeks to solve a specific issue, and I've used MANY models, many testing tricks, etc.
It cannot do it.
Perhaps the very first version of Gemini 2.5 Pro WAS able to do it.
You apply your own experience and treat it as a general template for ALL projects on the globe, then invoke some kind of consensus to claim that what you are saying must be true.
Perhaps you should ask the Windows team how their updates are going with that AI assisted coding…
Not my experience, on at least half a dozen unrelated monster projects in the last few weeks.
If your experience is different to mine, you might have a problem with your prompts and the resources you're giving to the model.
I have over 6,000 hours of experience so far. It's difficult to map out the entirety of my work environments, but like I said, I'm not the only one seeing production-ready solutions.
Maybe tell it to start over without whatever legacy mess it’s trying to undo? These are context-sensitive machines - if you force it to look at garbage, it’s going to sway towards the garbage district in its operations.
You have no idea what I'm doing. Again, you are applying your worldview to everyone.
As I said, Microsoft's sudden quality drop with regard to their updates is no coincidence.
You are elbows deep in your belief system, but the reality is different, and it is demonstrably different.
Great that it works for your use cases; however, there are still firm realities, and AI is not good enough to take over in those situations.
LOL. Are you reading my words? You've literally got that backwards.
I'm not discussing any "belief" system; I'm telling you what I'm actually experiencing. If Microsoft has problems (and how do you even know?), they probably stem from older, non-Anthropic models - don't project other people's alleged past failures onto current realities.
If you want to believe that agents aren't as good as they really are, go right ahead - but fighting against reality in public isn't helping you do your job better. Know that other people are having more success than you are, and try to work out why that is: screaming "no no no" helps nobody. If you want to be constructive and tell us what's not working for you, then people who ARE having success can point out where you're going wrong.
Totally agree. I’ve always envisioned flipping the dynamic, where like the LLM watches me code and prompts me instead.
Like a real programming session: “Hey, this loop might be redundant,” “consider applying X pattern here,” “your SQL query won’t return what you think it does.”
You really have a lot of growing up to do. I wish you good luck in your endeavors.
I’m trying to help you. You’re being rude. Who needs to grow up?
Hey folks,
Let’s dial it back a notch.
People are going to have different experiences with AI tooling depending on their codebase, workflow, language, and a dozen other factors.
If something’s working great for you, share what you’re doing so others can try it. That’s valuable and relevant to this forum. The meta-argument about who’s more right is not.
Let’s keep it productive.
I have a better understanding of coding now, even though I don't code. I know better what to ask of AI. My experience comes from working with the "Playmaker" asset, but to be honest, it's only been coming together lately.
In so many ways, I do have a lot of growing up to do. Even at 60 years old.
I have always had the dream of creating a new game, but the market is full of games. I got inspired by a Unity product and created a totally new game - yes, it's based on classic designs, but the concept and play are new. This represents my first game ever published with the use of AI. I found that using Cursor helped me in so many ways during the development of this game, and I wanted to share that experience.