tried to write an addon for something.
tried c3.7t several times. no luck.
G2.5 Pro could do it in 2-3 trials.
great.
ps: i am a 0 coder.
i only know old-school procedural programming like BASIC, C, etc.
I'm an average programmer.
When using it in existing projects, I often feel that Claude has superior contextual understanding and provides higher quality code suggestions.
This might be because Gemini's optimization on the Cursor side hasn't been completed yet.
However, since there were problems Claude couldn't solve but Gemini could, once Gemini's optimization is complete, I might switch to it.
think of a class of students taking an exam with 100 MCQs.
the best student knows 90% of the questions;
sometimes the 2nd best knows 85%, and the two sets may not 100% overlap.
i.e. some MCQs both get right, some MCQs only one of them gets right.
then why don't we use both? try one, and if it fails, use the other.
better than relying on only one.
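The try-one-then-the-other idea is just a fallback chain. A minimal sketch, assuming each model is exposed as a plain callable and you supply your own acceptance check (both are placeholders here, not any real provider API):

```python
from typing import Callable, Optional

def ask_with_fallback(
    prompt: str,
    models: list[Callable[[str], str]],
    is_acceptable: Callable[[str], bool],
) -> Optional[str]:
    """Try each model in order; return the first acceptable answer."""
    for model in models:
        answer = model(prompt)
        if is_acceptable(answer):
            return answer
    return None  # every model failed the check

# toy stand-ins for two LLMs (a real version would call an API)
model_a = lambda p: "I don't know"
model_b = lambda p: "def add(a, b): return a + b"

result = ask_with_fallback(
    "write an add function",
    [model_a, model_b],
    is_acceptable=lambda ans: "def " in ans,
)
```

In practice the acceptance check is the hard part — for code you might run tests or a linter on the answer before accepting it.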
your analogy actually gives me an idea:
If we're a whale who can burn tons of money, why not use all the flagship models and pick the best answer for each prompt? ofc we will burn a lot of resources, but quality is namba wan.
I completely agree with your opinion.
However, for everyday use, it’s more convenient to have just one superior model.
Moreover, if MoA becomes available on Cursor as well, things would move much faster.
the financial aspect is also important.
cursor is here to help.
for 20 USD, you get only ChatGPT Plus, or Claude, or Gemini.
now for 20 USD with Cursor, you get everything.
wtf is MOA?
ok. grok3:
“MOA” can have different meanings depending on the context, and since you referenced a discussion from the Cursor Community Forum, it’s likely related to something in the realm of AI, coding, or technology. Based on the forum snippet and broader context, the most relevant interpretation here seems to be “Mixture of Agents” (MoA).
I just tried gemini for 2hrs. two of the most frustrating hours of my life that i will never get back. it was beyond idiotic.
Yes, MoA stands for Mixture of Agents.
With MoA, multiple models are used to obtain more accurate responses.
This is possible only because Cursor handles multiple models, and I believe it could become a competitive advantage.
However, it requires a lot of money, so it might be difficult under the current pricing structure.
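As a rough sketch of the MoA idea (with hypothetical stand-in callables, not Cursor's actual implementation): several models answer the same prompt, and an aggregator picks the final response. Here the aggregator is a simple majority vote; real MoA setups typically use another LLM to synthesize the answers, which is where the extra cost comes from.

```python
from collections import Counter
from typing import Callable

def mixture_of_agents(prompt: str, models: list[Callable[[str], str]]) -> str:
    """Query every model, then pick the most common answer (majority vote)."""
    answers = [model(prompt) for model in models]
    most_common, _count = Counter(answers).most_common(1)[0]
    return most_common

# toy stand-ins for three different LLMs
agents = [
    lambda p: "42",
    lambda p: "42",
    lambda p: "41",
]
print(mixture_of_agents("what is 6 * 7?", agents))  # prints "42"
```

Note the cost: every prompt is billed N times (once per model), plus the aggregation step — consistent with the pricing concern above.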
That’s the feeling I had every time I interacted with Gemini.
That it’s just plain dumb. That’s why I’m not very inclined to try or even bother to try the new version.
I only had the feeling of Gemini being a dumb idiot AI that doesn’t understand anything.
Maybe, but you should try the new Gemini 2.5 Pro on AI Studio, it’s very good (not in Cursor or Cline).
I tried to write a Python script and it’s just unable to deliver a working, error-free version.
So yes, I think it’s dumb. I tried to feed it the entire zipped Python 3 documentation, but it was too many tokens, sadly.
You may not need any documentation at all, as Python is one of the core machine-learning languages and all coding models are trained on the Python docs. If you need something specific from the docs, or something very new that you want to use, you can use the Docs feature in Cursor settings, which will index the documentation you want.
You can also ask the AI to check the Python 3 docs directly for a feature you need, or even give it a link to the docs page in the prompt.
Any more specific reasons to include the full docs?
I’ve been using 2.5 Pro in conjunction with RepoPrompt via the Google AI Studio UI, and then Cursor to fix up the linting errors. It took quite a shabbily installed and poorly configured JWT OAuth skeleton and three-shotted it into spec, though I did have some integration tests prepared, which helped. Apparently this takes engineers something like a week; it took me 30 minutes to configure. Obviously it could be broken, and because I’m such a coding newbie I may not realise that a massive problem awaits me down the line. But I have a spec, the AI assures me that we’ve met it, and I’m writing the unit tests right now… I was pretty impressed with 2.5.
i dont know about you guys,
but i got a macro that c3.7t tried and failed at several times.
then i switched to G2.5P, and after several tries,
G2.5P did it.
that’s my humble personal experience.
however, i have a python prompt that asks for a rotating square with a red ball; c3.7t could do it on day 1, but G2.5P could not.
maybe we have to try out which LLM to use for a particular problem, as i mentioned above.
ps: i am a 0 coder, each time i use cursor + LLM, i pray for the result.
I find either can do solid work, and when one seems to be wedged, I switch to the other. “Two LLMs are better than one.”