DeepSeek V3 is not yet a premium model, but I can't confirm how the pricing will work in agent mode!
Hey, are you able to use the web in agent mode?
I think it is funny to see people saying that they are “no code” developers. What does that even mean?
A “no-code” developer leverages AI-assisted visual interfaces to create functional software without traditional programming. From my experience, we’re all becoming no-code developers and getting better at architectural oversight, prompt engineering, […] code is becoming low-level coding even if it’s written in Python; we just check that the logic flow goes the way we want and press run.
You could add it so that it’s billed as premium requests in agent mode; that’s no problem, since premium requests can be purchased. For example, I pay $40 for 1,000 premium requests per month, but I’m willing to pay $60 for 1,500 requests if necessary, provided there are better models in the agent.
Means they use 2-5K requests a month, type in all caps and/or swear at the composer agent, complain about 2-5 minute wait times, often say “it’s not working” lol, and make a surprised-Pikachu face.
Now, there may be other types of “no-code” developers who are more hands-off, “guide” the AI, and understand the code being written, with far fewer complaints about wait times and the composer agent breaking code…
Any updates on this? It was possible with the API workaround, but the API has been down. This is forcing me and many others toward other options when I’d prefer to just stick with Cursor.
Still a work in progress?
It’s not just Deepseek R1! Gemini 2.0 Pro also needs to be integrated into Agent mode! This is gonna be the sickest combo!
If Gemini 2.0 Pro is integrated into Agent mode, it’ll be a huge upgrade! Gemini 2.0 Pro outperforms Sonnet 3.5 in many tasks.
I’ve been doing some testing work lately. Sonnet 3.5 kept making mistakes when writing test cases. A task it couldn’t finish in 4 hours, Gemini 2.0 Pro nailed in just 20 seconds!
If both Deepseek R1 and Gemini 2.0 Pro can be used in the Agent mode, with Deepseek R1 for task planning and Gemini 2.0 Pro for task execution, both the task success rate and execution efficiency will skyrocket! This could potentially bring Cursor right into the next stage!
I took apart Deepseek R1 and found that its reasoning ability is way better than o1’s. Its drawback is that its expert model is lousy and lacks systematic training. If you strip out its expert model and keep only the reasoning model for planning, and then let Gemini 2.0 Pro, which has the strongest execution ability, execute the tasks, it’ll open up a whole new gate! The downside of Gemini 2.0 Pro is that its system prompts are really bad, so figure out a way to fix them and its ability will increase by another 30% to 50%! It’s off the charts!
Please test what I said; it all really happened. A complex project that originally took weeks to solve can now be solved in just 3 to 4 days. It’s absolutely mind-blowing! Please, you’ve got to test what I said!
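The planner/executor split described above can be sketched roughly like this. Everything here is a hypothetical illustration, not Cursor's actual implementation: the two model calls are stubs standing in for a reasoning model (like R1) and a fast execution model (like Gemini 2.0 Pro).

```python
# Hypothetical sketch of a planner/executor agent loop.
# plan_with_reasoning_model and execute_with_fast_model are stubs;
# a real version would call the respective model APIs instead.

def plan_with_reasoning_model(task: str) -> list[str]:
    """Stub for a reasoning model that breaks a task into ordered steps."""
    return [f"step {i}: part of '{task}'" for i in range(1, 4)]

def execute_with_fast_model(step: str) -> str:
    """Stub for an execution model that carries out one planned step."""
    return f"done: {step}"

def run_task(task: str) -> list[str]:
    """Plan once with the reasoning model, then execute each step."""
    plan = plan_with_reasoning_model(task)
    return [execute_with_fast_model(step) for step in plan]

results = run_task("write unit tests for the parser")
print(results)
```

The point of the split is that the expensive reasoning model runs once per task to produce a plan, while the cheaper, faster model handles every individual step.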
We need agent mode for Deepseek R1 and V3. Deepseek also recently released a coder model that needs to be added, along with agent mode for it.
Need updates. Deepseek’s team builds LLMs faster than Cursor’s team.
This is really needed. Please!