Tools are built to adapt to different people with different usage patterns. Your ‘rules’ reflect your workflow, not universal limits. Complex refactors often require multiple iterative requests. Dismissing codebase-wide queries as ‘for nothing’ presumes everyone’s work fits your narrow scope.
IMHO, productivity isn’t measured by request counts but by quality outcomes, which sometimes demand rapid experimentation. BTW, rapid experimentation (i.e., high request counts) is how complex systems get debugged and refined; it’s the cost of working on non-trivial tasks.
If Cursor adds DeepSeek, I would prefer they do it with their own compute rather than using DeepSeek’s APIs, so that we’re not shipping all of our code to overseas companies.
American companies are also “overseas” companies for billions of people around the world.
I think users should have options to choose from: some would not mind “sharing” their code (non-TOP secret, no privacy / intellectual property concerns); others would prefer the model to be run locally, like running DeepSeek’s distilled models on their own machines (via ollama, or whatever)
I’m interested in which parameter size of deepseek-r1 Cursor is deploying for its Fireworks-hosted API. I’m locally running the 32B version in Ollama and seeing slightly better results in Cursor, but even better results on chat.deepseek.com.
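For anyone who wants to compare against a local run, a minimal sketch of pulling and querying a distilled DeepSeek-R1 model with Ollama might look like this (the `deepseek-r1:32b` tag is an assumption; check the Ollama model library for the sizes actually published, e.g. 7b, 14b, 32b):

```shell
# Fetch the distilled 32B model (large download; smaller tags exist if
# your machine can't fit it)
ollama pull deepseek-r1:32b

# One-off query from the command line
ollama run deepseek-r1:32b "Explain tail recursion in one paragraph."
```

Ollama also exposes a local HTTP API on port 11434, so the same model can be wired into editor tooling that accepts an OpenAI-compatible or custom endpoint.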
Personally, I don’t care about the costs at all. I regularly test open-source alternatives, and Cursor simply delivers the best results for me. Last month, I made over 2000 fast requests and paid $100. I would have been happy to pay even more because it replaced a programmer who cost me $6000 a month (and was lazy). I just want the best software with the best results.
I’ve noticed that deepseek-r1 responses in Cursor time out after 3 minutes, and many of the queries I’ve run take much longer than that. Testing on chat.deepseek.com, some queries take up to 6 or 7 minutes. Can this limit please be increased?
Understand that you are giving your code away to an innovative company that is still under a government eager to leverage your IP (as evidenced by IP theft at scale in the manufacturing world), whether through direct RL on your data or beyond. Not scaremongering, just understand the implications of using an API built on this latest-gen class of AI.
Wow what a post. Loved the college math story… the students / people / employees / workers / teachers / [profession of your choice] / founders etc that harness these new capabilities will invent the future… and FAST.
What’s fascinating to me is that each of us learns, understands and works differently. It’s like everyone is discovering / iterating on how AI works for them.
The fuzzy side, specifically prompting, is a history-shattering invention in itself. Being able to expound in stream of consciousness, especially when dictating words and ideas out of order, draws computing a major step closer TO US… to the strange way our minds work… vs forcing us to conform to THEM, which is the history of computing.
Who wants to toil trying to solve problems expressed in a way only an esoteric shaman can understand and assist? (That’s the vantage point of an average, intelligent person who is frustrated trying to learn to code.)
Being able to drop context is the next big leap… it’s deceptively easy to equate Google with ChatGPT… they both have a simple box… with a completely different capability universe behind them…
the aha moment comes when each of us discovers how it can unblock us in That-Very-Thing-We-Are-Currently-Stuck-On™
Thanks for introducing me to Lean 4… never heard of it, will pass it along to interested mathematicians…
Power to you! Fly forward and realize your dreams!