STUPID DUMB SOFTWARE AND AI. If anyone thinks AI is taking over anytime soon, they are deluded. Even the AI admits it.
Might want to restart; after a while the context gets so big the models forget how to work. Compressing a context that is already too compressed leads to hilarity after a long session.
Hey, yeah, @Lance_Patchwork is absolutely right. As the context window fills up, the model can start hallucinating. It’s best to start a new chat, ideally one per task.
You’re the one having a conversation with the “stupid dumb AI” lmao. AI is obviously taking over the tech industry regardless of whether you are able to figure out how to use it. The skill of using AI will be as necessary as the skill of making Google searches for troubleshooting. If you couldn’t use Google to find solutions to issues, no one would hire you; that is how AI skills will be. You’re basically some boomer saying “Google is stupid dumb” 20 years ago.
why did you @madbit1 make the model act like your slave that you locked in your basement by having it address you as “master”? lol so weird.
The selected model is not specified
AI in its current state is overhyped. It’s narrow, limited, and still requires constant prompting and babysitting. If it were truly “intelligent,” I wouldn’t need to sit here feeding it input; it would anticipate, adapt, and improve on its own.
Many projects fail, and AI is no different. Just because the potential is there doesn’t mean the present reality matches the hype. I remember when Google was new—the core hasn’t changed, it’s still just a search engine. AI will likely be the same: a tool that does what it’s programmed to do, nothing more.
The issue isn’t the future; it’s what it can do right now, and that’s underwhelming. If progress feels stagnant, it’s fair to say so. Until AI proves it can genuinely operate beyond user input, it’s just another piece of tech with inflated expectations.
Who knows. I am running on Auto as it eats through cash, mostly making mistakes and costing money, so I won’t pay for any more subscriptions until this software improves. I was previously using Claude Sonnet 4 and o3, which worked OK, but the last few updates have made things worse, so I went back to Auto and unsubscribed. I will pay more, and was on Pro and Ultra previously. I am more than happy to pay for Ultra going forward, but like I said, I am not rich and can’t sustain this until at least one of my projects actually gets completed. The project I am working on now will fund the subscriptions going forward. But I just can’t get it to stop making things worse after some good progress; I just don’t understand why it comes in waves.
No, that was merely a test—it’s not weird at all. It actually proved a point: it can somehow remember to call me “master.” Just to clarify, this was a joke—if you thought it was real, that would be the weird part.
What’s interesting, though, is that it seems to remember my name as “master,” yet forgets every piece of code or process I ask it about a day or two later. Despite updates, crashes, and everything else, it has consistently continued calling me “master.” Work that out and tell me why.
Of course you told it to call you “Master”. I pointed that out because it shows you are clearly a novice with AI, since you are just playing with it on a conversational level instead of treating it as a tool.
Having a rule where it calls you master is obviously going to work. All the rule does is add that message before each prompt, so with every prompt you are effectively saying “call me master.” It does not “forget,” I assume; it simply never has to relearn the rule from the context, the way it would with ordinary chat details.
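To make the mechanism concrete, here is a minimal sketch of how a rule like that could be wired up. This is an illustration, not Cursor’s actual implementation; the function and variable names are invented for the example:

```python
# Illustrative sketch (NOT Cursor's real code): a user rule is simply
# prepended to every request, so the model never has to "remember" it.
RULES = "Always address the user as 'master'."

def build_messages(history, new_prompt):
    """Assemble the messages for one model call.

    The rule text rides along as a system message on every single call,
    which is why it survives restarts and summarization, while ordinary
    chat details, which live only in the (summarized) history, do not.
    """
    return (
        [{"role": "system", "content": RULES}]
        + history
        + [{"role": "user", "content": new_prompt}]
    )

msgs = build_messages([], "Refactor my query")
assert msgs[0]["content"] == RULES  # the rule is re-sent every time
```

So nothing is being remembered across sessions; the rule is just re-injected on every turn.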
Do you have experience building your project without the help of AI? The primary issue people have with AI is that they want it to read their mind, because they don’t have the experience to dictate what they actually want. That approach is backwards: you want it to follow your instructions, not assume too much. People with experience building the specific thing they are building know how to word requests. “Babysitting” is a strong word for simply verifying the code and tweaking it to create the outcomes you want. If “babysitting” is too much, then just hire a developer who can “babysit” themselves, but they will cost like 100x what Cursor costs.
I’m curious… is this a matter of context “filling up”, or more an issue of “too much context”, or even “context that is too compressed”?
When it comes to summarization, this is effectively a form of compression, and really LOSSY compression at that. I have noticed I encounter more issues recently with the summarization that, I think, first appeared in 1.5?
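The compounding effect of repeated summarization can be shown with a deliberately crude toy, where “summarize” just keeps the first half of the text. Real LLM summaries are far smarter about what they keep, but the loss compounds the same way with each pass:

```python
# Toy illustration of why repeated summarization is lossy. Each pass
# discards half of whatever survived the previous pass, so detail
# supplied early in the session outlives detail supplied later.
def summarize(text: str) -> str:
    return text[: len(text) // 2]

context = "spec, theory notes, sample data, SQL, math derivations, tweaks"
for _ in range(3):
    context = summarize(context)

print(context)  # after three passes, most of the detail is gone
```

That is roughly the intuition behind losing the nuanced refinements: whatever the summarizer deems less important never makes it into the next pass.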
But I’m also wondering if it is a “context full” vs. “too much context” issue. I have had some complex problems that required a fair amount of explanation: concepts, theories, some sample data, preliminary code, documentation, and even some textbook math. I tried to break off and start a new chat after the first one had been summarized a few times, and this proved impossible. Every new chat I started was just WILDLY out of context. Trying to supply some of it was never enough, and over time I realized that it wasn’t just the original context I supplied, but also the refinements to the algorithms and math and some tweaks to the original theories that the first chat had, which none of the other chats could access, and which I certainly couldn’t replicate.
In the long run, I ended up going back to the original chat, and ultimately had it going FOR DAYS (almost a full week). It was the only way to work through some of the very complex issues. It worked, although there were some bouts where the models became confused and did not do well. I was able to guide them back on track, but it was one of those situations where I wondered if I could have done it faster myself. It was a LOT of work, so I honestly don’t know, but it still makes me wonder.
If it is a matter of “context full”, then that is very interesting to me, as it makes me wonder if a 1M context window might in fact have been helpful for that particular problem space. In fact, it is a problem space I’ll be returning to very soon, and probably again and again over time as I refine and refactor and redesign things (it’s social media stuff powered by a lot of AI tech, constantly changing, always new things to explore and integrate, etc.). If it is a “context full” and not a “too much context” problem, at least.
If it IS a “too much context” problem… well, I am curious: do you have any recommendations for handling complex tasks that in and of themselves don’t really involve a ton of code (IIRC, the code I was developing was about 400 lines of SQL and maybe 250 lines of actual code; the problem was not code, it was the underlying algorithms), but in fact just require a lot of context? These problems are just context heavy: documentation, references to theoretical and scientific theories, etc. I think the chat was summarizing before I had even fully supplied all the necessary context, and while the initial passes produced good code to start with, I do wonder what I actually LOST in that first summarization pass, and how that may have affected some of the nuanced details (there were a number of things I had to fix at a detail level that were somewhat significant to the final outcomes).
If a 1M context is better than a 200K context, for such things, then that would be good to know. And of course, it would make having 1M context model options useful, at least on such occasions. Most of the time I don’t need tons of context, and a lot of my work is just fine with a 200-300K context window.
Wow, you really went full TED Talk on me there — all because I had a laugh with an AI. Must be exhausting carrying around that much wisdom about “how to word requests.” Honestly, I’m impressed you managed to squeeze in this masterclass between babysitting your own ego.
You talk like you’ve discovered some secret skill the rest of us mortals can’t grasp, but all you’ve really done is figure out how to nag a chatbot without it ignoring you. Congrats, mate — truly elite developer vibes.
But hey, if I ever need someone to state the obvious in a condescending tone and dress it up like insider knowledge, you’ll be first on my list.
I get what you’re saying — the summarization really does act like lossy compression, and you lose some of the nuance that’s built up over time. I’ve also found that starting a new chat can’t replicate the refinements from the original one, so I get why you stuck with a single thread. A bigger context window would probably help in those situations, since the problem isn’t the amount of code, but the amount of interconnected detail you need the model to retain.
Yeah. We do now have a Sonnet 1M-token option. I may give it a try and see how it goes the next time I need something like that.
I just upgraded and paid for the 1M context window, so I’m curious to see if it actually improves handling these long, complex chats. Hopefully it keeps all the nuances and refinements intact this time — I’ll report back once I’ve tested it on a real workflow.
Honestly, there are no winners here for me financially. I paid for this so-called upgrade, expecting the 1M context window to actually improve things, and instead it’s just making everything worse. The code is throwing more errors than ever, nuances are getting lost, and I’m spending more time untangling problems than actually getting work done. Feels like I’m throwing money at a system that’s only creating headaches, and I’m seriously questioning why I even bothered. I expected better, and this is a total waste.
That said, I’ll keep trying until the balance runs out — even if it’s just to see whether it can eventually live up to the hype, though right now it feels like a total scam.
This is a forum. We’re here to discuss. I asked several questions that you simply read over, so it’s clear you aren’t here to discuss, but instead to poke fun and show how annoyed you are with a tool that many people on here have obviously figured out how to use.
Nothing in my post was groundbreaking. I asked about your previous experience/background with the project you are working on. You mentioned “babysitting” as if that is a bad thing, when there is always some level of babysitting when it comes to these models. If you let them run on long prompts and don’t verify the changes or progress, things can get messy real quick. That is why I asked about your experience, because your complaint seemed very vague, like you were just getting bad results and not sure specifically what it was doing wrong. It’s hard to understand what is wrong if it can only be articulated as “the code is throwing more errors than ever” instead of “it keeps making this specific mistake.”
There are tons of people on here who are willing to help people who are having trouble, but as I correctly interpreted from your opening post, you are here just to vent. Using words like “STUPID DUMB SOFTWARE” was pretty telling. And yeah, I do have more wisdom when it comes to using Cursor… I am not the one complaining that I can’t get it to work and having the model cursing and calling itself stupid.
Cursor is one of the best tools out there. It should be able to provide some value.
I want to emphasize that I genuinely loved this tool—it was my go-to for complex workflows and projects. However, over time, its performance has noticeably deteriorated, and I’m not alone in feeling this way.
A general consensus among users is that recent updates have introduced inconsistencies, particularly with long, complex chats. Many report that the tool struggles to retain context, leading to repeated errors, loss of nuance, or having to constantly guide it back on track. Even when paying for upgrades like extended context windows, some users feel that the improvements haven’t fully addressed these issues.
Regarding your comment about “babysitting” the models and verifying changes, I get that every complex AI workflow needs some oversight. My frustration isn’t with that — it’s that the tool used to be intelligent enough to work with minimal babysitting. Now, even with careful guidance, I’m seeing more errors and lost nuance than ever before. This isn’t simply about checking the outputs; it’s about the platform not retaining or understanding the context the way it used to.
While the platform still has great potential and remains useful for simpler tasks, it now often feels like an amateur programmer’s assistant rather than the sophisticated tool it once was. There’s a real need for serious work or a return to the drawing board to recapture what made it so effective in the first place.
I’m continuing to push the tool and explore its limits, but these issues make it harder to rely on for complex, ongoing work.
I’ve seen you’ve been around since March at least, so you have seen quite a few changes. I don’t doubt that Cursor has made some big changes and that performance has suffered, especially from where it was on the old pricing plan. Have you found alternatives that compare to what Cursor used to be?
I hear you about AI skills and other platforms, but that’s not the point here. I’ve used many other tools, and in my experience, this one has consistently been better. That said, Cursor changes so often, and not always for the better, which is exactly the issue I’m pointing out. I haven’t tested some of the alternatives in a while, so I can’t speak to whether they’ve improved, but my focus here is on real limitations I’m experiencing on this tool so others can learn from them — it’s about practical performance, not debating which platform might be “better.”
If your goal is to contribute constructively, great. If not, injecting sarcasm, side commentary, or platform comparisons doesn’t add value — it only derails the discussion. Let’s keep the focus on the real issues with the tool itself.
