"Vibe Coding" is a stupid name

I was a very early entrant to AI-assisted coding, with erratic, extremely limited coding experience and even more limited coding talent. But I feel I have had a lot of success. I never liked the term vibe coding because it implies a passive process. At the same time, I saw expert coders mocking AI-assisted coding and couldn’t help but think it was a skill issue. Coming at it as an experienced coder versus someone who is not shapes your viewpoint and approach. They might fix something manually because they know how and it’s faster. Those of us without those skills were forced to find other ways to manage the problems, which we did with prompt engineering, strict type safety, rules, RAG, context engineering, MCP servers, and so on. But most importantly, the key is project management. We all know AI will lie like a six-year-old who stole your cookies. We know it makes mistakes, forgets things, leaves loose ends, etc. If you don’t stay on top of every detail, you are unlikely to produce anything but the legendary “slop.”

Almost every day, AI-assisted coding gets easier and better. Lazy loading for MCP is the big new improvement, and they aren’t going to stop. The process of development is speeding up, making project management even more challenging in some ways. There are fewer errors to fix; Opus 4.5 makes far fewer mistakes than Sonnet 3.5 did. But you have more documentation to update, more projects, more dependencies, and so on. Everything is moving faster. We are training our replacements very well and very fast, methinks. It’s fun, but then we are all doomed. Enjoy it while it lasts. My last two cents: no one talks about how much you can learn by watching the agent do things. Vibe coding is a dumb term. :slight_smile:

8 Likes

Hey, thanks for the detailed take. I get your point: the term really does sound too lightweight for something that actually needs serious project management and a structured approach.

Your notes about the importance of prompt engineering, strict typing, rules, MCP, and controlling every step of the agent are exactly what separates production code from “slop.” And you’re right, this isn’t a passive process; it’s active tool management.

On learning by watching the agent, I agree, that part is underrated. You start seeing patterns, decisions, and approaches you might not have picked yourself.

2 Likes

I don’t call myself a vibe coder. I’ve been in software engineering for nearly 30 years now, programming for almost 35. I have been using the term “agentic coding” or “agent-assisted coding,” as that is generally what it is for me: the agent assists me in getting my daily work done. I delegate a lot to the agent; however, much of what I think is embodied in the “culture” of “vibe coding” involves…well, let me put it this way: far less rigor than I generally try to instill in my work, assisted by an agent or not.

So I don’t really consider myself a “vibe coder”…although there are times when I may hand off more to the agent and let it do its thing without as many checks and balances (those are my “vibe moments”)…I consider myself an agentic coder. I use the agent, often quite heavily, but I also try to maintain a certain degree of rigor wrapped around it all, to keep the quality of the results high.

While the agents and agentic tooling do continually increase in capability and features…FWIW, in all honesty, I don’t think I’ve been entirely satisfied with ANY work any agent with any model has yet done in any of my work. My standards are still very high, and no agent has, as of yet, actually met my own personal bar. Code was never just the result of my work in the past…it was my life, it was my art. I always tried to CRAFT clean, elegant, maintainable code with great longevity, that solved the business problems while also being accessible to even the more junior developer, and which stood the test of time. Agentic coding…is still something I’m getting used to. Plain and simple: agentic code does not meet the bar, yet. That is always a tough pill to swallow for me. At the same time, given I am working for a startup…we have, as a small team of just a handful of devs, developed a rather astonishing amount of work in a very short amount of time, and that continually blows my mind. The agent hasn’t met the bar on quality yet, but it has surpassed the bar on time to market for what we are now calling “the 80% product,” and the agent does that very well.

In any case, agentic coding. I’m an agent-powered software engineer, rather than a vibe coder.

2 Likes

Call it whatever you like; it doesn’t have to be vibe coding, but it needs a name that isn’t software engineering.

I use Agents quite a lot, sometimes for large amounts of code, and I can tell you, absolutely for sure, that if you can’t fluently read the code they produce, you are creating applications with security issues, in addition to mountains of other tech debt.

I know this is probably the wrong forum to say this, but it’s just true. I know it’s true because I’ve been building applications for decades, I can read the code fluently. In addition I’ve been using coding agents for quite a while. I’m as much an expert in AI assisted coding as anyone is at this early stage. I have dialed in initial context, skills, commands, hooks and all manner of strategies to keep them on the rails. As a result I can offload a shocking amount of work to them.

But no matter how dialed in I get, no matter what kind of scaffolding I put up, no matter how many pre- and post-implementation passes I run the code through, using multiple frontier models, I still find bugs, security problems, maintainability problems, DRY violations, conflicts, redundancy, bad patterns, and latent issues just waiting for the right circumstances to arise and turn them into full-blown deal breakers.

This is the reality of coding agents. In the future it may change, but today with SOTA models like Opus 4.5, Codex 5.2 and Gemini 3, it’s the only reality.

The problem is, if you can’t read the code you have no way of knowing this. And even if you did know it, without a perfect model that exists in a theoretical future, you wouldn’t be able to fix it.

That’s fine if you’re creating hobby projects for personal use. Go crazy. But if you’re creating things that will be used by other humans, involving any kind of PII or other sensitive data, you are being deeply irresponsible with those people’s information and opening yourself up to very real legal liability.

In case it’s not obvious, I’m very pro agent assisted coding. I’m ok with vibe coding too, in the right context. I love the idea of people learning to code with the help of agents.

But the reason the term vibe coding exists, and has the stigma associated with it that you dislike, is that you absolutely cannot create responsible, production ready code if you never read the code. Or if you read it but don’t completely understand it. Or if you only spot check. You just can’t. It feels like you can, sometimes I get sucked in too, thinking I’ve finally figured out how to keep the agents within the lines, but then I review the code and realize I was dangerously lazy thinking that maybe I didn’t need to.

So there needs to be a term that distinguishes actual software development from the fantasy that agents have arrived at the level where you can just let them code and trust the results. Look at the ungodly mess that was Cursor’s recent autonomous agent swarm browser tech demo. It was a cool proof of concept, it accomplished the goal of creating buzz. But what it proved most of all is that you don’t get production software out of autonomous agents, you don’t get anything even in the same ballpark.

What you’re framing as “expert” coders mocking vibe coding is probably partially the loud, but shrinking, group of developers who are irrationally against AI coding. But it’s also partially engineers who can just read and understand code. That’s all it takes, you don’t even need to be an expert, to see that vibe coding and software engineering are not the same thing.

I like “Vibe Engineering” much more: a mix of vibing and engineering (the dev skillset).

Oh, I don’t know. Docker certified my SQLite MCP server as secure and added it to their Docker Catalog. The SQLite MCP server put out by Anthropic had major SQL injection flaws, which I fixed in mine. They never fixed them. I also see major software companies, libraries, packages, etc. with major vulnerabilities pretty often, some that last for weeks before being fixed. I always do audits for performance and security before releasing, and I use Trivy, secrets checking, CodeQL, and Docker Scout.
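For readers who haven’t seen what an SQL injection flaw in an SQLite-backed tool looks like, here is a minimal, hypothetical sketch of the class of bug and the standard fix (this is illustrative only, not the actual MCP server code):

```python
import sqlite3

def find_user_unsafe(conn, name):
    # VULNERABLE: interpolating user input straight into the SQL string.
    # A name like "x' OR '1'='1" rewrites the query's meaning.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn, name):
    # FIXED: a "?" placeholder makes the driver treat the input
    # purely as data, never as SQL syntax.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("alice",), ("bob",)])

payload = "nobody' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # injection matches every row: 2
print(len(find_user_safe(conn, payload)))    # parameterized matches none: 0
```

The point being: this is a one-line fix once you know to look for it, which is exactly why unreviewed agent output shipping it is so frustrating.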

100% agree. The models have improved a LOT but still require oversight from a human. Ideally one who is an expert in the field.

Sadly, whether it is called “vibe coding” or something else, this problem of mocking will persist. Some of it is just people being elitist jerks. Some of it is developers who are afraid of losing their jobs. Some of it is fair criticism, because at the current time there is just no getting around the need for, and value of, a true human expert.

But I think we all have to accept that at some point in the future AI agents will be able to write and assess code better than human experts. I don’t think that point is very far away either.

I think THIS is indeed the most important key to succeeding as a software developer in our AI-driven future. Simply being a code monkey is no longer enough; developers need to be able to manage and direct a team of AI agents to get their work done.

People will mock anything and everything. It means nothing. Most people are group-thinkers because it almost always pays to be “normal.” It seems to me dealing with security is no different than dealing with any other aspect of coding. The best agents can spot security problems if you just dedicate a pass or passes to it. It’s extra steps, is all. And you can use tools. But, again, I point out there are plenty of issues affecting hand-coders also. The models are successfully being used for hacking now, so it seems perfectly logical they can also defend. Ultimately, reading 100,000 lines of code by hand to run a security audit is going to become impractical, if it isn’t already. By the time the hand-auditors are done, the AI-assisted coders will be five versions ahead.
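One cheap “extra step” of the kind described above is a dedicated secrets-scanning pass before release. This is a deliberately naive sketch with made-up patterns; real scanners (Trivy, gitleaks, CodeQL) use far richer rule sets:

```python
import re

# Two naive example patterns for hardcoded credentials.
# Real tools maintain hundreds of these.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(
        r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_source(text):
    """Return (rule_name, matched_text) pairs found in a source blob."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

sample = '''
db_password = "hunter2-but-longer"
key = "AKIAABCDEFGHIJKLMNOP"
'''
for rule, hit in scan_source(sample):
    print(rule, "->", hit)
```

Wiring something like this (or, better, the real tools) into a pre-commit hook or CI pass is what turns “the agent can spot security problems” from a hope into a repeatable step.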

1 Like

The mocking is because the vast majority of vibe coded software is riddled with bugs, major security flaws, and other fundamental issues that get pushed to prod without effective oversight.

I don’t generally like the overall quality of the code the models generate today. That said, my main focus in reviews is: security, security, security, correctness, then other things. The code needs to be secure and needs to perform the correct functionality, even if it doesn’t look particularly pretty. I like pretty code, but the simple reality is, when you HAVE to move fast, because of significant budget constraints and significant competitive forces, pretty code is just not important. Getting useful functionality to production, and making sure it’s secure when it gets there, is the most important thing in such an environment.

Using an LLM and agent to accelerate doesn’t mean prettier code, or more elegant solutions to problems, or any of the things that I think seasoned software engineers/developers enjoy seeing and working with. In fact, the code is usually not very elegant and not pretty, and I don’t think that will ever not rub a good software engineer the wrong way. However, there IS a solid time-to-production gain when using an LLM, and for a seasoned developer who knows what to look for in terms of security and correctness, agents can indeed accelerate the work.

I think this is something that many of the mocking seasoned developers miss…as long as the code IS secure and correct, and I guess on a tertiary level is not costing you an undue amount of time and money to maintain (and if an agent is primarily handling that maintenance, it’s pretty hard to bust this bubble!), well…it’s the agent that is actually having to really DEAL with the code day in and day out, not you. You review it, and it may be abrasive, but as long as you can review it effectively and make sure it meets the security and correctness bars, plus any other bars that are essential (which may vary over time or for a given feature, etc.…sometimes maybe there are critical performance requirements), the time and cost savings of getting things to market on a blazing fast schedule usually outweigh the LOSSES of using an LLM and agent.

All that said, I do understand the mocking. I think it is less common for seasoned software engineers to be doing any kind of agentic software development than for less skilled or even non-developers. FWIW, I think there has been a tremendous amount of poor quality code shipped to production in the last few years, and sadly, there are many seasoned senior software developers who have had to deal with the fallout. Their mocking is not necessarily unjustified, given the cesspool of bad code many are having to deal with. It is an understandable reaction to…well, I guess you could say an inevitable outcome once LLMs came onto the scene and were trained to understand code…

1 Like

Like I said, that’s a skill issue, eh? People who produce slop aren’t managing the project properly. I know I’ve made mistakes. You learn and adapt. The question is which methodology is more efficient with comparable skillsets using each. Whatever the answer is at the moment doesn’t really matter much on a systemic level because the process is improving extremely rapidly. By the time you did a study on it, it would be irrelevant. Also, as far as the code being pretty, maybe it doesn’t matter as much anymore. We are moving toward the agents handling all coding, and they will be able to read it just fine regardless. This changes a lot of rules over time, especially as context windows increase and failure rates decline. Modularity becomes less critical, as another example. My point is maintainability is in flux and has to be balanced with speed of development. My issue with the mocking isn’t personal offense. I view it as a lack of vision and self-denial. AI is obviously going to put them all out of work. It’s going to put almost everyone out of work, combined with robots. Prices will plummet with the reduced cost of labor and thus production, but how will people pay the bills? It’s a huge social experiment with no clear answers. I feel their angst and have nothing but sympathy. I expect wars to be fought over data centers, cults, terrorism, etc.

I disagree about agents being able to “read” code just fine regardless. Early on, when I was first starting out with Cursor, the code started out good and clean (not great, but it wasn’t terrible). Over time, as greenfield became brownfield and existing code needed improvements, the code became much worse. The worse the code became, the harder a time the agent seemed to have dealing with it. So at one point, I spent time creating rules to enforce certain architectural and software design principles, policies, patterns, etc. I had the agent rework code, segregate and separate according to concerns and responsibilities, layer things out, and create tiers of functionality (i.e. an API tier separate from a domain tier, each with their own layers). The agent has had a heck of a lot better time developing and maintaining this code than when it was a hodgepodge mess of spaghetti and mud.
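The kind of tiering described above can be sketched in a few lines. This is a hypothetical toy, not code from any real project; the names and the discount rule are invented purely to show the separation:

```python
# --- domain tier: pure business rules, no HTTP/JSON knowledge ---
class OrderError(Exception):
    pass

def price_order(quantity: int, unit_price: float) -> float:
    if quantity <= 0:
        raise OrderError("quantity must be positive")
    total = quantity * unit_price
    # illustrative business rule: 10% discount on bulk orders
    return total * 0.9 if quantity >= 100 else total

# --- API tier: translates requests/responses, delegates to domain ---
def handle_price_request(payload: dict) -> dict:
    try:
        total = price_order(payload["quantity"], payload["unit_price"])
        return {"status": 200, "total": total}
    except (OrderError, KeyError) as exc:
        return {"status": 400, "error": str(exc)}

print(handle_price_request({"quantity": 100, "unit_price": 2.0}))
```

Because the domain function knows nothing about payload shapes or status codes, the agent (or a human) can change transport concerns and business rules independently, which is exactly why the layered codebase was easier for the agent to maintain.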

I still think code quality matters. The one point I do agree on is that it doesn’t matter as much as it did when it was only and primarily humans working within a codebase. The agent can still get as lost in a pile of poo code as a human can, IME! However, as long as the code is reasonably well structured, the agent seems to cope a lot better than the average human, even if the code isn’t excellent, pure, elegant, super clean, and perfectly well styled across every bit of the codebase.

So I aim for code of reasonable quality, I guess. Again, it does not meet my standards as a human, a veteran of coding of some 30 years, who is generally hyper-opinionated. But I am not in the code myself, outside of reviewing it, nearly as often as I was, and we have so many goals and tight deadlines that it is impractical for me to try and keep it pristine anyway…which was the point I was trying to make earlier. The agent has accelerated the dev team I work on; we move fast, deliver fast, and it’s been pretty reliable and robust. If and when I do end up having to get into the code myself…I guess a certain amount of OCD kicks in and I clean it up. Until that happens, though, it’s reasonable quality, the agent can handle the code well, so it is what it is now.

I’m just saying some of the rules are changing, not saying code quality doesn’t matter. I also use Prettier, by the way. Any tool that helps any aspect of the project is wonderful. It sounds like we mostly agree. AI-assisted coding is dependent on the quality of the work put into it. It can range from useless to wonderful depending on how well the project is managed. Every day it gets easier. Eventually my dog will be able to do it.

1 Like

The best agents can spot security problems, if you just dedicate a pass or passes to it. It’s extra steps is all.

This isn’t wrong, agents can definitely spot security problems. The issue is that they can’t spot every security problem.

I recently found a gaping security flaw in a codebase. It was in some agent-written code that I had somehow failed to review, that had survived many security audit passes from multiple frontier models. It was the sort of mistake that a human would never make, no matter how inexperienced, and the code itself was technically fine. It wasn’t something that looked like a security issue because it didn’t match common exploit patterns; it was just a mind-blowingly bad decision. It was implemented correctly; there were no vulnerabilities a bad actor could exploit to make it function as anything other than what it was: a security flaw.

Fortunately the code in question never had the opportunity to live out its purpose and give wide open access to user data to the internet because it was gated behind alpha testing limits. But it was technically in production for weeks. Huge failure on my part that I was lucky didn’t hurt anyone. One more reminder that you can’t trust LLM code.

I could give you a pile of other examples, not just in security but in performance, maintainability, pretty much all aspects of development. LLM agents make mistakes that are invisible to LLM agents.

They’re also amazing tools. But learn to code, not just part way, all the way. Otherwise you’re engineering bridges that will collapse when real people are driving over them, guaranteed.

2 Likes

Totally agree that the quality of what the agent generates is relative to the quality of input. Codebase input. Prompt input (I’ve slacked off lately, and it’s been painful! Need to get back to more crafted prompting!) Context input. It’s definitely a factor.

One of the guys I work with, who has been doing AI stuff for longer than anyone on our team (years and years), always says this: “AI is never going to be worse than it is right now!” Which is very true, already just in the last six months, we’ve seen new models come onto the scene, and push the envelope either in terms of speed (Grok Code is truly blazing fast), or quality (Sonnet & Opus 4.5 have been very good to us so far, GPT-5.x has been good to many people, etc.) A year from now, it’ll be an alien landscape again, and a year from that…I suspect I’ll be wondering how we ever wrote code “by hand” or something like that. :stuck_out_tongue:

As I mentioned, Anthropic never fixed the serious SQL injection vulnerabilities in their SQLite MCP server. I fixed them in mine. They released other seriously flawed MCP servers as well. And so have numerous other big companies with tons of experienced coders. Indeed, if one wanted to be provocative, one might say the current security state of the web is abysmal.

https://www.darkreading.com/application-security/microsoft-anthropic-mcp-servers-risk-takeovers

The CEO of Anthropic said, months ago, that some of their teams are generating 90% of their code using Claude, and that company wide half of code was AI generated.

Most likely there was a little exaggeration there, but there’s a high chance the MCP server project used a lot of Claude Code and a higher chance that the vulnerabilities trace back to Claude.

I can’t think of a better example of why you shouldn’t trust AI code.

2 Likes

Naa, those were the first MCP servers they released. They weren’t using AI to code at that point. Besides, security issues are widespread, constant, and often slow to be fixed. And, as I said, I fixed those issues with AI in my SQLite MCP server. The status quo is not good.

100% agreed.

1 Like

I’ll leave the quality of code produced by agents to the others. I’d like to address the Project Management question with a suggestion. Try Agentic Project Management. For large projects it brings order, accountability, and a Q&A that helps you fully define the goal. Give it a try. You’ll thank me later.

2 Likes

“Vibes” is actually not a bad thing, because often the LLM will out-think a human in terms of spec, or how to put an idea into practice, often borrowing from good UX patterns, and I’m like “Oh, I like that.” So for me, that is “vibes.”