The changing economics of software engineering

Hey all, I’m not trying to promote myself—I’m just someone who feels like they’re onto something but has nobody to discuss it with seriously. I write essays exploring AI through an engineer’s lens, and this one examines how the economics of software development may be fundamentally shifting.

The core thesis: as AI agents make code production increasingly cheap, technical debt transforms from “messy code we’ll fix later” into a critical constraint on our ability to instruct agents effectively. The question isn’t “how fast can we write code?” but “how comprehensible is our system to both humans and machines?”

I explore two interconnected ideas:

Natural language as a programming interface: When documentation can become execution, the trade-off between velocity and quality fundamentally changes. But this only works if we architect systems with agent-navigability in mind—choosing constrained ecosystems, establishing clear patterns, and connecting flexible natural language instructions to rigid programmatic validation.

The new economics of simplification: If code production approaches zero cost while debugging remains expensive, then simplification work doubles in value. Reducing complexity now reduces bugs for both human and AI engineers. Technical debt becomes measured in navigability—can anyone (human or agent) quickly understand your codebase’s patterns?

This isn’t a call for drastic changes. It’s a call to reconsider: as the cost of writing code approaches zero, what becomes valuable? I argue it’s the ability to think clearly about problems, architect comprehensible systems, and translate business needs into specifications that agents can reliably execute.

The implications are profound—not just for how we build software, but for how we think about the profession itself. I’d love to hear perspectives from others grappling with these questions.

Programmable Engineers.pdf (180.1 KB)


There was a report recently that, even though AI is allowing developers to write more code, most of that new code has been refactoring work. Many people took that in a negative light, as a sign that AI only seems to make people more productive when it actually isn’t. But I think it backs up your thesis, especially in this transitional period.


Interesting! I’d love to see that source if you can find it. I’m sure there’s a lot of both going on at the moment. My coworkers are finally getting the hang of code generation with an agent: getting it correct the first time, using clever prompt-engineering tricks to guide the agent around the codebase and have it find the right patterns. Without those tricks, and the practice of getting the context just right, these agents produce varying degrees of both logical and literal slop.

While we’ve obviously been using it for refactoring, I’ve frequently wondered whether there’s a way to get it to think more about reduction and simplification: get it to explore more, and train the agent to discover patterns.

The problem then becomes, as I’m sure anybody working in a large TypeScript monorepo will attest, how do we get it to find the right patterns? This is why I’ve tried to lean more into thinking about how to programmatically manage context, either through tooling restrictions or through clever ways to easily inject the right context (a rough sketch of the latter is below).
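
A hypothetical sketch of what “programmatically injecting the right context” could look like: given a keyword, collect a conventions doc plus matching source files and emit one block that can be piped into an agent prompt or a slash command. The paths, the `docs/CONVENTIONS.md` file, and the keyword heuristic are all assumptions for illustration, not a real tool:

```typescript
// context-pack.ts -- hypothetical sketch: gather the files an agent should see
// for a given feature keyword, instead of letting it wander the monorepo.
// Paths, the CONVENTIONS.md doc, and the keyword heuristic are illustrative only.
import { readFileSync, readdirSync, statSync } from "node:fs";
import { join } from "node:path";

// Recursively collect files under `dir` that satisfy `match`.
function findFiles(dir: string, match: (p: string) => boolean, out: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) findFiles(full, match, out);
    else if (match(full)) out.push(full);
  }
  return out;
}

const keyword = process.argv[2] ?? "billing";

// Always lead with the conventions doc so the agent sees the intended patterns first.
const files = [
  "docs/CONVENTIONS.md",
  ...findFiles("src", (p) => p.endsWith(".ts") && p.includes(keyword)),
];

for (const file of files) {
  console.log(`\n=== ${file} ===\n${readFileSync(file, "utf8")}`);
}
```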

  1. To avoid writing bad code, and to be able to complete a task at all, the AI must be sufficiently intelligent. Just try giving a complex engineering task (even one that can be done quickly) to Grok Code Fast and to GPT-5.2 XHigh.
  2. For both humans and AI to understand your project, it’s enough to adhere to classic/basic code culture: DRY, SRP, YAGNI. My CI also includes a warning if a file accumulates more than 1500 lines of executable code (a minimal sketch of such a check follows this list). Also, periodically analyze the code by answering the question: “Do we need refactoring?”.
  3. “Clever tricks” are also unnecessary – it’s enough to be able to write in Markdown format and clearly express your thoughts. HOWEVER, you need to know different methods that SWEs can use to achieve results (for example, TDD).
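
A minimal sketch of the kind of CI warning described in point 2: flag files that grow past a line budget. The 1500-line threshold comes from the post above; the crude blank/comment filter, the `*.ts` glob, and the assumption that the check runs inside a git repo are all illustrative:

```typescript
// line-budget.ts -- sketch of a CI warning for files that exceed a line budget.
// The 1500-line threshold is from the post; the blank/comment filter is a rough proxy.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

const LIMIT = 1500;

// List tracked TypeScript files (assumes the check runs inside a git repo).
const files = execSync("git ls-files '*.ts'", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

for (const file of files) {
  // Count "executable" lines: drop blanks and single-line comments.
  const executable = readFileSync(file, "utf8")
    .split("\n")
    .filter((line) => line.trim() !== "" && !line.trim().startsWith("//"));

  if (executable.length > LIMIT) {
    console.warn(
      `WARNING: ${file} has ${executable.length} executable lines (limit ${LIMIT}); consider splitting it.`,
    );
  }
}

// Exit 0 so this stays a warning rather than a hard failure, as in the post above.
process.exit(0);
```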



I think this response kind of misses my point entirely. I wasn’t suggesting that the current approach of context engineering is wrong, or that it doesn’t work. Rather, as the essay says, it’s simply not the whole picture.

I’m really good at getting the high quality code gen results I want. But it’s always missing that last 10%, you know what I mean?

Our models are sufficiently smart already, as you point out. And we have some paradigms in place, like DRY, KISS, TDD, etc., that help mitigate the noise on the tail end of code gen. But in fast-moving startups with constant feature requests and strict deadlines, the customers typically come first, and usually at the cost of tech debt.

As you know, this tech debt accumulates and patterns change over time as the product and team grow. But if the team can grow infinitely, should our priorities change? Should we keep pumping features out at a breakneck pace, or should we take some honest time to think about how we might reorganize or adjust the architecture of the software and the practices of the firm, to potentially have an even faster feature cadence down the line?

Furthermore, I think I fundamentally disagree with you on something: your claim that writing Markdown instructions is sufficient. I agree that it is sufficient in theory for providing instructions to the agent, and we have seen some really phenomenal progress as an industry with the tools provided to us as customers. Things like rules, memories, and skills are insanely powerful. But as the essay attempts to point out, those are just pieces of guidance within an entropic system. They don’t address the other half of the problem: our software is messy and overcomplicated.

My theory is that handling said tech debt now carries stronger financial incentives for the business. I think we should all pause for a minute and just think about that statement. Is context engineering with Markdown simply adding complexity? Now we have .claude and .cursor directories in our projects containing instructions and the like. Would those be needed if our approach to the business problem were simpler?


I understand that this is partly a metaphor, but I literally have a micro-project created through Cursor that doesn’t have a .cursor folder.

I haven’t read the essay itself, but if I understand your points correctly, Microsoft already wants to hire an engineer who will churn out a million lines of Rust code per month using AI to reduce technical debt in Windows. By the way, I’m skeptical about this, because theoretically it’s possible to generate that much code, but the verification and testing will take much longer.

Overall, yes, we now have a tool that allows us to process technical debt faster. The question is whether, now that development has accelerated overall, managers will simply want to receive the final product even faster, rather than allocating the newly freed time to refinement.


This is excellent, by the way! I’ve taken to cleaning up a lot of my small personal projects as well. I love this mentality.

I’m skeptical about this, because theoretically it’s possible to generate that much code, but the verification and testing will take much longer.

Exactly. It’s not entirely about code gen; it’s about rethinking our approach to problems, in general, with this new factor included.

The question is whether, now that development has accelerated overall, managers will simply want to receive the final product even faster, rather than allocating the newly freed time to refinement.

Right, and this is what I’m struggling with at the moment. Personally, I believe it would pay dividends to slowly and progressively clean up unneeded abstractions and overlapping patterns. I also think it would be beneficial at MY company to try to help our management team understand this new approach to building a product.

In the end I think both are needed: context engineering to guide agents within the codebase, and cleaner, simpler approaches to the problems and practices of SWE. Would you agree?
What kinds of practices have you and your team invented, or invested time into adopting, that you feel are worth the effort and provide a good balance between simplicity and agent dynamics such as context management?

We’ve found great success with our internal skills system (we still have to adopt the new standard from Claude and Cursor). We’ve also had a huge amount of mental overhead removed by programmatically connecting our project-management context (Linear) to the agent chat via slash commands (not MCP), as well as by restructuring the Git branching patterns of the business to be driven by Linear. Likewise, we’ve tried to shy away from MCP usage, as it pollutes context and ultimately does not accomplish business tasks in a novel way. Instead of the Linear MCP server, we used the Linear SDK and provided the agent with skills and slash commands to help the engineer manage context as needed (a rough sketch of the idea is below). I’ve found this to be strikingly effective when managing multiple agents at once.
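
A rough sketch of the “SDK behind a slash command” idea: fetch one Linear issue with the official @linear/sdk and print a compact context block for the agent, instead of exposing a full MCP server. The environment variable name and the output shape are assumptions, and this is not the poster’s actual implementation:

```typescript
// linear-context.ts -- sketch of a slash-command backend: pull one Linear issue via
// the @linear/sdk and emit a compact context block for the agent chat.
// Env var name and output format are assumptions for illustration.
import { LinearClient } from "@linear/sdk";

async function main() {
  const issueId = process.argv[2];
  if (!issueId) throw new Error("usage: linear-context <issue-id>");

  const client = new LinearClient({ apiKey: process.env.LINEAR_API_KEY! });

  // Depending on the SDK version, this may expect the issue UUID rather than the
  // human-readable identifier (e.g. "ENG-123").
  const issue = await client.issue(issueId);
  const state = await issue.state; // relations are lazily fetched in the SDK

  console.log(
    [
      `# ${issue.identifier}: ${issue.title}`,
      `State: ${state?.name ?? "unknown"}`,
      `URL: ${issue.url}`,
      "",
      issue.description ?? "(no description)",
    ].join("\n"),
  );
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```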


I’m not sure if my approaches work at the scale of teams and companies, as I work alone and don’t yet have any products on the market.

I had prompt engineering skills before getting into agent coding, but my growth as a specialist is most likely mainly due to gaining experience as a SWE (software engineer) and system architect. Because code can be written and iterated on very quickly, you can very quickly gain experience in SOLVING PROBLEMS, rather than experience in WRITING CODE.

Now I’m trying to make my first indie game, because I’ve been dreaming about it for a long time but never had enough time to learn CODING. And I’m optimizing the entire development process for creating the project using neural networks. I’m using Rust because it has a strict compiler with excellent feedback, as well as a fantastic ecosystem of optional tooling around it.


One of the most important parts of the project is the local CI system – for checking and configuring the environment, linting, compiling, running tests, generating reports, and preparing for manual testing. The general concept can be seen in Agent Enforcer.
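
I don’t know what Agent Enforcer actually does internally, so the following is only a generic sketch of the kind of local CI gate described above: run a fixed pipeline (format check, lint, build, tests) and leave a report the agent or engineer can read before manual testing. The step list, commands, and report path are assumptions:

```typescript
// local-ci.ts -- generic sketch of a local CI gate: run each step, capture output,
// and write a single report the agent can be pointed at instead of raw scrollback.
// Not the actual Agent Enforcer implementation; steps and paths are illustrative.
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";

const steps: Array<[name: string, cmd: string]> = [
  ["format", "cargo fmt --check"],
  ["lint", "cargo clippy --all-targets -- -D warnings"],
  ["build", "cargo build"],
  ["test", "cargo test"],
];

const report: string[] = [];
let failed = false;

for (const [name, cmd] of steps) {
  try {
    const out = execSync(cmd, { encoding: "utf8", stdio: "pipe" });
    report.push(`## ${name}: OK\n${out}`);
  } catch (err: any) {
    failed = true;
    report.push(`## ${name}: FAILED\n${err.stdout ?? ""}${err.stderr ?? ""}`);
  }
}

// A plain-text report for the agent or for manual review before hand-testing.
writeFileSync("ci-report.md", report.join("\n\n"));
console.log(failed ? "Local CI failed -- see ci-report.md" : "Local CI passed");
process.exit(failed ? 1 : 0);
```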


For example, would any manager invest in such a system, which is needed for targeted test coverage generation?


One vlogger said: “You stop being a vibe coder when you start checking the output of the LLM (when you start working like an engineer).”

But again: look at Cursor. Literally every subsequent version, from 1.5 up to 2.2, was worse than the previous one from a technical point of view. That meant roughly five months of degraded experience for users. And the Cursor team has unlimited access to Cursor and all the cutting-edge LLMs, yet their own IDE works worse than my homebrew projects.

And I don’t know why they SUDDENLY decided to stop dumping crap and started dealing more actively with technical debt starting with version 2.3. Did they start losing money, or did they just decide to make their product better?