How I created an enterprise app with Cursor in a few weeks

Optimizing Software Development for AI Collaboration: A New Paradigm

Through extensive experimentation with daily use of Cursor, I’ve discovered several principles that have dramatically improved my AI collaboration results. They have made the AI far more self-reliant than before and produced incredible productivity gains, helping me build an accounting and inventory system that would otherwise have taken a whole team to create.

1. Recontextualizing Documentation as Software

While tools like Cursor already index your codebase to find connections, this approach isn’t perfect. I’ve found it’s far more effective to have explicit indexing of context. This solves the frustrating problem of starting a new conversation only to have the AI agent need to understand everything from scratch. What we want is a centralized context that allows the AI to quickly traverse documentation and understand the bigger picture.
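
To make this concrete, here is a minimal sketch of what such a centralized context index might look like. The file names and structure are hypothetical, not a prescription:

```
# docs/index.md: the entry point the agent reads first

- architecture.md: layers, projects, and how a request flows through them
- conventions.md: naming, folder structure, DI registration rules
- domain.md: what the software does and why (more on this in section 4)
- lessons.md: mistakes the agent has made before and how to avoid them

Read this file at the start of a task, then open only the documents
relevant to that task.
```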

We all know the DRY principle (Don’t Repeat Yourself), but somehow we often forget to apply this to our communication with AI agents. I see developers repeatedly falling into the same pattern: they ask the AI to do something, mistakes happen, they ask for corrections, and then the next time around, they remember the previous mistakes and include those details in their new instructions – essentially repeating themselves over and over.

I’ll admit that I was initially reluctant to include comprehensive instructions on building components. It seemed like a hassle to maintain documentation in each conversation, ensure it wasn’t bloated, and remember which files to include for what purposes. And you know what? It is a lot of work. We’re going to end up with extensive instructions over time, and managing them can be challenging. But I’ve come to believe that this is exactly what the most efficient developers are going to master – creating systems to maintain and expand documentation effectively.

This requires a fundamental shift in how we view our codebase. The instructional layer isn’t just documentation – it’s a valid part of the codebase itself. At every layer of abstraction, you have one layer instructing the one beneath it. When you’re maintaining instructions for the AI, you’re programming the software. Those .txt and .md files aren’t just instructions; they’re integral pieces of your software.

Of course, you won’t be writing and maintaining all this documentation manually. As a developer, your role is to create systems where the AI documents, sorts, indexes, maintains, and refactors the documentation as the software evolves. What needs documenting? Everything you find yourself repeatedly reminding the AI about when performing tasks.

The documentation feedback loop can be as simple as manual feedback like “You forgot to provide the service in the service provider. Please look at startup.cs.” The AI then fixes the issue and updates the documentation by adding “When adding services, add the service in the service provider in startup.cs.” Another powerful approach is using lint errors – prompting the agent to document every error it fixes so it learns from mistakes and avoids them in the future.
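
For example, the entry the AI appends after such a correction might look like this. This is a hypothetical lessons-learned file, not verbatim from my project:

```
## Dependency injection (excerpt from lessons.md)
- When adding a service, register it in the service provider in startup.cs.

## Lint errors previously fixed
- TS2345 (argument type mismatch): check the generated DTO interfaces first;
  the backend contract may have changed.
```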

2. Creating Powerful Feedback Cycles

For AI to truly become self-reliant, it needs robust self-correction mechanisms. I’ve found two particularly effective feedback cycles:

First, there’s the power of lint errors. The Cursor agent already includes iterative lint-error checks on files, which is fantastic, but when coupled with documentation on how to solve previous lint errors, it becomes incredibly powerful. This has led me to strongly prefer languages with robust lint error messages. While strongly-typed programming has always made development easier, AI takes this advantage to another level.

In my front-end application, I’ve taken lint errors very seriously. I use Reinforced.Typings to generate TypeScript interfaces from C# classes for all my DTOs, and it even generates interfaces for all controllers with correct return types and required payload DTOs. When I add a new endpoint or change a return-type DTO and build the project, I immediately get linter errors in my front-end service class. These errors cascade through my components, forcing everything to align with the new changes.
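
To illustrate the cascade, the generated contracts and the service pinned to them might look roughly like this. The type and endpoint names are invented for the example, not actual output from my project:

```typescript
// generated/contracts.ts: regenerated from the C# DTOs on every build
export interface InvoiceDto {
  id: number;
  customerName: string;
  totalAmount: number; // renaming or retyping the C# property changes this too
}

// generated/controllers.ts: one interface per C# controller
export interface IInvoiceController {
  getInvoice(id: number): Promise<InvoiceDto>;
}

// invoice.service.ts: hand-written, pinned to the generated contract
export class InvoiceService implements IInvoiceController {
  async getInvoice(id: number): Promise<InvoiceDto> {
    const response = await fetch(`/api/invoices/${id}`);
    // Any backend change to InvoiceDto breaks this file, and every component
    // that consumes it, at compile time. That is exactly the feedback we want.
    return (await response.json()) as InvoiceDto;
  }
}
```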

I’ve also moved away from magic strings for localization, implementing strongly typed keys using TypeScript. When the AI develops new features and enters keys that don’t exist yet, it reads the lint errors and knows it needs to add or find existing keys in the translation schema interface. Since the en.ts, es.ts, and fr.ts files implementing the translation schema don’t contain the new keys, we get compile errors. Thanks to what I call the “cached instructions pattern,” the AI remembers to add new translations to all files, automatically translating them using its language model. This allows for lightning-fast development despite the complexity.
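
A minimal sketch of the strongly typed key pattern, with hypothetical key names:

```typescript
// translation-schema.ts: the single source of truth for localization keys
export interface TranslationSchema {
  invoiceCreated: string;
  inventoryLow: string;
}

// en.ts: must implement every key in the schema
export const en: TranslationSchema = {
  invoiceCreated: 'Invoice created successfully',
  inventoryLow: 'Inventory is running low',
};
// Adding a key to TranslationSchema without updating en.ts (and es.ts, fr.ts)
// is a compile error, which is what prompts the agent to fill in every
// translation before the build goes green.
```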

The second powerful feedback mechanism is automated testing. Unit tests, integration tests, and end-to-end tests provide feedback that goes beyond lint errors to catch logical issues. While these tests might become numerous over time, we need to embrace this new paradigm where the focus is on making a codebase that enables self-reliant, automated development.

I’ve found that combining Test-Driven Development (TDD) with AI agents (with YOLO mode on) is incredibly powerful. My development process often looks like this:

  1. I ask the AI to create unit tests for new functionality, not worrying about compilation errors or test failures
  2. The AI creates the tests
  3. I ask it to run the tests, analyze the results, and fix any failures
  4. The AI runs the tests, reads the output, makes changes, and repeats this cycle
  5. Often, I’ll return to my computer to find it has run through multiple iterations and the code is now perfect
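
As an illustration of step 1, here is the kind of test I might ask for before any implementation exists. Vitest is assumed here, and the function and file names are hypothetical:

```typescript
// invoice-totals.test.ts: written before the implementation exists
import { describe, expect, it } from 'vitest';
import { calculateInvoiceTotal } from './invoice-totals'; // created in step 2

describe('calculateInvoiceTotal', () => {
  it('sums line items and applies the tax rate', () => {
    const lines = [
      { quantity: 2, unitPrice: 10 },
      { quantity: 1, unitPrice: 5 },
    ];
    // 25 subtotal + 10% tax = 27.50; the agent iterates until this passes
    expect(calculateInvoiceTotal(lines, 0.1)).toBeCloseTo(27.5);
  });
});
```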

3. Embracing Convention First

Since AI is built upon reinforced data, we should lean into mature, popular programming languages. This has always been good practice for getting help on Stack Overflow, but it’s even more crucial with AI. Furthermore, we should prioritize standard and conventional architecture whenever possible, as the AI has encountered these patterns countless times before.

4. Focusing on Domain Knowledge for Implicitness

I’ve come to view documentation through the AI’s eyes as essentially an index. While AI is phenomenal at interpreting code’s meaning, as the codebase grows, the context window must expand as well. We should use documentation to capture the essence of what our code does, creating what I call a “Domain index.”
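
A domain index entry can be as short as this sketch; the contents are hypothetical:

```
## Inventory (excerpt from domain.md)

Problem: the business loses sales when stock runs out unnoticed.
Core models: Product, StockLevel, PurchaseOrder.
Invariant: stock on hand never goes negative; a sale is blocked instead.
Growth: stock movement history grows without bound, so prefer paginated,
server-side queries for anything that lists it.
```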

Many developers, myself included initially, only think of AI as executing our ideas or market requests. Some use AI for brainstorming, but often there’s a split between these functions. I believe we should merge these properties, allowing AI to brainstorm and develop simultaneously through comprehensive domain knowledge coupled with specific file pointers.

Try starting by asking the AI to derive a summary of domain knowledge from your core models. The crucial aspect of domain documentation is capturing the “ghost in the machine” – the human problem or desire that the software aims to solve. When the AI understands the philosophical background of your software, the real-life domain, and how components work together to solve the original problem, it becomes fully aligned with the software’s purpose and can not only solve technical problems better but take initiative in expanding capabilities.

While it might seem far-fetched that an AI agent can be truly self-reliant, we’ve been conditioned to think only we can know when code needs refactoring or optimization. However, I’ve found that AI is smart enough to derive likely scenarios based on domain knowledge. For instance, with proper domain context, it can reason that a particular model will likely accumulate vast quantities of records over time and implement server-side pagination without explicit instruction.
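
For instance, it might reach for a conventional paged contract like this one, a generic sketch rather than code from my project:

```typescript
// A conventional server-side pagination contract for models that grow unbounded
export interface PagedRequest {
  page: number;       // 1-based page index
  pageSize: number;   // rows per page, capped on the server
}

export interface PagedResult<T> {
  items: T[];         // only the requested slice crosses the wire
  totalCount: number; // lets the UI render page controls without loading everything
}
```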

I understand that setting all of this up is a strain, and that you might prefer to just keep explicitly instructing the AI to give the table server-side pagination when you realize the need for it. But this brings me to the next point:

5. Developing as if the AI Was More Powerful Than It Is

Finally, here’s a strategic piece of advice: LLMs are only going to become more powerful. We should think like chess masters, seeing several moves ahead and preparing our coding practices for this future. While setting up self-validated automated testing, domain documentation, and cached-instruction patterns might seem like hard work now, it’s worthwhile to learn how to leverage AI in ways that will be compatible with increasingly powerful models.

The future belongs to developers who can create systems that enable AI to understand, maintain, and evolve software with increasing autonomy. This isn’t just about optimizing our current practices – it’s about fundamentally reimagining how we build software in the age of AI collaboration. Right now there exist three types of developers: 1. those who are in complete denial of AI, coding everything by hand and Stack Overflow; 2. those who leverage Cursor and LLM chats to do what they otherwise would have done; and 3, the kind I’ve recommended here: those who shape their codebase for AI agents.
