Hi @nedcodes
My team and I use Cursor on a huge monorepo.
Agents, hooks, rules, and skills must be tracked in Git so you can iterate on them and easily distribute them to your team.
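As a concrete sketch of what this looks like in the repo (the filenames are illustrative; Cursor picks up project rules from a `.cursor/rules` directory, typically as `.mdc` files with a small frontmatter header):

```text
repo-root/
├── .cursor/
│   └── rules/
│       ├── typescript-style.mdc   # hypothetical rule: coding conventions
│       └── testing.mdc            # hypothetical rule: how to write/run tests
├── apps/
└── packages/
```

Because these files live next to the code, a rule change ships in a normal PR and reaches everyone the same way code does.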
One caveat: depending on your Git strategy, your team can lag behind when you introduce new rules or update agents (for example, if people sit on long-lived feature branches). We use trunk-based development, so the “time to master” for these updates is very short.
Humans are humans; they don’t change. You need to have your company’s agent “harness” ready, and your AI feedback loop must be solid. The AI should be able to lint, run tests, and make changes to your codebase without hitting issues like port conflicts or missing dependencies. A localhost-first setup is a strong advantage.
You need a very clean repository where everything can be started with a single command, and where code-generation steps are properly handled in each app’s startup process. Once this works, the magic happens, and adoption becomes much easier.
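A minimal sketch of what “one command” can mean in practice, assuming a Makefile at the repo root (all target, service, and script names here are hypothetical):

```make
# Hypothetical root Makefile: `make dev` boots the whole stack locally.
.PHONY: dev deps codegen

dev: deps codegen
	docker compose up -d postgres redis   # shared local services
	pnpm run dev                          # start every app in the workspace

# Code generation is part of startup, never a manual step someone forgets.
codegen:
	pnpm run codegen

deps:
	pnpm install --frozen-lockfile
```

The exact tooling matters less than the property: a fresh clone plus one command gives a working environment, for humans and for the agent alike.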
From a team perspective, things break quickly if your contribution workflow is not AI-friendly. If the AI cannot iterate on the project and run tests without human intervention, productivity drops. For example, frontend applications need proper end-to-end testing (like Cypress). You need strong test coverage: unit, integration, and E2E tests.
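The key to “the AI can iterate without human intervention” is that every check has a headless, non-interactive entry point. A sketch of what that can look like in a frontend app’s `package.json` (script names are illustrative; `vitest run` and `cypress run` are the non-interactive modes of those tools):

```json
{
  "scripts": {
    "lint": "eslint . --max-warnings 0",
    "test:unit": "vitest run",
    "test:e2e": "cypress run",
    "check": "npm run lint && npm run test:unit && npm run test:e2e"
  }
}
```

If the agent can always fall back to a single `check` command that exits non-zero on any failure, it can close its own feedback loop.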
Progressively, you’ll realize that the review process on GitHub or GitLab becomes the bottleneck. You may lack reviewers, depending on your team’s experience level. Some team members who were strong at “producing” code will need to become strong reviewers, and that transition is not easy for everyone.
In the long term, the review process becomes the main constraint, but it must not be bypassed. AI-generated code needs to be reviewed. You can add AI review tools like CodeRabbit, Cursor BugBot, etc., to help catch technical issues, but you will still need human review if you don’t want to put your company at risk.