Hi,
I’ve been using Cursor for 3–4 months now, primarily for its tab completion, which feels like a better Copilot. Until recently, I hadn’t found much value in AI chat/composer tools: most responses I got were either incomplete or outright hallucinated.
That changed yesterday when I discovered the .cursorrules feature. Out of curiosity, I created a simple 20-line .cursorrules file describing my TypeScript monorepo (5–6 packages, about 100k lines of code, and the basic relationships between them). Then I asked Composer for a quality-of-life improvement, and it nailed the task in one shot, making the necessary changes across all relevant packages.
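In case it helps anyone, a .cursorrules file is just plain text that Cursor feeds to the model alongside your prompts. Mine looked roughly like the sketch below (the package names and conventions here are invented for illustration, not my actual repo):

```
# Project: TypeScript monorepo, ~100k LOC, pnpm workspaces

## Packages
- packages/core: shared domain types and utilities; depends on nothing
- packages/api: Express REST server; depends on core
- packages/web: React frontend; depends on core, calls api over HTTP
- packages/cli: command-line tools; depends on core and the api client

## Conventions
- Strict TypeScript ("strict": true); avoid `any`
- When a shared type in core changes, update every dependent package
- Exported functions get JSDoc comments
```

Nothing fancy: I suspect the win comes from the model knowing which package owns what before it starts editing.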
Later in the evening, I started a new project involving ML, PyTorch, and OpenCV. After about 150 premium prompts, I had a fully functional project of around 2,000 lines of type-hinted Python. And the best part? I didn’t write a single line myself. Instead, I guided Composer, asking it to write type-hinted, well-documented code (roughly the style sketched below) and to explain unfamiliar concepts in terms of Node.js/npm, an ecosystem I’m familiar with.
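For a sense of what that looked like in practice, here’s a hypothetical snippet in the style I kept asking for: explicit type hints, a docstring, and an obvious failure mode. The function itself is made up for illustration, not taken from the project:

```python
from pathlib import Path

import cv2
import numpy as np
from numpy.typing import NDArray


def load_grayscale(path: Path) -> NDArray[np.uint8]:
    """Load an image from disk as a single-channel grayscale array.

    Raises FileNotFoundError if the file is missing or unreadable,
    instead of silently returning None like the raw OpenCV call does.
    """
    image = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(f"could not read image: {path}")
    return image
```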
Even though I don’t fully know Python’s syntax or ecosystem, Composer made it possible for me to guide the process, spot errors, and let it handle the heavy lifting. The functions it generated just worked—and the documentation it provided made everything easy to follow.
Maybe basic Python/ML is a “solved” problem for LLMs, but this experience has left me far more optimistic than I was two days ago. I had assumed Composer’s context only extended ±20 lines around the cursor, or whatever it had last read from the buffer. Clearly, it’s capable of much more.
The only suggestion I’d make is to add more documentation for these fantastic features. Things like the .cursorrules file, codebase indexing, and the @docs tag still feel a bit magical. A deeper explanation of how they work under the hood would be much appreciated. I know there’s probably secret sauce you want to keep, but the docs feel too shallow at the moment.