I’m a senior dev with years of experience, new to Cursor, and still on trial. I wrote a SQL-based JavaScript app by hand, which took about 40-50 hours. I struggled with JetBrains’ new AI, and watched Cline eat up my $25 in Anthropic credits and utterly fail to build a similar app. Cursor helped me do it with a mere 10 credits. I am very impressed.
I’m trying to answer the question: is it worth it to spend the time to craft comprehensive prompts, or is it better to iterate? If “crafting comprehensive prompts” is the way to go, are there any guidelines for creating them?
After a few years of using AI and prompting, here is my takeaway:
Like programming, you need to understand how LLMs work: how they process information, what they can do and why, and what they can’t and why not.
You will see that every newer model improves over the course of a year.
You do not need to jump to the latest model, but use one that is reported as strong.
Some models are better at planning and others better at coding.
The amount of prompt text matters, including attached files, documentation, etc. The more content you provide, the harder it is for the AI to filter out the right information.
Iterative prompting works only if you know how to catch the AI taking a wrong approach; otherwise it can easily waste time and requests on adjustments.
Do not be afraid to go back to a prior prompt in your thread and adjust it, instead of asking for corrections in the latest prompt.
Commit very frequently so changes are not lost, even if they are not yet 100% correct.
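A minimal checkpoint habit looks something like this (paths and the commit message are just placeholders):

```bash
# Commit after every AI pass that builds, even if not 100% correct,
# so a bad follow-up edit is never a disaster.
git add -A
git commit -m "wip: checkpoint after agent edit"

# If the next AI edit breaks things, discard the uncommitted changes:
git restore .
```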
Then there are also more advanced topics:
Cursor Rule files (guidelines for each step of development: how and what to do). Best kept short and on point with the core details; avoid lengths over 250 characters, and write a proper description in the .mdc rule file, as it is used to select the rule file.
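As a sketch, a minimal .mdc rule file could look like this (the description, glob, and rules here are hypothetical; check the current Cursor docs for the exact frontmatter fields):

```
---
description: Apply this rule when developing React frontend components
globs: src/components/**/*.jsx
---

- Use functional components with hooks, no class components.
- Keep each component in its own file, under ~150 lines.
- Co-locate tests next to the component they cover.
```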
Guide LLMs to write better code! Funnily enough, better coding techniques (how to write effective, clean, precise, and well-structured code) help avoid mistakes for AI and humans alike.
Always write tests for all code. Otherwise, changes the AI makes to implement a feature or fix a bug may break your app.
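Even a minimal regression test catches most accidental breakage. A sketch using Jest (sum() is a hypothetical helper):

```js
// utils.test.js -- hypothetical guard against AI regressions.
const { sum } = require('./utils'); // assumed helper module

test('sum adds two numbers', () => {
  expect(sum(2, 3)).toBe(5);
});

test('sum of no arguments is 0', () => {
  expect(sum()).toBe(0);
});
```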
YOLO mode in agent (does everything automatically, but with the risk that edits and commands run without your review, so use it with care).
Most critical:
Read the Cursor documentation (due to the upgrade to v0.46 it’s not 100% up to date, but it has good pointers) and try things out.
Most people’s complaints about Cursor are due to misunderstanding one of the points above.
(Yes, you will sometimes have to nudge the AI toward the correct approach, or explain why something breaks; it’s not magic yet.)
Seriously! Thanks, your reply is exactly the type of information I was looking for.
I started writing languages and parsers in university, and all my client solutions use some sort of code generation. AI is just a better tool for what I’ve always done. I use generation to do all the basics and then use my skillset to fill in the holes the generator couldn’t fill.
With AI, I can already see it does a far superior job filling in the holes, but I think what you are saying is I have to be extra careful because AI does hallucinate (and I’ll make mistakes, too), so I should be extra diligent to ensure the AI isn’t going off on some unintended tangent?
Hallucinations have become rarer, but we as humans often understand more of the project context (business rules, development details, etc.), since we are not limited to a small context window the way LLMs are.
Yes, the ‘manual labor’ has been simplified, and now we can focus on the more complex parts.
It does happen that an LLM gets stuck fixing some issue in circles, especially with more complex code, longer code, or when the solution isn’t straightforward. So catching it there, and stopping it after it edits the same spot for the x-th time, is important. Many people just ask it to continue fixing without any further guidance, which won’t be helpful to the LLM at all.
This can happen when coding, too, as LLMs are trained on ‘content’ that may include bad-practice code. Sure, the models improve every few months, but you must provide guidance.
Treat the LLM like a junior developer and it will work well.
I found business rule exceptions to be the time killer in application dev. For example, we charge tax, but foreigners are exempt. It seems simple enough to say, but what a pain to implement.
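To illustrate (the rate and the exemption checks here are invented), even the one-line version of the rule hides edge cases:

```js
// Hypothetical tax rule sketch; amounts in cents to avoid float issues.
const TAX_RATE_PCT = 19; // assumed domestic rate, in percent

function calcTotalCents(amountCents, customer) {
  // 'Foreigners are exempt' already needs a validated address in practice,
  // and every new exception becomes another branch here.
  const exempt = customer.isForeign && customer.addressValidated;
  if (exempt) return amountCents;
  return amountCents + Math.round((amountCents * TAX_RATE_PCT) / 100);
}

console.log(calcTotalCents(10000, { isForeign: true, addressValidated: true })); // 10000
console.log(calcTotalCents(10000, { isForeign: false })); // 11900
```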
Your junior dev comment is funny, as I’ve used a nearly identical analogy to describe AI to others, “fresh out of school junior who doesn’t know anything about you, so don’t assume things when chatting with them.”
Completely agree; that’s why a good project structure (app structure, following best practices, NOT taking any shortcuts, …), together with project requirements and implementation strategies provided to the LLM, can lead to better results.
Editing legacy code is far less performant/accurate, which is essentially the same as with humans.
The current limit for code sizes being sent to the LLM is 250 lines.
This has to do with overly large files confusing LLMs.
It’s not necessarily about file size alone, but also about content that is unrelated.
Yes, technically, larger rule files can be used.
Any large thread or context is going to confuse the LLM eventually.
Code files that are too large are usually not well thought through. (I’m oversimplifying, but depending on the programming language and framework you can achieve a better separation of what each file does and keep the files smaller. That makes editing a file simpler and faster, with no mix-ups with other details.)
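A hypothetical before/after of that separation (the file names are invented):

```js
// Instead of one 800-line checkout.js, split by responsibility:

// taxRules.js -- pure business rules, easy for an LLM to edit in isolation
export const TAX_RATE_PCT = 19;
export const isExempt = (customer) => customer.isForeign;

// pricing.js -- composes the rules, still no I/O
import { TAX_RATE_PCT, isExempt } from './taxRules.js';
export const calcTotalCents = (cents, customer) =>
  isExempt(customer) ? cents : cents + Math.round((cents * TAX_RATE_PCT) / 100);

// checkout.js would then keep only the HTTP/UI wiring.
```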
LLMs tend to skip or forget steps if too many rules are in one file.
In general: if you have a rule file for a specific development step or language, does it need ALL the details of that language/framework/tool/process in the file? It is most likely better to separate independent items into their own files, e.g. planning, development, testing, …, and likewise programming languages like JS, or tools like git, etc.
LLMs sometimes skip steps or details when the rules are too complex, too long, or overly verbose (not only in Cursor).
Cursor has access to all the rules and picks the most applicable one based on the description header in each file (e.g. “Apply this rule when developing React frontend…” or “Apply this rule when testing with tool X…”).
Also for humans it’s better to have the rules in separate files by topic/tool/etc. than to place everything into one file.
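A possible layout (the file names are hypothetical; the description header inside each file is what Cursor matches on):

```
.cursor/rules/
  planning.mdc     # "Apply this rule when planning features or refactors"
  javascript.mdc   # "Apply this rule when writing or editing JS code"
  testing.mdc      # "Apply this rule when writing or running tests"
  git.mdc          # "Apply this rule when committing or branching"
```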