How to use Cursor for medium-sized tasks

Hi,

I am interested in how people use Cursor/LLMs in their daily lives when it comes to different task sizes.

I am using Cursor (Claude 3.7) and it works really nicely for tasks like “generate me a function”. Most of the time a halfway descriptive function name plus parameters and types (in my case TS) will give you a more or less bug-free version of what you want. Give it a bit of description and it will generate even pretty long functions bug-free.
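For illustration (the names here are made up, not from a real project), this is the level of detail I mean: a typed stub plus a one-line comment is usually enough for Cursor to fill in a sensible body.

```ts
// Hypothetical example: descriptive name, typed parameters, short doc comment.
interface CartItem {
  priceCents: number;
  quantity: number;
}

/** Sum the item prices, apply a percentage discount, and return the total in cents. */
function calculateCartTotalCents(items: CartItem[], discountPercent: number): number {
  // The body below is the kind of thing Cursor typically fills in from the stub above.
  const subtotal = items.reduce((sum, item) => sum + item.priceCents * item.quantity, 0);
  return Math.round(subtotal * (1 - discountPercent / 100));
}
```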

It also works for generating a whole feature set (backend + frontend) by giving it, for example, just a DB schema. However, the results are pretty bug-ridden. Nevertheless, it's useful to get me going.
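To give a concrete (made-up) idea of how little input I mean, it is roughly this much, sketched here as TS types rather than my actual SQL schema:

```ts
// Hypothetical schema, only to show the amount of input I give it.
interface User {
  id: string;
  email: string;
  createdAt: Date;
}

interface Project {
  id: string;
  ownerId: string;   // references User.id
  title: string;
}

interface Task {
  id: string;
  projectId: string; // references Project.id
  description: string;
  done: boolean;
}
```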

What I am really struggling with are medium-sized tasks.
For example, the bugs in the large tasks mentioned above are kind of easy to fix, but I find it extremely hard to describe the problem (the bugs are usually scattered all over the code) and the solution in a time-efficient way. Most of the time it is much, much easier to do the coding myself.

Has anybody found strategies for doing this well?
Links, videos, or whatever are very welcome.

Thanks

It’s hit and miss with bug fixing. Prompting the AI to fix a bug can be as much of a skill as prompting the AI to build something complex. Sometimes I could swear it was a bug the AI had fixed after one ask in previous months, so I would go back to my old chats with the AI to see how I worded it. And even then it would sometimes still get confused trying to fix the issue.

On more complex bugs, I have even found the AI sometimes incapable of fixing the bug until you point out exactly what in the code is causing it. Only then will it fix the issue.

It’s best to compartmentalize your AI-developed project in such a way that, when there is an issue, you can isolate it from the rest of the project and let the AI focus solely on fixing it. Then keep the context relevant to only that issue when starting a new chat to fix it.

Some of the AI-written functions are so complexly woven together that it was easier to back up the files, wipe the functions, and ask it to write them from scratch while avoiding that bug as they are developed. This can be hit or miss as well. Sometimes it was easier for me to build the function with that bug avoided in a completely different AI platform like Websim, then bring it back over to Cursor and tell it to keep note of how it was properly built to avoid the bug.

I am not necessarily talking about bugs alone.

I am talking about medium-sized tasks where the changes are scattered around the code (vs. being more “centralised”): code changes where there is an obvious “pattern” any human can easily spot and act on, and that can be “expressed” in extremely high-level terms so that a human can search for it.
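A made-up example of the kind of pattern I mean: the same ad-hoc fetch/parse/error-handling block copy-pasted with small variations across dozens of files, which should obviously all go through one shared helper.

```ts
interface User {
  id: string;
  name: string;
}

// Before: repeated in many files with small variations (the "pattern" a human spots instantly).
async function loadUser(id: string): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as User;
}

// After: one shared helper that every scattered call site should be rewritten to use.
async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed: ${res.status} for ${url}`);
  return (await res.json()) as T;
}

const loadUserViaHelper = (id: string): Promise<User> => getJson<User>(`/api/users/${id}`);
```

Describing that kind of change to the LLM in words, file by file, often takes longer than just doing it myself.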

Again, this is something different from, for example, telling Cursor “secure my API endpoints” or something similar.

I use a dual-LLM approach. I give the main files I want to work on (could be a dozen or so, some of which have grown to 2k lines of code) to Gemini 2.5 in Google AI Studio and ask it to make a plan to refactor or address a bug. Then I use the plan to prompt Claude 3.7 in Cursor. Once it’s finished, I share the result with Gemini so it can do a peer review. Sometimes it tells me that Claude has hallucinated or is only addressing the symptom, not the cause, of a bug. Using this approach today allowed me to fix the issue with my SQLDelight tables syncing with Supabase, which I had been trying to fix for days as Claude kept adding layers of workarounds without fixing the root cause.


Compartmentalization would help, if possible.
I do a few things to make the context better; I especially find the Notepads useful.
In every project I have a basic “boilerplate” set of instructions that is common to every project, and another one specific to the project, e.g. shortcut notations for common phrases (e.g. “SL = Shopping List”), variable naming conventions, etc.
I also have one I call “features”, which is a summary of the project’s features that I have the LLM regenerate every few days. Additionally, whenever the tables change, I generate a DB schema with an SQL query and paste that into a Notepad. Adding key files to the chat context (right-click on the file name) often helps.
Version control is crucial. I have the LLM set up git in every project, and whenever a significant change is finished to my satisfaction, I commit the version to git. Whenever the LLM fails to make the next change after a lot of thrashing around, I just revert and start over, often with notes from the previous attempt pasted into the first prompt.

One of the shortcuts in my boilerplate is “■■■”, which stands for “Don’t apply any changes, let me review and apply them myself”. Another is “DGD”, which means “Don’t guess, debug”, which I typically resort to after the third attempt to fix or do something without any sign of progress. I usually have a set of debugging variables in my main file, so I can toggle the display of debugging logic that is conditional on those variables/constants. Instructions on how to construct new debugging logic are in the boilerplate Notepad.
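For what it’s worth, the shape of those debugging variables is roughly this (sketched in TS here; mine is PHP/JS, and the names are just examples):

```ts
// Debug toggles near the top of the main file; all debug output is gated on these.
const DEBUG_SYNC = false;
const DEBUG_RENDER = true;

function debugLog(enabled: boolean, ...args: unknown[]): void {
  if (enabled) console.log("[debug]", ...args);
}

// Scattered through the code; easy to switch whole areas on or off in one place,
// and easy to tell the LLM "add DEBUG_SYNC logging around the sync logic".
debugLog(DEBUG_SYNC, "starting sync", { pending: 3 });
debugLog(DEBUG_RENDER, "rendering list", { items: 12 });
```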

I do old-school PHP, so debugging involves tailing the PHP error log, JS alerts, and the browser’s Console for JS logging. Too much time is wasted forgetting to open the Console and missing JS errors that pop up; I’ve spent half an hour fighting the LLM over problems like that.

That’s what I’ve learned/developed so far. I’m not a good programmer, nor experienced in writing good code. I slap stuff together, often sporadically over years. But AI is helping me create better code, albeit somewhat schizophrenic code as the projects bloat up.
Think of AI as a genius, functional alcoholic working in a vast, dark warehouse with a penlight for illumination. Pack as much context into Notepads and attached files as you can, and keep telling it not to do more than you ask (unless you are willing to risk bad results for the chance of great results you hadn’t imagined yourself). Give it some free rein if you can be ready to revert; otherwise, keep the leash tight when you really need to get something right.
Be ready to roll up your sleeves and dig into the codebase when the LLM starts going in circles, guessing at solutions.
Tell it to bracket the relevant logic with debugging, or else it will incrementally add debug statements for hours in the wrong areas. Be specific about which parts to add debugging to.
Watch out for it creating new variables and logic because it doesn’t know the existing ones are already there, and especially beware that if it tries to reconcile different names, it will always choose the new one and start renaming the dozens of lines with the old one, but only a handful at a time. Hit the brakes when it says “I see the issue. Foo is named ‘bar’, but in this part of the code it’s called ‘BAR’”, and oversee the process with your own eyes and brain. Do an “all files” search for the two versions of the name to see which one should stay and which one should go.