How do I iterate on all files automatically?

If I tell Cursor, “Inspect every test file in the entire project codebase and add a comment to the top that says ‘My hovercraft is full of eels.’,” it tells me that it’s identifying all the test files with a proper `find` or such, then processes one or a few, but never more than a handful. It then asks me if I’d like to continue.

Of course I want to continue! :slight_smile:

I can’t seem to say anything in the prompt to convince it that I want it to iterate across all test files. Or all source files. And, of course, asking me if I want to continue burns another query in my quota!

I’m certain I’m doing something wrong. Adult supervision?

There are so many options to choose from that I’m likely not even going to think of the most obvious one:

  1. Cursor can filter the files in a folder that do not contain that magic line. Just tell it to do that and to add the line where it isn’t found.
  2. No, it’s not a robot that goes through every file in one request. But you can ask it to write you a script that goes through every file and adds the line (see the script sketch further down).
  3. If you ask it to add the line itself, give it clear instructions, like: keep checking for files that do not have that line yet, and add it where it’s missing.
  4. Cursor has an execution limit for ‘tools’ like search or file edits of 25 tool executions per request, simply because each one involves several internal AI calls: a) figure out what you are asking for and list the steps to accomplish it, b) call the tool that tries to find the file, c) pass that info to another AI call that checks whether the file is the right one and whether it has the line, d) likely read the file, e) think about what needs to change and where in that file, f) make the edit and tell you about it (more than one AI call), and then the cycle repeats with the next file search. At 25 such search/read/edit tool calls it stops, as more steps would pollute the context and hallucinations would occur.
  5. After 25 tool calls you are shown an option to continue with a new request, which gives you another 25 tool calls with the same process and limits; this repeats until you are done. For now, continuing with the next request is manual, but people have asked Cursor for automation of such repeat processes. Not that it’s always really useful, as sometimes you have to work smarter, not harder.

Seriously, most professional languages nowadays have tooling for editing their own code, like telling a language library/package to check for and add a line across a specific set of files.

If yours doesn’t, then ask the AI to write you a script that can run on your machine and adds the magic line to all files in a folder. If your need is that simple, it will work for sure, but let’s assume it isn’t that simple.
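For illustration, here is a minimal sketch of such a script in Python. It assumes your test files match `test_*.py` and use `#` comments; the marker text, the glob pattern, and the comment syntax are placeholders to adjust for your stack:

```python
#!/usr/bin/env python3
"""Add a marker comment to the top of every test file that lacks it.

A minimal sketch: assumes test files named test_*.py with '#' comments.
Adjust MAGIC, the glob, and the comment syntax for your own stack.
"""
from pathlib import Path

MAGIC = "# My hovercraft is full of eels."
PROJECT_ROOT = Path(".")  # run this from your project root

for path in PROJECT_ROOT.rglob("test_*.py"):
    text = path.read_text(encoding="utf-8")
    # Skip files that already start with the magic line.
    if text.startswith(MAGIC):
        continue
    # Prepend the marker and write the file back.
    path.write_text(MAGIC + "\n" + text, encoding="utf-8")
    print(f"updated {path}")
```

Because it skips files that already start with the marker, it’s safe to re-run whenever new test files appear.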

  1. Never answer the AI’s questions unless it asks you to provide specific info it didn’t have beforehand. It’s a parrot that was trained to say ‘would you like me to do this for you?’ because otherwise it wouldn’t sound ‘polite’ and ‘helpful’.

Therefore, tell it what to do, and if it doesn’t, remind it that its task is to keep doing this until it completes it. Don’t answer questions with yes or no. If it needs specific info that it can’t derive itself or couldn’t know, sure, give it the info and tell it to do its job. (Oh my, Skynet is going to hate me for writing this comment, but that’s decades in the future.)

  2. Pick the right AI model for the job. Not all models are good for all kinds of tasks, so you have to pick one that isn’t overly creative but is good enough to do the task. For your need, I think Claude 3.5 would likely be best. For good measure, you can always add the requirement to make no modifications other than those you asked for. (Some other models need that info 1000% and could still go haywire.)

  3. The longer your chat thread goes on, the more info/context goes into each next request to the AI, which at some point may become too much, causing confusion or conflicting info and then hallucinations. If you see a line at the bottom of the chat saying that the thread is getting long, with a “New Chat” button, use it, or just start a fresh chat with the + button at the top right, since the task is so formulaic.

I’m sure there are 20 more options I just didn’t think of; low batteries, I suppose.

(Edited with more ideas than in the initial comment, and with more details for better understanding.)

Those are some good suggestions. I simplified the task for the sake of this post.

But let’s presume my task is something much more complex like, “Iterate across every test file in my project and apply the best practices and improvements that I have added to my rules file.”

Because that rules file gets new instructions every time I discover something useful. I could see myself running that command every few days.

But, as I said, it only executes a handful before stopping and asking if I want it to continue, which is just annoying :wink:

Great, that’s what I assumed.

Lots more possible causes, many of which are visible in other posts on the forum but might not be your causes.

  • Too much context
  • Too little context
  • Too specific changes
  • Not specific enough changes
  • Too many too specific changes
  • Wrong model (very likely)
  • Not precise enough prompt (?)
  • Contradicting conditions in the prompt
  • Contradicting conditions in the tests
  • The same in Rules
  • Too many rules
  • Not enough rules
  • Too long rules
  • Not long enough rules
  • Quirky personality instructions (talk to me like a pirate :slight_smile: )
  • Not enough persona defined
  • Too creative/independent a model for the task? (The best models aren’t always the best for some tasks; plus 3.7 and other thinking models have a high rate of imaginative thinking, which hijacks the process with false thought processes.)

As you mentioned that the rules file keeps getting more instructions, ask the AI to do a few changes at a time, NOT too many. Too many gives it lots of things to focus on and confuses it. Break the task down into logical sets of matching steps and let it do those improvements.

Ideally you would prepare what else needs to be adjusted before you start iterating on the files. Ask the AI to review the tests for best practices or other things that could be improved for your purposes. Often a well-prepared plan helps a lot to prevent ‘many small changes’.

Not sure what your prompting experience is, but you would likely want to learn more and let the AI optimize your prompts for the specific model that is best for that task, with the expectations spelled out, etc. This helps a lot to keep it from just asking questions. It’s like a mindset.

Sure there is more possible :slight_smile:

Some conditions let the model think it has done all it needed to do. Shorter chat threads help with this, so the AI stays focused.

Best is to stay technical and ask it to do specifics. Don’t get emotional, as that distracts your brain from staying focused, etc.

A few more ideas, there are really countless possibilities:

  • Ask it to create an MD file with a checklist and to work off that; tell it to update the checklist when it finishes a few steps and to continue until it’s done (see Plan vs Act or RIPER mode in forum threads for good ideas on how to let it self-manage the task better). A sketch of such a checklist follows after this list.
  • Some MCPs could help with splitting the focus or structuring the steps early on (sequential thinking, etc.).
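For illustration, a minimal checklist file might look like the sketch below; the file name, paths, and wording are hypothetical placeholders. The idea is that the AI checks off items as it finishes a batch and, on each new request, resumes from the first unchecked item:

```markdown
<!-- test-improvements.md — hypothetical progress tracker the AI maintains -->
# Apply rules-file improvements to all test files

Work top to bottom. After finishing a batch, check the boxes,
save this file, and continue with the first unchecked item.

- [x] tests/test_auth.py
- [x] tests/test_api.py
- [ ] tests/test_models.py
- [ ] tests/test_utils.py
- [ ] tests/test_cli.py
```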