I started testing queued messages, and at first it seemed great, but I found an inconvenient interaction with the new agent todo list.
Previously, you could add a new message and get the agent to redirect its actions when it got off track mid-flow. For example, if it started debugging incorrectly, I could inject some context like “the server is already running on port 4000” and have it continue without a full stop or restore checkpoint.
Now, with the todo list combined with queued messages, the injected message is queued after the end of the entire todo list. So if the todo list has several steps and one of them sends the agent off track, I have to do a full stop or restore checkpoint instead of my previous workflow.
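To make the difference concrete, here is a toy model of the two behaviors (the types and function names are mine for illustration, not Cursor’s actual internals):

```typescript
type Step = string;

// Old behavior: an injected message takes effect at the current position in the flow.
function injectMidFlow(steps: Step[], current: number, injected: Step): Step[] {
  return [...steps.slice(0, current + 1), injected, ...steps.slice(current + 1)];
}

// New behavior: the injected message is simply queued after the entire todo list.
function queueAfterTodos(steps: Step[], injected: Step): Step[] {
  return [...steps, injected];
}

const todos = ["step 1", "step 2 (debugging the wrong thing)", "step 3", "step 4", "step 5"];
const note = "the server is already running on port 4000";

console.log(injectMidFlow(todos, 1, note)); // note lands right after step 2
console.log(queueAfterTodos(todos, note)); // note lands after step 5
```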
Is there still a way to achieve the original mid-flow context injection that I might be missing?
Did you test it? I am seriously asking, because in my case it gets stuck on “Generating…” forever. Combined with all the drama around pricing, requests, etc., you can see why people can be a bit emotional about this.
Yes, I tested it, and it works well for me. Does the generation get stuck in a new chat for you or in an ongoing one? Could you also check the DevTools panel for any errors?
It’s about the flow: the agent is somewhere mid-flow (let’s say on the 2nd of 5 steps), and you have to click stop. Then you write a new prompt, which in my case just becomes the last step in the todo list, so you have to stop again and click the icon next to that step to execute it at that very moment.
When it gets stuck on “Generating…”, it stays stuck no matter what; the only thing that helps in that case is restarting Cursor (macOS). When this starts happening, the flow I described above also stops working: when you stop it and try to execute that new task (clicking the button), you fully lose the prompt, can’t see what’s there, etc.
It would be helpful if the todo lists were editable, so that I could edit the items the LLM creates in the queue before they are executed. I assumed it would already work like this, but it hasn’t for me.