I’m looking for an MCP server that would let the AI ask me for clarification or help in the middle of its thinking/answering process.
For example, the AI could have a tool (via MCP) that follows this meta prompt:
Do not finish responding until explicitly told to do so. Instead, always call the MCP tool input_user_prompt and wait for a response. In all cases, whether you feel stuck, have a question, or have finished work on the prompt, always communicate with the user through this tool.
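The tool described above could be sketched roughly as follows. This is a minimal illustration, not a real MCP server: the name `input_user_prompt` comes from the meta prompt above, but the handler signature and the terminal-based prompt are assumptions, and an actual implementation would register the tool through an MCP SDK and surface the question in the editor's UI rather than on stdin.

```python
# Hypothetical sketch of the server-side handler for an
# "input_user_prompt" MCP tool. A real server would register this
# via an MCP SDK; here it is a plain function for illustration.

from typing import Callable

def input_user_prompt(question: str,
                      read_input: Callable[[], str] = input) -> str:
    """Show the model's question to the user, block until the user
    answers, and return the answer as the tool result so the model
    can continue within the same request.

    `read_input` is injected so the answer source can be swapped
    (terminal, GUI dialog, chat sidebar) without changing the tool.
    """
    print(f"[agent asks] {question}")
    return read_input().strip()

# The model would call this mid-response, e.g.:
# answer = input_user_prompt("Step 4 keeps failing. Revert it and try step 5?")
```

The key point is that the call blocks: from the model's perspective it is an ordinary tool call whose result happens to be a human answer, which is what keeps the whole exchange inside a single request.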
Just curious — why do you prefer using MCP to handle clarification or follow-up questions instead of simply interacting through Cursor’s built-in chat window? Is there a particular reason or benefit you’re aiming for with that approach?
It can save time and tokens. For example, if the LLM spots an issue with my prompt at the very beginning, it can ask, get an answer, and continue; it may even interview me with several questions as a conversation. That way I use just one request (with many tool calls), spend less time waiting, and fewer tokens are written by the AI and read by me.
Also, sometimes the LLM does a good implementation for, say, steps 1-3, but gets stuck in a loop (fix error, fix error, …) on step 4. Asking me a question may get the LLM out of the loop, preserve its good code from steps 1-3, and even complete step 5, all in a single request. (E.g. I may say: revert step 4, do not do it, just try step 5 and you are done for now.)
Very interesting approach. This would also partially work in a multi-agent / manager-agent flow, where parts are sent to one AI that analyzes code or reviews suggestions while another is still processing the main task.