By using this MCP, you can significantly improve code quality and get more value out of your Cursor and Windsurf credits.

Hope this is helpful for you.

MCP Interactive Service

This is an MCP service implemented using the FastMCP library, designed for interaction with AI tools like Cursor, Windsurf, etc. When AI tools need user input or option selection while calling large language models, they can invoke this MCP service.


Core Purpose

The core purpose of this plugin is to enable high-frequency communication and confirmation between AI tools (like Cursor and Windsurf) and users. It significantly improves the efficiency and effectiveness of AI interactions by:

  1. Reducing Wasted Resources: By allowing users to confirm or redirect AI’s approach before it commits to a potentially incorrect solution path, the plugin minimizes wasted API calls and computational resources.
  2. Maximizing Resource Utilization: Every API call to Cursor or Windsurf becomes more productive as the AI can verify its understanding and approach with the user before proceeding.
  3. Preventing Attention Fragmentation: By confirming approaches early, the plugin helps maintain focus on the correct solution path rather than having attention diverted to incorrect approaches.
  4. Enabling Interactive Decision Making: Users can actively participate in the decision-making process, providing immediate feedback and guidance to the AI.
  5. Streamlining Complex Tasks: For multi-step tasks, the plugin ensures alignment between user expectations and AI execution at each critical decision point.

Ahh… this is useful. It still doesn't help that Cursor only gives 25 tool calls per request, but it's still worth it.


Of course, Cursor only gives 25 tool calls, but most of the time I could only make use of three to five of them; now I can use all 25 :grinning_face_with_smiling_eyes:

Interesting - how do you extract what steps the LLM is planning to take? Especially with Gemini 2.5, where it sometimes plans out one multi-step path, realises it won't do what is needed, and then plans a completely different multi-step path?

A similar tool: GitHub - nazar256/user-prompt-mcp: A Model Context Protocol (MCP) server for Cursor that enables requesting user input during generation
I've also forked it at GitHub - normalnormie/user-prompt-mcp: A Model Context Protocol (MCP) server for Cursor that enables requesting user input during generation
My fork adds a Vibeframe (webview) UI and supports Windows too. The original project may adopt my improvements, since the fork is more customizable and cross-platform:


That is cool. If I had found it earlier, I wouldn't have written one myself.


Perhaps you could also consider using the CMD-based prompting method. This allows all steps to be completed entirely within the Cursor chat window, without any additional steps.

  1. When the Agent has a question, it asks via CMD, and you respond in the same CMD window.

  2. The Agent receives your reply and continues execution.
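The two steps above can be approximated with a tiny script the agent runs in the terminal. The filename `ask.py` and its one-question-per-invocation contract are assumptions for illustration, not a description of how Cursor actually wires this up:

```python
# ask.py - hypothetical helper: the agent runs `python ask.py "question"`,
# the user types an answer in the same terminal, and the agent reads the
# echoed reply from the command's output before continuing.
import sys

def ask(question: str, read=input, write=print) -> str:
    write(f"[agent] {question}")
    answer = read().strip()
    write(f"[user] {answer}")  # echoed so the agent can parse the reply
    return answer

if __name__ == "__main__":
    ask(" ".join(sys.argv[1:]) or "Continue? (y/n)")
```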

CMD-based prompting meets basic needs. But sometimes, while waiting for the AI to work, I need to do other things, so I'd like the AI to pop up a dialog box when it asks me :grin:

Actually, I implemented a very similar MCP server for this purpose last month.

It is implemented in Node.js and is easier to set up. Feel free to check it out!

https://github.com/ttommyth/interactive-mcp


Neat implementation! Unfortunately, on my system the UI does not appear and no errors show in the developer tools. One great feature of my implementation is that it integrates inside VS Code, but the downside is that it requires extra setup steps.

I built an executable program that is available for testing on Windows; you can give it a try.

In addition, just a reminder: Cursor's MCP timeout is very short, only about 3 minutes, so please try to complete each interaction within 3 minutes.
