New MCP — fetchsandbox: test API integrations your AI agent wrote (50+ specs, 30s install)

I was trying to get my Cursor agent to test the API integrations it writes, and ended up shipping fetchsandbox-mcp.

Above: the markdown report the agent writes to my repo after running a Stripe payment workflow end-to-end. Mermaid diagram of every step. Each <details> block has the full request/response JSON. Webhook payloads at the bottom. Committable as-is.

Setup in Cursor (~30s):

  1. npm i -g fetchsandbox-mcp

  2. Paste config into ~/.cursor/mcp.json (a variant without npx is sketched after step 5):

    {
      "mcpServers": {
        "fetchsandbox": {
          "command": "npx",
          "args": ["-y", "fetchsandbox-mcp"]
        }
      }
    }
    
  3. Restart Cursor

  4. Toggle the server ON in Settings → MCP (Cursor disables newly-added MCP servers by default — catches everyone the first time)

  5. In chat: "validate stripe checkout with fetchsandbox"
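
Side note on steps 1 and 2: the npx -y config fetches the package on demand, so the global install in step 1 is technically optional; conversely, if you did install globally, you can likely point command straight at the binary and skip npx. This assumes the package exposes a fetchsandbox-mcp bin matching its npm name, so treat it as a sketch:

    {
      "mcpServers": {
        "fetchsandbox": {
          "command": "fetchsandbox-mcp"
        }
      }
    }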

The agent will:

  • Import the Stripe sandbox

  • Ask you to confirm scope

  • Run the checkout workflow against a stateful sandbox (real state, real webhooks, no prod quota burned)

  • Write the markdown report to .fetchsandbox/validation-stripe-*.md in your repo
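
For anyone reading without the screenshot, the report's rough shape is below. The step names, diagram, and payload labels are illustrative placeholders reconstructed from the description above, not exact output:

    # Validation: stripe checkout

    ```mermaid
    sequenceDiagram
      Agent->>Sandbox: POST /v1/checkout/sessions
      Sandbox-->>Agent: 200 (session created)
    ```

    <details><summary>Step 1: create checkout session</summary>
    ...full request/response JSON...
    </details>

    ## Webhook payloads
    ...checkout.session.completed payload...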

What you get:

  • 50+ pre-validated specs: Stripe, GitHub, Twilio, OpenAI, Paddle, Polar, Clerk, Resend, GitLab, Notion, and more

  • Available on npm and in the Anthropic MCP Registry as io.github.fetchsandbox/mcp

  • Full IDE walkthrough at https://fetchsandbox.com/install (also covers Claude Code / Cline / Windsurf / Codex)

  • MIT-licensed; telemetry opt-out via FETCHSANDBOX_TELEMETRY=0 (config sketch just below)
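
On the telemetry point: Cursor's mcp.json accepts a per-server env map, so the opt-out can live in the config from the first run rather than in your shell profile. Same entry as step 2, plus the variable:

    {
      "mcpServers": {
        "fetchsandbox": {
          "command": "npx",
          "args": ["-y", "fetchsandbox-mcp"],
          "env": { "FETCHSANDBOX_TELEMETRY": "0" }
        }
      }
    }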

Specific feedback I’d love from Cursor users:

  1. Is "validate <spec> with fetchsandbox" the right trigger phrase shape? Should the verb be different?

  2. The markdown report lands in .fetchsandbox/. Is that the path and format you’d want to commit in a PR?

  3. Any Cursor-specific gotcha that broke for you during install? (Beyond the Settings → MCP toggle.)

GitHub: https://github.com/fetchsandbox/mcp
npm: https://www.npmjs.com/package/fetchsandbox-mcp