Missing required parameter: 'tools[0].name'

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

When I configure OpenAI's API key in Cursor and use Cursor's Agent mode, requests fail with the error message 'Missing required parameter: tools[0].name'. The error still appears no matter whether I switch to Plan or Ask mode.

Steps to Reproduce

Configure OpenAI's API key in Cursor, then use Cursor's Agent mode.

Operating System

Linux

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.2.43
VSCode Version: 1.105.1
Commit: 32cfbe848b35d9eb320980195985450f244b3030
Date: 2025-12-19T06:06:44.644Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Linux x64 4.19.112-2.el8.x86_64

For AI issues: which model did you use?

GPT-5.2

Does this stop you from using Cursor?

Yes - Cursor is unusable


Hey, thanks for the report.

This error is often related to MCP servers. We also saw a similar issue recently with GPT-5.2 in Agent mode.

Please try the following:

  1. Disable MCP servers (if you have any configured). Check Cursor Settings → Tools & MCP, or your .cursor/mcp.json.

  2. Run Network Diagnostics: Cursor Settings → Network → Run Diagnostics, then share the results.

  3. Enable “Disable HTTP/2”: App Settings (Ctrl+,) → search for “HTTP/2” → turn on the option.

  4. Make sure you’re using the official OpenAI API endpoint, not a custom proxy or overridden base URL.

Let me know whether disabling MCP servers helps. If the issue persists, please share the Network Diagnostics output and confirm whether you’re using Override OpenAI Base URL.

Thank you for your reply. I used debugging tools to capture the request data and analyzed the request content with ChatGPT. The results are as follows.

———

Payload fields you sent (top-level)

user, model, input, tools, tool_choice, store, stream, prompt_cache_retention, metadata, reasoning, text, stream_options

———

Compatibility matrix (what’s standard where)

Standard for /v1/chat/completions

  • :white_check_mark: model
  • :white_check_mark: tools (concept is supported, but your tools schema is wrong; see “Tools schema” below)
  • :white_check_mark: tool_choice
  • :white_check_mark: stream
  • :cross_mark: input (Chat Completions uses messages, not input)
  • :cross_mark: user
  • :cross_mark: store
  • :cross_mark: prompt_cache_retention
  • :cross_mark: metadata
  • :cross_mark: reasoning

Standard for /v1/responses

  • :white_check_mark: model
  • :white_check_mark: input
  • :white_check_mark: tools (concept is supported, but your tools schema is wrong; see “Tools schema” below)
  • :white_check_mark: tool_choice
  • :white_check_mark: stream
  • :warning: metadata (commonly supported; verify with the exact API version/provider/gateway you use)
  • :warning: store (may exist in some setups; verify)
  • :warning: reasoning (model/SDK dependent; verify)
  • :warning: text (model/SDK dependent; verify)
  • :cross_mark: prompt_cache_retention (not standard in public OpenAI schema)
  • :cross_mark: stream_options (not standard in public OpenAI schema)
  • :cross_mark: user (generally not part of the standard Responses request body; use metadata or your own app layer instead)
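To make the matrix above concrete, here is a minimal sketch of a /v1/responses request body restricted to the fields marked as standard for that endpoint. The model id and input text are placeholders, not what Cursor actually sends.

```python
import json

# Minimal Responses API request body, using only fields the matrix
# above marks as standard for /v1/responses. Values are placeholders.
payload = {
    "model": "gpt-4o",                  # placeholder model id
    "input": "Say hello in one word.",  # Responses uses `input`, not `messages`
    "stream": False,
}

body = json.dumps(payload)
print(body)
```

A gateway that validates strictly should accept a body like this, while extra non-standard fields (e.g. prompt_cache_retention, stream_options) are what a strict validator may reject.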

———

Tools schema (your biggest breaking issue for BOTH endpoints)

Your tools entries are shaped like:

  • tools[i].type = “function”
  • tools[i].name
  • tools[i].description
  • tools[i].parameters

OpenAI expects function tools shaped like:

  • tools[i].type = “function”
  • tools[i].function.name (required)
  • tools[i].function.description
  • tools[i].function.parameters

Also, you included a tool with:

  • tools[i].type = “custom” (e.g. apply_patch) → not an OpenAI-native tool type.

This mismatch is why validation fails with errors like Missing required parameter: ‘tools[0].name’ (depending on the gateway/version, the validator may reference tools[0].name even though the actual required field is tools[0].function.name).
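To make the mismatch concrete, here is a small Python sketch that re-nests a flat tool entry into the nested shape the Chat Completions endpoint documents. The read_file tool and its parameters are hypothetical stand-ins for illustration, not Cursor's actual tool definitions; whether a given Responses gateway also requires the nested shape depends on that gateway's validator.

```python
import json

# A tool entry shaped the way the captured payload reportedly is:
# name/description/parameters flat at the top level.
flat_tool = {
    "type": "function",
    "name": "read_file",  # hypothetical tool name, for illustration only
    "description": "Read a file from the workspace.",
    "parameters": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

def to_chat_completions_tool(tool: dict) -> dict:
    """Re-nest a flat function tool into the Chat Completions shape,
    where name/description/parameters live under a `function` key."""
    if tool.get("type") != "function":
        # e.g. type "custom" (apply_patch) has no standard equivalent
        raise ValueError(f"unsupported tool type: {tool.get('type')!r}")
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool.get("parameters", {}),
        },
    }

nested = to_chat_completions_tool(flat_tool)
print(json.dumps(nested, indent=2))
```

Note that the converter deliberately raises on type “custom” entries: those have no standard equivalent, so a client-side shim would have to drop or translate them separately.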

———

I’ve already removed MCP and am using HTTP/1.1, and Network Diagnostics shows no problems. It seems Cursor’s request parameters aren’t quite right. I can configure the same OpenAI API key in tools like Chatbox without any issues. I’m not sure whether ChatGPT’s analysis is correct, but I don’t want to spend more time on this setup. Thank you.


Thanks for the detailed technical analysis; your conclusions are absolutely correct.

This is a known issue when using GPT-5.2 in Agent mode with a custom OpenAI API key. Cursor sends requests in the OpenAI Responses API format (which uses fields like reasoning, input, metadata), but the tools schema doesn’t match the standard format. It uses tools[i].name instead of tools[i].function.name, and it also sends custom tool types (for example type: "custom" for apply_patch) that aren’t supported by the standard OpenAI API.

The team is already working on this, but for now Agent mode with GPT-5.2 via BYOK OpenAI API isn’t properly supported.

Sorry for the inconvenience, we’re working on it.

