Cursor failed to call the tool with the correct arguments

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Well, basically the model returned a correct call with correct arguments, but Cursor dismissed the whole payload and just said it failed to understand the JSON passed to it. I'm aware the model isn't officially supported by Cursor, but I can't see any error in the JSON the model produced, so it looks like a problem on Cursor's side in this case. I'm also aware this bug has been reported quite a few times, but I think it's worth filing a new one: I can provide the whole payload and trajectory, so it's easier for you to debug. You can just copy-paste the exact payload from the trajectory into code_edit and see what exactly is failing.

Steps to Reproduce

The bug is quite random in general, but with the tool call in the attached log it's easy to reproduce.

Expected Behavior

At the very least, some better feedback. Instead, the tool argument parser drops the input entirely and doesn't even tell the model how it could change the code_edit payload.

Screenshots / Screen Recordings

minimax_tool_fail.zip (31.2 KB)

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.1.39 (user setup)
VSCode Version: 1.105.1
Commit: 60d42bed27e5775c43ec0428d8c653c49e58e260
Date: 2025-11-27T02:30:49.286Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Windows_NT x64 10.0.26200

For AI issues: which model did you use?

MiniMax-m2

Does this stop you from using Cursor?

Yes - Cursor is unusable

Hey, thanks for the report. I see from the payload that the first edit_file call went with empty arguments: function.arguments = "{}" - hence the “Invalid JSON args passed in” error. The model did send a correct call with target_file/instructions/code_edit afterward.
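For context, here's a hedged sketch of what a client-side check would see for the two calls described above (the field names target_file/instructions/code_edit are taken from this thread; this is not Cursor's actual code):

```python
import json

# Required fields for an edit_file call, per the discussion above (assumed schema).
REQUIRED = ("target_file", "instructions", "code_edit")

def check_edit_args(raw: str) -> list[str]:
    """Return the required edit_file fields missing from a raw arguments string."""
    try:
        args = json.loads(raw)
    except json.JSONDecodeError:
        return list(REQUIRED)  # unparseable: every field is effectively missing
    return [k for k in REQUIRED if k not in args]

# The first event: function.arguments = "{}" -> all three fields missing,
# which would reasonably trigger an "Invalid JSON args passed in" error.
print(check_edit_args("{}"))

# The later, valid call with all fields present -> nothing missing.
print(check_edit_args('{"target_file": "a.py", "instructions": "fix", "code_edit": "..."}'))
```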

To understand why the second (valid) call wasn’t applied, please share:

  • Request ID(s) for both events (how to obtain: Getting a Request ID | Cursor Docs)
  • Logs from Help > Toggle Developer Tools (errors/stack at the time of the second call)
  • Confirmation: was there a tool response in the log for the second call (and what was it)
  • Check on one of the supported models (OpenAI/Anthropic/Google)

If it reproduces on a supported model, I’ll escalate it as a bug.

I think you misunderstood the bug. I replicated it and captured the request ID:

`84b4d1a8-20f0-4563-a0df-e65f95e68d9a`

What you are seeing is the tool call my model is sending, and it is correct. The function.arguments = "{}" is the result of a previous tool call, and that's also correct. But the previous tool call was the same as the current one; it was not an empty call! The correct tool call you see in the log isn't being parsed either: on the next turn, in the next logged request, the new tool call again shows exactly function.arguments = "{}" for the call you see. I attached a partial log for that ID covering the last 5 consecutive requests, so it's easier to track. In the first request the model tries making 2 edits: one goes through, the other receives the function.arguments = "{}" treatment (despite passing correct JSON; at the very least it has some JSON, and it's definitely not passing empty arguments). The next call also looks correct but doesn't work either. And so on for 5 calls, after which I cancelled the stream.

I've seen this happen on supported models too (there are occasional reports of it on the forum), so it really looks like code_edit internally can't parse the JSON correctly → it drops it and reports that empty JSON was passed, instead of at least saying where the JSON parsing failed. Of course I might be wrong, but it would be really helpful to see where the problem is happening, because I don't see what's going wrong right now :frowning:
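To illustrate the failure mode I suspect (purely hypothetical, I obviously don't know Cursor's internals): a handler that swallows the parse error and falls back to an empty object would log exactly `function.arguments = "{}"` no matter what the model actually sent.

```python
import json

def lenient_parse(raw: str) -> dict:
    """Hypothetical handler: on any parse failure, silently fall back to {}
    instead of surfacing WHERE the parse failed."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}  # the error detail is dropped; downstream only ever sees {}

# Any payload that trips strict JSON parsing, e.g. a stray trailing comma,
# would come out looking like an "empty" call in the log:
broken = '{"target_file": "a.py",}'
print(json.dumps(lenient_parse(broken)))  # logged as {}
```

If something like this is happening, the fix would be to propagate the decode error (with position info) back to the model instead of collapsing it to an empty object.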

tool_calls.zip (183.5 KB)

@deanrie does the last message contain enough info for the bug report, or do I need to capture more?

Hey, yeah, the info is enough for escalation.

Passed it to the team for analysis of the parsing on the handler side. If the team requests more data, I’ll write in the thread.

1 Like

@deanrie I don't know what you did, but now it works even worse: none of my models can consistently read files, and tool calls are broken across 4 different providers for every model. Just as an example, this is the simplest possible request, “read 2 files”, and even it fails! Honestly, this feels ridiculous and I'm genuinely contemplating moving to some other IDE at this point :cry:

Request ID: b3c84446-3e1f-4e31-93c7-dc1aea813d88

logs.zip (39.6 KB)

Another example: bfbe4ea2-9ba4-4fde-86c7-d14030f304c1

here I referenced a folder in chat and asked the agent to access it. The agent tries to make a reasonable tool call:

"tool_calls": [
  {
    "id": "functions.Glob:1",
    "index": 0,
    "type": "function",
    "function": {
      "name": "Glob",
      "arguments": "{\"glob_pattern\": \"**/test_*.py\", \"target_directory\": \"/mnt/asr_hot/agafonov/repos/nidai\"}"
    }
  }
]
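For what it's worth, those arguments parse cleanly with a strict JSON parser, so the payload itself isn't malformed (quick check on the exact string from the log above):

```python
import json

# The "arguments" string from the Glob tool call, copied verbatim from the log.
raw = '{"glob_pattern": "**/test_*.py", "target_directory": "/mnt/asr_hot/agafonov/repos/nidai"}'

args = json.loads(raw)  # no exception: the arguments are valid JSON
print(args["glob_pattern"], args["target_directory"])
```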

Inside Cursor it turns into a spawn command on some folder in my home directory???

And I've seen this happening with built-in models too (like codex). Like, genuinely, do I just stop updating Cursor, or what? It's incredibly frustrating when things just straight up stop working all the time :frowning: