Absurd response to prompt

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

[MacBook Pro w/ M4 Max, macOS Sequoia, LM Studio, Cursor IDE, VS Code with Cline - all up-to-date]

Is someone able to explain the following behaviour? (See attached images.)

  1. LM Studio has the qwen3-30b-a3b-thinking-2507-claude-4.5-sonnet-high-reasoning-distill-qx86x-hi-mlx model loaded.
  2. Cursor IDE is set up to use LM Studio models.
  3. Set Cursor to planning mode and ask it to plan a C++/CMake “Hello World” project.
  4. The LLM returns a plan to create a Python script using a pre-trained CNN.

Running the same prompt in VS Code with Cline in planning mode, using LM Studio with the same model and settings, Cline returns an on-point response.

(I can supply additional information if asked.)

Steps to Reproduce

  1. Set up a reverse proxy so that LM Studio can be accessed, and select it as the model provider.
  2. Open a new project, select planning mode and post the following prompt: I want to write a hello world project using C++ and CMake. Please create a plan to do this.
  3. Observe the absurd response.
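To rule out Cursor's prompt wrapping as the cause, the same prompt can be sent straight to LM Studio's local server, which exposes an OpenAI-compatible API (by default at `http://localhost:1234/v1`; the port is an assumption based on LM Studio's default settings). A minimal sketch that builds the request without sending it:

```python
import json
from urllib import request

# Assumption: LM Studio's local server is on its default port.
BASE_URL = "http://localhost:1234/v1"

payload = {
    # Model identifier as it appears in LM Studio's model list.
    "model": "qwen3-30b-a3b-thinking-2507-claude-4.5-sonnet-high-reasoning-distill-qx86x-hi-mlx",
    "messages": [
        {
            "role": "user",
            "content": "I want to write a hello world project using C++ "
                       "and CMake. Please create a plan to do this.",
        },
    ],
    "temperature": 0.2,  # low temperature to reduce run-to-run variance
}

def build_request(base_url: str, body: dict) -> request.Request:
    """Build the POST request; send it only when LM Studio is running."""
    return request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(BASE_URL, payload)
print(req.full_url)  # the endpoint the request targets
# To actually send it (requires a loaded model): request.urlopen(req)
```

If the raw response here is on-point while Cursor's is not, the difference lies in the prompts and tool definitions each client wraps around the request, not in the model itself.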

Expected Behavior

A plan to create a C++/CMake Hello World project.

Screenshots / Screen Recordings

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.2.35
VSCode Version: 1.105.1
Commit: 86d7e0c1a66a0a5f7e32cdbaf9b4bfbaf20ddaf0
Date: 2025-12-18T04:28:48.652Z (1 hr ago)
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Darwin arm64 24.6.0

For AI issues: which model did you use?

LM Studio with qwen3-30b-a3b-thinking-2507-claude-4.5-sonnet-high-reasoning-distill-qx86x-hi-mlx

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

The first image shows the response using VS Code with Cline; the response is on-point.
In the second image, I point out to Cursor that its response was absurd and that I want it to follow my instructions. On the second attempt, I get an on-point response.
The third image shows LM Studio processing my prompts.

Hi @napoleon_blownapart,

Thanks for the detailed report. Because AI models are not deterministic, a response may occasionally be irrelevant to the prompt. This also depends largely on how well a model adheres to prompts and how well it is able to use tools.

The behavior you’re seeing (the model returning unrelated responses) is likely due to how your local model interprets Cursor’s internal prompts and tool-calling format, which is designed for specific supported providers. Models we have not tested and whose performance in Cursor we have not confirmed may return irrelevant answers.

You are using a custom local model (qwen3-30b-a3b-thinking-2507-claude-4.5-sonnet-high-reasoning-distill-qx86x-hi-mlx) via LM Studio with a reverse proxy setup.

Unfortunately, we don’t provide support for custom local LLM setups or third-party OpenAI-compatible endpoints, as noted in our API Keys documentation.

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.