Intentional Dark Pattern in General Model

As a simple test, I had the AI create a Space Invaders game.

The AI writes 20-30 lines of code, then stops, asks “Shall I continue?” and demands additional payment.

With the MAX model, which costs at least 5x more, the game is completed in one go.

Is this just a bug, or a dark pattern to sell the MAX model?

If it’s a dark pattern for extra charges, I really don’t want to use Cursor anymore.

I’m confident it isn’t an intentional dark pattern. It’s the behaviour of the external LLMs themselves (Claude/Gemini), which sometimes start second-guessing themselves mid-task. How often it happens depends quite a lot on your prompting style, and a little on which model you’re using.

Could you maybe share a bit more about the style of prompting you’re using, and which model? There might be some useful tips for avoiding it.

My general style and recommendation is the plan-review-do pattern:

  • Explain the task and ask the model to write a “detailed step-by-step plan”, usually to a file. Optionally, ask for checkboxes [ ] next to the items so you can get the model to check off its progress (see the sketch after this list)
  • Check it makes sense, adjust as needed
  • Ask the agent to follow the plan
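To make that concrete, here’s a rough sketch of the kind of plan file I mean — the file name PLAN.md and the steps themselves are just placeholders I made up for this example, not anything Cursor produces verbatim. I’d prompt with something like: “Build a simple Space Invaders game. Before writing any code, write a detailed step-by-step plan to PLAN.md, with a [ ] checkbox next to each step.”

```markdown
# PLAN.md — Space Invaders (illustrative example)

- [ ] Set up an HTML page with a <canvas> and a basic game loop
- [ ] Draw the player ship and handle left/right movement and firing
- [ ] Spawn the invader grid and animate its side-to-side descent
- [ ] Add player bullets, invader bombs, and collision detection
- [ ] Track score and lives; add a game-over / restart screen
```

Once that looks sensible, a second prompt along the lines of “Work through PLAN.md step by step, checking off each item as you complete it” is usually enough to keep the agent going without it stopping to ask whether it should continue.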

FWIW, I use this pattern all the time and almost never have the hesitant re-prompting issue you’ve mentioned (although I have seen it a couple of times in the past, both inside Cursor and using the LLMs directly).