Unable to reach OpenAI (o1-mini model)

I'm trying to debug some errors in my log, but I can't submit using o1-mini (with @codebase in chat).

Here's the error I keep encountering. It has happened several times today and last night.

Any solutions?

We're having trouble connecting to OpenAI. This might be temporary - please try again in a moment. Error: Request failed with status code 400

API Error: {"error":{"message":"Invalid prompt: your prompt was flagged as potentially violating our usage policy. Please try again with a different prompt: https://platform.openai.com/docs/guides/reasoning/advice-on-prompting","type":"invalid_request_error","param":null,"code":"invalid_prompt"}}

9f742ded-beff-4e39-a814-dd7a80dc2949
Version: 0.42.3
VSCode Version: 1.93.1
Commit: 949de58bd3d85d530972cac2dffc4feb9eee1e40
Date: 2024-10-16T17:56:07.754Z
Electron: 30.4.0
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.4.0

Paid user, not using my own keys.

Bumping this.

Still running into this issue, and it's impacting my work. I use o1-mini to plan out my features, and it keeps erroring out.

Please fix soon.

Hi @hmoran, I'm sorry for the trouble.

Something in your request is triggering the invalid-prompt response, but it's not clear what; it isn't always obvious what leads to that kind of flagging.

It could be something in your prompt or in the context. I see your context has 107 lines; any chance you could scan them for anything that might trigger flagging? Sometimes non-text content gets interpreted in unexpected ways.

What happens if you do not include the whole context (i.e. try smaller parts)? And what happens if you send the same prompt directly to OpenAI (given you have access to the model)?

Unfortunately, it’s not always easy to figure out what might trigger that response.
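To rule out anything the editor adds to the request, you could reproduce the call directly against the API. Here's a minimal sketch, assuming you have an API key in `OPENAI_API_KEY` and the official `openai` Python SDK installed; the prompt text and function names are just placeholders:

```python
import os


def build_probe(prompt: str) -> dict:
    """Build a minimal chat-completions payload for o1-mini.

    Kept as a plain dict so you can inspect exactly what gets sent.
    """
    return {
        "model": "o1-mini",
        "messages": [{"role": "user", "content": prompt}],
    }


def send_probe(prompt: str) -> str:
    """Send the probe directly to OpenAI (requires `pip install openai`
    and OPENAI_API_KEY set in the environment)."""
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(**build_probe(prompt))
    return response.choices[0].message.content
```

If the same prompt succeeds here but fails through the editor, the flag is likely coming from something in the injected context rather than your own text.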

Thanks,
Petros

Thanks for the reply.

I've noticed that it mainly fails when I use the @codebase command. It's something I need, since I use it to provide context for generating an SOP/PRD before starting work on a new feature.

Could that be the main cause?

Hi @hmoran

Your codebase might be too large. Have you excluded unnecessary folders like node_modules or similar?

Here’s some info on that:
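As a quick sketch, an ignore file with `.gitignore`-style patterns might look like this (the folder names are just common examples; adjust to your repo):

```
# Exclude bulky or generated folders from codebase indexing
node_modules/
dist/
build/
coverage/
*.min.js
*.log
```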

Yes, I have an ignore rule for node_modules.

The thing is, it has worked before, so it's unclear when it decides to work.

Also, I didn't seem to hit this issue consistently when using Claude Sonnet, only with the o1 models.

Any way to debug this?

I’ve already sent this bug report to the Cursor team.


Thank you.

Could it be that my fast responses for the month are already used up?

I don’t think it’s related because I saw a similar error in another thread, but with the Claude model.