AI Project "Cognitive Computing" bork

I think I broke the New AI Project feature. I’m not sure what’s triggering it, though. Every time I put in my prompt, it thinks for a moment and then displays only Stage 0, which says:

The term “cognitive computing” refers to systems that learn at scale, reason with purpose, and interact with humans naturally. It is a subfield of artificial intelligence that strives to create computers or software that can mimic human thought processes and solve complex problems without human intervention. Cognitive computing systems use machine learning algorithms to continually acquire knowledge from the data they process. They can make predictions, generate recommendations, and improve their own intelligence over time.

… and then stops. That’s it. I assume there must be a reference to ‘cognitive computing’ somewhere in the Cursor secret-sauce system prompt, and it’s deciding to lecture me about that instead of generating a project structure? I don’t know if there’s something specific about my prompt that triggers it (it’s quite long, a couple of thousand tokens), and I’m loath to go pouring in tons of test examples and burn through all my queries. Is this a known issue, or are there any hints about what I might be doing wrong?

[Edit: Aaaand my attempts to work around this seem to have burnt almost all of my ‘monthly quota’… I’m just gonna go cry myself to sleep in a corner… Somebody wake me up when it’s November :laughing:]

I was able to reproduce this. Seems to be an issue with the length of the prompt. Please try a shorter prompt. Will forward this to the team. Thank you for your bug report!

Thanks, are you able to estimate the current maximum supported prompt length in tokens? I’d rather not trial-and-error it myself.

Okay, I gave in and trial-and-errored it myself. FYI, things seem to break down somewhere between roughly 2500 and 2650 GPT-4 tokens. Shortening the prompt to ~2500 finally got me a response. Thanks!

Update:
Nope, spoke too soon: it gets slightly further and generates a file list, but then when it creates the first file the conversation fails again with `{"error":{"code":"internal","message":"internal error"}}`.

This is with around ~~2.5k~~ ~~2.3k~~ ~~2.2k~~ 1.9k tokens for the prompt.

Any advice or recommendations on how much more trimming is required would be very gratefully received.

In case anyone else is wondering about this adventure: I’ll continue to experiment but I’ve started to get passable results now that the prompt size is down to about 1.5k tokens.

If you’re not familiar with tokenization and want to figure out the size of a prospective prompt, the OpenAI Tokenizer will calculate it for you. It’s all done client-side in your browser, so you don’t have to worry too much about privacy either :slight_smile:
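
If you’d rather count tokens locally (or script the check), here’s a minimal sketch using OpenAI’s tiktoken Python library. The file name `prompt.txt` and the 1.5k threshold are just placeholders based on my own experiments above, not anything official:

```python
# Minimal sketch: count GPT-4 tokens in a prompt with tiktoken.
# Assumes `pip install tiktoken`; prompt.txt is a placeholder file name.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-4")

with open("prompt.txt", encoding="utf-8") as f:
    prompt = f.read()

token_count = len(encoding.encode(prompt))
print(f"Prompt is {token_count} tokens")

# ~1.5k tokens is just the point where things started working for me.
if token_count > 1500:
    print("Consider trimming the prompt further.")
```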
