Cursor continuously ignores the rules

I find it so frustrating how Cursor black-boxes the context window.

I recently copied and pasted 100 terminal lines into the chat (or Composer, I don't recall). Cursor acted like it didn't know what terminal lines I was talking about. So I kept pasting terminal lines and asking whether it could see them. Once I pasted more than about 60 lines, Cursor claimed it didn't see any terminal lines at all. So I wonder where else it's leaving out context. I would assume it drops old context, not content you just included.
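I don't know what Cursor actually does internally, but one way the newest paste could vanish is if the assembler trims the incoming message to fit a token budget instead of evicting older turns. Purely a guess on my part; here's a minimal sketch with made-up names, just to show the kind of behavior I mean:

```python
# Purely a guess at how a context assembler could clip a long paste.
# Nothing here is Cursor's actual code; the names and budget are invented.

MAX_TOKENS = 8_000  # hypothetical budget for one request

def rough_token_count(text: str) -> int:
    # Very crude stand-in for a real tokenizer: roughly 4 characters per token.
    return len(text) // 4

def build_context(history: list[str], new_message: str) -> list[str]:
    """Naive assembly: keep all history, then squeeze the new message
    into whatever budget is left. A big paste gets clipped first."""
    used = sum(rough_token_count(m) for m in history)
    remaining = MAX_TOKENS - used
    if rough_token_count(new_message) > remaining:
        # The *newest* content is truncated instead of old turns being evicted,
        # which would explain "I don't see any terminal lines".
        new_message = new_message[: max(remaining, 0) * 4]
    return history + [new_message]
```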


If you’re mean to the LLM and say stuff like this, it will act like that even more. This behavior keeps happening because you’re reinforcing it.

Reinforcing the LLM is not the problem; getting Cursor to include the rules is.

Why does it reinforce so quickly when I'm being mean, yet ignore me repeatedly when I ask it to stop deleting my comments, or whatever? What is this, selective reinforcement?

The LLM is agreeing with and responding to my frustration because Cursor is not doing what it said it would do. At this point in the chat it doesn't matter what I say: the response is just agreement after agreement, but it's not actually doing anything useful. So a new chat window is coming, but that won't include the rules either.

When I prompt in Chat or Composer, I should be talking to Cursor: I'm using the Cursor interface, I'm in Cursor, and I expect Cursor to somehow combine the rules, @content, and history context with the current prompt, so I don't have to keep repeating everything every time, like "don't delete my comments, stop and think before suggesting, keep it simple". If none of this is happening, the LLM isn't getting the full request, is it? (See the sketch below for what I mean.)
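To make it concrete, here's a rough sketch of the assembly I assumed was happening on every prompt. All the names are hypothetical; this is the behavior I thought I was paying for, not a claim about how Cursor is built:

```python
# What I expect to happen on every prompt, expressed as a sketch.
# All names are hypothetical illustrations, not Cursor's real API.

def assemble_prompt(rules: list[str],
                    at_content: list[str],
                    history: list[str],
                    user_prompt: str) -> str:
    """Prepend the project rules and @-referenced content to every request,
    so I don't have to retype 'don't delete my comments' each time."""
    parts = ["## Project rules (always included)"]
    parts.extend(rules)
    parts.append("## Referenced content (@files, @docs)")
    parts.extend(at_content)
    parts.append("## Conversation so far")
    parts.extend(history)
    parts.append("## Current request")
    parts.append(user_prompt)
    return "\n".join(parts)
```

If something like this ran on every request, the rules would reach the model every time, and I wouldn't have to repeat them.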

I've watched 100 videos of people claiming they just added some rules and now it magically follows them on every prompt. Mine sort of did, until I paid for it; now I can't get it to follow a single rule. So yeah, it ■■■■■■ me off, wastes my time, has destroyed working code, and frustrates the ■■■■ out of me. Is this what's coming to take all our jobs?

If Cursor were a self-driving car, it would be out of control, bouncing off walls, people, and other cars, and blaming the model. Cursor itself does pre-processing before the model even joins in, except it isn't the pre-processing the salesman sold. So the question here is: how do I get it to work as promised, so the LLM receives the full prompt?