It’s often very helpful to see exactly what goes into the context window of every single inference call. Cursor initially gave me a large boost in productivity, but as I tried using it for more and more sophisticated things, it became harder and harder to steer the inferences. It would be extremely helpful if there were some way for users to inspect what goes into each inference call, so they can prompt the model more effectively. This is especially true for the “agentic” experiences, where subtle underspecification can have very annoying consequences.
This is becoming especially relevant for understanding the interaction between .cursorrules and my generations. For example, I just added a very specific instruction to .cursorrules that got completely ignored. But because the inference call isn’t visible, it’s hard to debug whether this is due to my prompt, the model, or the context that ultimately got fed into the inference call.
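To give a sense of what I mean (this is a made-up rule for illustration, not my actual one), even something this explicit can get dropped:

```
# .cursorrules (hypothetical example)
Always write new functions in TypeScript with explicit return types.
Never delete existing comments when refactoring a file.
```

With no view into the final prompt, there’s no way to tell whether rules like these were truncated out of the context or simply not followed by the model.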
Yes, this would be very useful. It’s always a complete gamble what the AI will actually see and how many of the existing composer messages will be included.
Hey, when you update your rules file, it doesn’t always load immediately. I think it loads at the start of a session and only sometimes during the process, so you need to either add the updated file to the context, restart Cursor, or start a new session.
Agree