Why does Cursor keep losing context and giving random answers?

I’m asking questions about my code and it says things like “it might be named x or y, or could be located somewhere” etc. It also says, after an unsuccessful update, that it should be looking for where the logs actually are instead of “guessing”.

Isn’t Cursor all about having my codebase in the context, instead of me adding dozens of files to the chat context all the time?

How can I make sure my code is in the context all the time?

Here is an example answer:

what part of my code controls logs

Thought for 3 seconds

Logging in Your Code

I don’t have access to your codebase, so I can’t identify specifically which part of your code controls logs.

Generally, logging in applications can be found in:

  1. A dedicated logger configuration file (like logger.js, logging.py, etc.)

  2. Middleware components (in web applications)

  3. Utility modules that wrap logging functionality

  4. Configuration files that set up logging levels and outputs

If you’d like me to help you find the specific parts of your code that handle logging, you might want to try agent mode, which can look through your codebase to find relevant information. You can select this in the input box.

Alternatively, you could share specific files you think might contain logging code, and I can help analyze them.


The same thing always happens to me, but no answers.

(Copy of reply in another thread)
Completely understand the frustration.

I can’t completely explain it. It seems like some things are forgotten in several situations. Starting a new chat can do it. Switching Models seems to do it, sometimes.

I’m trying to put important things like this in the Rules that get attached to every new Chat. Now I have Rules about making new Rules correctly. But it’s helping.
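For anyone curious what that looks like, here is a minimal sketch of such a rule file. I’m assuming the newer `.cursor/rules/*.mdc` format with `description`/`globs`/`alwaysApply` frontmatter; older setups use a single `.cursorrules` file at the project root, and the exact fields may differ by Cursor version, so treat the names here as illustrative:

```markdown
---
# Hypothetical rule file: .cursor/rules/project-conventions.mdc
# alwaysApply attaches it to every new chat; field names may vary by Cursor version.
description: Project conventions the agent must follow
globs:
alwaysApply: true
---

- Before proposing changes, read the files involved; do not guess file names or locations.
- Logging goes through the existing logger module; do not add ad-hoc console output.
- When a fix fails, investigate the actual error and logs instead of mocking or working around it.
- New rules go into .cursor/rules/ as separate .mdc files with a clear description.
```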

We have to learn more about AI contexts and understand more about what Cursor sends to the models.

Good luck! I’m right there with you.

JBB

I am experiencing the same issue repeatedly and it is quite frustrating; at this point, AI (at least in Cursor) feels more like a time waster than a tool for enhancing productivity.

Agent mode appears to lack any real understanding of context. It’s useful for kickstarting a project and setting up initial files when there is no existing context, but after that, it feels as if you’re beginning anew with each prompt. It doesn’t recognize the files that are already there and fails to review them, even when specifically asked to look at several files before responding. Instead, it tends to glance at a few lines from one or two files and make assumptions about the rest.

Moreover, it generally does not adhere to instructions, especially with Claude 3.7, which often opts for a workaround instead of fixing a bug. For instance, while working on a client-server implementation and trying to resolve an API-related bug, after one unsuccessful attempt, it decided to mock the server response directly in the client!

As the codebase expands beyond a few files, it becomes a complete time sink: it doesn’t attempt to grasp the existing implementation, has no recollection of previous actions or instructions, and literally invents new content with each prompt. Recently, while working on another project, I had a few successful prompts regarding one file, then shifted to a different task in another file for just one prompt. When I returned to the original file to request a change, it responded as if it had never seen that file before, with no historical context.

I’ve found that Edit mode can be more effective when used with the appropriate files; I assume this is because it has a more concentrated context. It still has the same issue with historical context, though, and is not very practical for larger projects.

While the code generation capabilities are undeniably powerful, they are largely undermined by the current severe lack of context. I would expect, at the very least, that it retains a record of recent prompts (ideally a historical context for the last ten prompts or so) and that it checks and reviews the existing implementation along with all implications of the proposed change before proceeding to create new content with each prompt.

Honestly, this feels more like a toy, and it would even be funny if I weren’t paying for it.

I notice a clear degradation of responses when the chat thread runs longer. The models start to hallucinate once the thread no longer fits into the context.

Sure, some models like 3.7 and the thinking variants are more prone to such issues, which is why Cursor added MAX mode with a larger context.

For those who imagine Cursor would have the full codebase in the context: this is not realistic, as most models do not have that large a context window, and those that do also run into issues further on in the chat as the context fills up.

As none of you explained what you actually tried or provided request IDs for the Cursor team to investigate, what you wrote is a rant and not a bug report.

My experience is not perfect either, but as I said, there are things that help a lot to prevent this, and I do them regularly:

  • Start a new chat if a chat gets longer; that prevents the back and forth between you and the model, including all the steps the model has taken so far in the thread, from confusing the model.
  • Structure the project well: SOLID and DRY principles prevent most issues in such cases and also let the model get back on track more easily.
  • Use AI to plan the task details in one thread and have it write the plan to a .md file. Then, in a new thread, ask it to implement that plan (see the sketch after this list).
  • If you provide it with commands for testing, it can resolve issues step by step or prevent them from becoming big.
  • Get a good understanding of how Cursor works as a combination of prompt, attached files, tool usage, attached MCP servers, rule files attached or used, searches or edits performed on code, etc. All of this contributes to the context size, which has limits. A user complained in the forum about why Cursor shows the message that a new chat should be started due to the context limit, and this is exactly why it has to do that.
  • There is also a Large Context setting that may help if your files are larger or the size of the task, rules, code, etc. approaches the context limit.
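To illustrate the plan-file workflow from the list above, here is a hypothetical sketch of what the AI might write in the planning chat. The file name, steps, commands, and paths are entirely made up for this example; in a fresh chat you would then say something like “Implement step 1 of @PLAN.md”:

```markdown
# PLAN.md (hypothetical example written by the AI in a planning chat)

## Goal
Fix the API response-handling bug in the client without mocking the server.

## Steps
1. Reproduce the bug with the existing test command (e.g. `npm test -- api-client`).
2. Read client/api.ts and server/routes/orders.ts to confirm the expected payload shape.
3. Fix the client-side parsing; do not change the server contract.
4. Re-run the tests and note any follow-up issues here.

## Constraints
- Keep changes limited to the files listed above.
- No mocked responses in production code.
```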

Sorry if I’m sounding harsh. That’s not my intent, but it’s well known that it helps to understand what is happening in the chat.

Also, none of you shared what settings you use, what you attached, or how many steps you took, etc. I’m not sure how anyone in the forum would know what’s going on. And I apologize for my rant :slight_smile:

You are completely right that Edit works better in many cases, as it doesn’t have any MCP servers, tools, etc. in its context and doesn’t need to analyze which tools to use to perform the task, which Agent does need to do :slight_smile:

I can assure you that I have tried them all: .md files, plans, starting new chats, large context, the MAX versions, rules, and everything else that you mentioned. There is no need for a request ID either; you can take any non-trivial codebase with more than 10 files and see how, at every prompt, everything is assumed and made up with practically zero reference to the existing context or even the past few prompts.

I understand the limitations on context size perfectly, but the main issue is that the context appears to not even be used, probably because what Cursor feeds as context to the AI models is in fact a highly superficial, summarized version of it. It also appears that the mode of operation is to assume first and fix later, rather than systematically understanding the existing context and then generating the changes. I end up reminding it at every prompt to review the existing files first, and that seems to improve things somewhat, but it’s still not really effective.

This is somewhat model dependent: it is much easier to keep Claude 3.5 focused, while Claude 3.7 is better at coding but hallucinates at every other prompt.

I’m convinced the issue is not with the models themselves but rather with what Cursor is feeding them as context. The same task given directly to ChatGPT, with some context files uploaded, is handled in a far more context-aware way than in Cursor.

Yes, I agree; I also have much better success with 3.5 than with 3.7.

I’m not claiming Cursor is perfect :slight_smile:

But comparing with ChatGPT, which has no codebase access or local agents, is not a 1:1 comparison of context size, even for the same model.

Does it also happen when you attach the same file with @ in a fresh chat? That would make it more closely comparable to ChatGPT, though the context and APIs are a bit different.

Thanks for the detailed update.

When attaching files in Agent mode (including in a new chat), it seems as if it glances at the attached files but then goes on to do its own thing anyway. Attaching files in general gives better results, but on non-trivial projects (even if the task itself is trivial) it doesn’t seem to make that much of a difference.

In most cases it feels like trying to delegate tasks to an arrogant developer with severe dementia issues :slight_smile:


Hmm, I see; sorry to hear you have that issue.

I have similar cases sometimes, but mostly it’s after a lot of back and forth with complex Agent usage, which adds a lot of context. So in my case a new chat helps, and that is also why attaching the file helps in some of my other cases.
