Cursor not able to access full codebase

The Cursor homepage states

Use @Codebase or ⌘ Enter to ask questions about your codebase. Cursor will search your codebase to find relevant code to your query.

But after much back-and-forth chatting, it still can’t do it.

Note: I’m still only seeing the code snippets you’ve shared. The @Codebase command doesn’t seem to be providing access to your full codebase as advertised. You might want to:
  • Check if Cursor is up to date
  • Ensure your project is properly opened in Cursor
  • Contact Cursor support about the @Codebase functionality

I’m running the latest version of Cursor. I added my project folder and can see hundreds of files in the File Explorer, but Cursor refuses to see them.

Am I missing something?

Cursor is broken. The same problems come up over and over again, and it gets worse almost daily.

Hey, just to summarise the codebase-wide context options:

  • AI Chat - You can do CTRL/CMD + Enter for Codebase-wide context
  • Normal Composer - You can do @codebase in your prompt
  • Agent Composer - The agent is able to gather context on its own, but you can explicitly tell it to look at your codebase or at individual files in your prompt, to ensure it does

Is one of these features not working well for you?

Mine just says this:

Hey,

I’ve reported your issue to the team, who will investigate further - we will get back to you shortly!
In the meantime, can you make sure you are on the latest version by manually redownloading Cursor from here: Downloads | Cursor - The AI Code Editor

I have updated to 0.44.5 and now I get:


It’s inside a dev container.

My setup

Version: 0.44.5
VSCode Version: 1.93.1
Commit: 1d610252e6812bf33245763f0742a534fd0f1d90
Date: 2024-12-20T00:02:28.554Z
Electron: 30.5.1
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 24.1.0

It always seems to search ~50 files. I have dozens and dozens of Models but it cannot see them.

It thinks the @Codebase command isn’t valid.


The AI is not itself aware of the @codebase command until you actually use it, which is why it doesn’t know about it in the second screenshot.

Regarding the first screenshot, I believe the search is limited to 50 files, as LLMs can only handle so many files at once! Unfortunately, LLMs aren’t really able to take an entire codebase, look at every line in every file, and use that to rewrite code or answer questions!

However, we find this is usually not a limiting factor, as most changes people make span only a few files at a time. @codebase will pick the most relevant files to look at, but explicitly picking your context with @ is the most reliable option!


You guys don’t advertise it like that.

From the Cursor homepage:

Codebase Answers

Use @Codebase or ⌘ Enter to ask questions about your codebase. Cursor will search your codebase to find relevant code to your query.

Don’t get me wrong, it’s OK if it cannot do it, but I wasted an hour chatting, trying to work out why it wasn’t working. Also, being able to see the entire project would surely take things to another level. How else can it get the full picture of what I’m working on?

For me, @codebase also didn’t work in most cases. IMO it is crucial to clearly explain to your customers what it actually does, because everyone assumes it covers the whole project’s codebase, which doesn’t seem to be true.

I may not have been clear about the 50-file limit, as it’s not an arbitrary hard cap!
We have a multi-step process here, where we semantically rank all the files in your codebase based on their perceived relevance to the prompt you have written.

Therefore, the 50 files the search returns should be the 50 most relevant to your query.

Then, we look at each of those files for the sections of code that are (again) most relevant to your query. This could be the whole of 1-2 files, or chunks of a lot more.

We then pass this context and your prompt to the final LLM to return the output.

We’ve found this to be a much more effective way of providing your codebase as context to the LLM, as throwing an entire codebase at an LLM usually gives a pretty sub-optimal response.
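To make the multi-step process above concrete, here’s a rough sketch of that kind of pipeline. This is toy code, not Cursor’s actual implementation: the `score` function here is a keyword-overlap stand-in, whereas a real implementation would rank files and chunks by embedding similarity.

```python
# Toy sketch of a rank-files -> rank-chunks -> build-context retrieval pipeline.
# A real system would use embedding similarity instead of keyword overlap.

def score(query: str, text: str) -> float:
    """Toy relevance score: fraction of query words that appear in the text."""
    query_words = set(query.lower().split())
    return len(query_words & set(text.lower().split())) / max(len(query_words), 1)

def retrieve_context(query: str, files: dict[str, str],
                     top_files: int = 50, chunk_lines: int = 20,
                     top_chunks: int = 10) -> list[str]:
    # Step 1: rank every file in the codebase, keep the ~50 most relevant.
    ranked = sorted(files, key=lambda path: score(query, files[path]),
                    reverse=True)[:top_files]

    # Step 2: within those files, rank individual chunks of code.
    scored_chunks = []
    for path in ranked:
        lines = files[path].splitlines()
        for i in range(0, len(lines), chunk_lines):
            chunk = "\n".join(lines[i:i + chunk_lines])
            scored_chunks.append((score(query, chunk), path, chunk))
    scored_chunks.sort(key=lambda item: item[0], reverse=True)

    # Step 3: the top chunks (whole files or fragments of many) become
    # the context that is passed, along with the prompt, to the final LLM.
    return [f"# {path}\n{chunk}" for _, path, chunk in scored_chunks[:top_chunks]]
```

The key point the sketch illustrates is that the “50 files” are the output of a relevance ranking over the whole codebase, not an arbitrary sample of it.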


Is there any way to ‘hint’ our files to make that process more effective?

For example, an AI-assisted pass at putting comments in at the top of each file documenting that file’s responsibilities? And if so, is there a good way to format it for maximum effectiveness?
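Something like this header is what I have in mind (the format is purely a guess at what the indexer might pick up well):

```
// network/sockets.code
//
// Responsibilities:
//   - opening and closing sockets
//   - sending and receiving packets
//   - connection retry / timeout handling
//
// All network comms live here; call these helpers instead of writing new ones.
```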

I’ve been trying out Agent, where it seems you can’t @ a directory (which is kinda killing my buzz, since that’s how I’ve been keeping the context focused). I’m finding I keep getting things out of the blue like “oh you haven’t got any network comms functions anywhere, let me write them badly from scratch in core/foo.code” when there’s an entire file full of them in network/sockets.code and it hasn’t noticed.

If we could add comments like “// LOOK IN HERE FOR THE NETWORK STUFF IT’S RIGHT HERE LOOK THERE IT IS LOOK”, I’m wondering if that might reduce the frequency of the issue? 🙂

Thanks!

I appreciate the feedback, having a way to signal to the agent composer where to look could be a good idea, especially in bigger repositories and codebases!

I’ve passed this on to the team to consider! 😀
