Guidelines for getting entire files into context?

I’m working with some larger files and find that both o1-mini and claude-3.5-sonnet tell me they can only see part of them: what they have is a “snippet”, and I need to give them the entire file, which I thought I had already done.

One example is a 2,126-line, 83 KB CoffeeScript/JavaScript file. When I ask questions about specific methods or parts of the code in a Composer interaction, I often get responses like:

While you’ve provided a partial snippet of overlay.coffee, here’s a high-level understanding based on the provided code: …

Are there guidelines or best practices for how to format, structure or limit the size of files to maximize the chance the models can parse and understand them?

You could try using a larger-context model like Google’s Gemini.

Can you check how many tokens the file you mentioned is?

https://platform.openai.com/tokenizer
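If you just want a rough number without pasting the file into the web tokenizer, the common rule of thumb of roughly 4 characters per token for English text and code gets you close. A minimal sketch (the helper name and the 4.0 ratio are assumptions, not an exact tokenizer):

```python
def approx_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-chars-per-token rule of thumb.

    Real counts vary by content and model; use an actual tokenizer
    (like the one linked above) when precision matters.
    """
    return round(len(text) / chars_per_token)

# The file in this thread has 82,749 characters:
print(approx_tokens("x" * 82749))  # ~20,687, close to the tokenizer's 20,431
```

For the file in question the estimate lands within about 1% of the real count, which is typical for source code.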

You could try splitting the file too, if you don’t want to use Gemini.
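Splitting can be as simple as cutting the file into fixed-size line chunks and pasting them one at a time. A sketch using the standard `split` utility (the filenames are placeholders, and the `seq` line just fabricates a 2,126-line stand-in for the real file):

```shell
# Create a 2,126-line stand-in for the real overlay.coffee
seq 1 2126 > overlay.coffee

# Cut it into 500-line chunks: overlay_part_aa, overlay_part_ab, ...
split -l 500 overlay.coffee overlay_part_

ls overlay_part_*   # 5 chunks for a 2,126-line file
```

Splitting on method boundaries instead of a fixed line count keeps each chunk self-contained, but takes manual effort.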

For the file mentioned above (2,126 lines, 83 KB), the tokenizer reports:

Tokens: 20,431
Characters: 82,749

I think Sonnet 3.5 should be able to handle that easily.

Not sure if Cursor only passes a small part of the file to the model, even for a small file. That would be weird.