Using .txt files as documentation/context + increasing token limit

The doc crawling feature is already amazing and one of the benefits of using Cursor. That said, for situations where it is buggy, I was wondering if in the future we could either upload documents, or paste all of the documentation into a .txt file, and have Cursor use that documentation as context to generate more helpful code for things released after the model's training cutoff date.

Currently, I am not sure how Cursor handles things when documentation takes up a majority of the tokens, or even exceeds 8k.

For now, a workaround is to open a new file in the editor (Cmd+N), paste the documentation into that file, and then tag it with @file.


How can we see how many tokens each request uses? I fed about 500 pages of docs into it and I'm not sure if I did it correctly :sweat_smile:

@HappyQuokka I don't think you can see how many tokens were used (except in your OpenAI dashboard if you use your own API key). But if the file exceeds Cursor's token limit, it will only use the most relevant chunks from the file and display them in the "Long-file Details" dropdown in Chat.


It seems there is some indexing across files that gets referenced to find the correct ones to put into context: the chat responds with "used page X, used page Y" if I tag 'all docs', so the system tries to find the correct pages from the huge list I referenced. I wonder if someone from the Cursor team can chime in and shed light on this advanced 'behind the scenes' black magic :smiley: . But the suggestions it makes are kind of 50/50; sometimes it hallucinates over the exact same pages it just claimed to have referenced, so it tries to chew through them while responding to the query, but not always correctly :woozy_face:
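For what it's worth, the "black magic" described above is probably some form of chunk retrieval: split the docs into chunks, score each chunk against the query, and put only the top matches into context. Cursor's actual implementation is not public and very likely uses embedding vectors; the toy sketch below just illustrates the general idea with a stdlib-only bag-of-words cosine similarity:

```python
# Toy illustration of retrieval over long docs (NOT Cursor's real pipeline):
# split text into fixed-size word chunks, score each against the query with
# bag-of-words cosine similarity, and return the best-matching chunks.
from collections import Counter
import math

def chunk(text: str, size: int = 400) -> list[str]:
    """Split text into chunks of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(docs: str, query: str, k: int = 3, size: int = 400) -> list[str]:
    """Return the k chunks of `docs` most similar to `query`."""
    q = Counter(query.lower().split())
    scored = [(cosine(Counter(c.lower().split()), q), c) for c in chunk(docs, size)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [c for score, c in scored[:k] if score > 0]
```

This also explains the 50/50 hallucination problem: retrieval only decides which chunks reach the model; it cannot force the model to stay faithful to them.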

It would be very promising if it worked in a way where the info from these docs was 100% utilized in generating the answer, so it couldn't contradict the docs or 'not notice' the knowledge provided in them.

By the Cursor token limit, do you mean the context window of the model they use?

This approach is literally the same as pasting into a ChatGPT chat session, correct? That's a 'worst case scenario' kind of workflow for people who came to a new piece of software looking for 'something better than ChatGPT'. It's definitely a working method for small things and small snippets; it's just that we're looking to Cursor AI as a savior from this exact workflow, which we already use with ChatGPT :sweat_smile: .
