The doc crawling feature is already amazing and one of the benefits of using Cursor. That being said, for situations where it is buggy, I was wondering if in the future we could either upload documents or paste all the documentation into a .txt file, and then have Cursor use that documentation to provide relevant context and generate more helpful code for things that are after the model's knowledge cutoff date.
Currently, I am not sure how Cursor handles things when the documentation takes up the majority of the context tokens or even exceeds 8k.
@HappyQuokka I don't think you can see how many tokens were used (except in your OpenAI dashboard if you use your own API key). But if the file exceeds Cursor's token limit, it will only use the most relevant chunks from the file and display them in the "Long-file Details" dropdown in Chat.
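For anyone curious what "use the most relevant chunks" could look like in practice, here is a minimal sketch of one possible approach: split the long file into chunks, score each chunk against the question, and keep only the top few for the prompt. This is not Cursor's actual implementation; the function names and the plain bag-of-words cosine scoring are illustrative assumptions (a real system would more likely use embeddings).

```python
# Hypothetical sketch of "most relevant chunks" retrieval from a long doc file.
# NOT Cursor's real implementation; the scoring method and names are assumptions.
import math
import re
from collections import Counter

def chunk(text: str, max_words: int = 300) -> list[str]:
    """Split a long document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def vectorize(text: str) -> Counter:
    """Very rough bag-of-words vector (lowercased token counts)."""
    return Counter(re.findall(r"[a-z0-9_]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_chunks(doc: str, query: str, k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query, i.e. what would fit into the prompt."""
    q_vec = vectorize(query)
    scored = [(cosine(vectorize(c), q_vec), c) for c in chunk(doc)]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored[:k] if score > 0]

# Usage: only the highest-scoring chunks get sent to the model alongside the question.
# relevant = top_chunks(open("docs.txt").read(), "how do I configure the retry policy?")
```

The key point is that whatever didn't score highly simply never reaches the model, which is why answers can "not notice" parts of your docs.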
It seems there is some indexing across files that gets referenced to find the correct ones to put into context: if I tag "all docs", the chat responds with "used page X, used page Y", so the system apparently tries to find the correct pages from the huge list I reference. I wonder if someone from the Cursor team can chime in and shed light on this advanced "behind the scenes" black magic. But the suggestions it makes are kind of 50/50: sometimes it hallucinates over the exact same pages it just claimed to have referenced, so it tries to work them into the response to the query, but not always correctly.
It would be very promising if it worked in a way where the info from these docs is fully utilized in generating the answer, so the model couldn't contradict the docs or "not notice" the knowledge provided in them.
This approach is literally the same as pasting into a ChatGPT chat session, correct? That's a "worst case scenario" kind of workflow for people who are looking at new software hoping to find "something better than ChatGPT". But it's definitely a working method for small things and small snippets; it's just that we're looking to Cursor AI as a savior from this exact workflow, which we already use with ChatGPT.