To improve my understanding, could a team member explain how context works in Cursor?
Is the context (e.g. my project files) sent with every chat request, like how ChatGPT works? Does that mean the VERY large context of the project is sent each time? Or is only the current file sent? Or are embeddings created from the code context and updated as needed? etc.
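To be clear about what I mean by the embeddings option, here's a rough sketch of how I imagine it could work: index the project files once, re-embed only files that change, and retrieve just the most relevant files for each chat query. The file names and the toy bag-of-words embedding are entirely my own illustration, not Cursor's actual implementation:

```python
# Purely illustrative sketch of embedding-based retrieval over project
# files; NOT Cursor's actual implementation. A real system would use a
# learned embedding model, not the toy bag-of-words vectors below.
import math
import re

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

# Hypothetical project files, indexed once; only changed files would
# need to be re-embedded.
docs = {
    "auth.py": "def login user password check credentials token",
    "db.py": "def connect url open database session pool",
}
vocab = sorted({t for d in docs.values() for t in tokenize(d)})

def embed(text: str) -> list[float]:
    """Toy embedding: normalized word counts over the shared vocabulary."""
    counts = [float(tokenize(text).count(w)) for w in vocab]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

index = {name: embed(body) for name, body in docs.items()}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k files whose embeddings best match the query."""
    q = embed(query)
    def score(name: str) -> float:
        return sum(a * b for a, b in zip(q, index[name]))
    return sorted(index, key=score, reverse=True)[:k]

print(retrieve("how does the login credentials check work?"))  # ['auth.py']
```

The point is that only the retrieved snippets (plus the current file and chat history) would need to go into the prompt, rather than the whole project every time.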
Any plans to use OpenAI’s new Assistants API to maintain project state? Could this reduce ongoing token consumption?
Are you saying that the threads feature wouldn’t help you at all, because you already have a similar enough feature in place on your servers? I’m wondering whether any part of it could help reduce costs or increase the context limit?
The reason I’m asking is that Cursor sometimes doesn’t seem to recognise previous context in chats. It feels like I’ve reached a limit, or like it’s trying to cut costs.
Generally it works well, but there is the odd occasion.