Apologies if this is a duplicate or in error, but several times I’ve burnt GPT-4 credits asking questions about the code (I’m fairly sure I pressed “with codebase”) where I’ve received a reply that’s entirely irrelevant, even in the wrong programming language. It seems it hasn’t even used the current editor file for context.
This seems to have happened relatively soon after creating a new AI project, although the codebase auto-indexing looked fine; it wasn’t stuck. Some combination of forcing a re-index and/or starting a new chat seemed to clear it, or possibly just the passage of time.
Sorry this is such a terrible bug report, but I haven’t been able to reproduce it reliably yet. I’ll update if I can figure out steps to reproduce, unless someone beats me to it.
In the current version, 0.12.0, “with codebase” only uses the codebase if it is fully (100%) indexed. Could it be that your repos were only partially indexed?
If so: we have changed this behavior in 0.12.1 (coming out today), lowering the threshold to 80%.
If you don’t think this could explain the behavior, please let me know, and I’d want to investigate more.
Would that cause it to use no context at all, not even the current file?
If so, I guess it might be worth warning the user and asking if they want to wait. In general, running chat questions with no context at all tends to give very poor-quality results, and it seems counterintuitive that choosing “with codebase” could actually result in less context.
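To make the suggestion concrete: the gating behavior described above, plus a fallback to the current file instead of no context, could look something like this sketch. All names here are hypothetical; this is not the editor’s actual code, just an illustration of the logic under discussion:

```python
# Hypothetical sketch of "with codebase" gating, based on the thread above.
# INDEX_THRESHOLD was effectively 1.00 in 0.12.0, lowered to 0.80 in 0.12.1.
INDEX_THRESHOLD = 0.80

def build_chat_context(index_progress, open_file, codebase_files):
    """Return the context to attach to a 'with codebase' chat question.

    index_progress: fraction of the codebase indexed (0.0 to 1.0).
    open_file:      path of the current editor file, or None.
    codebase_files: files the index would supply as context.
    """
    if index_progress >= INDEX_THRESHOLD:
        # Index is complete enough: include the matching codebase files.
        return {"mode": "codebase", "files": codebase_files}
    # Index not ready: fall back to at least the current editor file
    # (and ideally warn the user), rather than sending no context at all.
    return {"mode": "current-file", "files": [open_file] if open_file else []}
```

With this shape, a partially indexed repo still gets the current file as context instead of silently getting nothing.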
Not certain whether that’s what’s been happening in my case. It’s only very small stuff so it shouldn’t take long to index, but I guess it’s not impossible. I’ll keep an eye out for it. Thanks for the follow-up!
I’ve reproduced the bug by creating a New AI Project. After making one, the codebase index progress sits at 100%, but if I go to Chat and ask something like:
“check this codebase for bugs. are there any parts of the code that don’t make sense, or which wouldn’t build correctly?”
I get an answer like:
I’m sorry, but I can’t provide the help you’re looking for. You haven’t provided any codebase or specific code files for me to review…
I guess the files generated by New AI Project somehow aren’t getting picked up for indexing, so it thinks the codebase is indexed when it’s not. I can confirm that manually forcing it to re-sync the index seems to solve the issue: asking a query “with codebase” then flashes up the list of files as expected and proceeds correctly.
Disclaimer: this is on 0.12.0 as I haven’t run the update yet, so I don’t know if it’s any different on 0.12.1.
Seems likely to be that; I’m still getting the same behaviour in 0.12.1. I can confirm that manually forcing a re-index does still make it pick up the new files. Maybe it could automatically refresh the index after the “All done!” of a New AI Project? Thanks for your help.
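The fix proposed here, kicking off an index re-sync once the New AI Project scaffold finishes, could be sketched roughly as below. Every name in this snippet is made up for illustration; it is not the product’s real API:

```python
# Hypothetical sketch: force an index re-sync once project scaffolding
# finishes, so files written by "New AI Project" are picked up and the
# reported 100%-but-empty index state can't occur.

class Indexer:
    def __init__(self):
        self.indexed_files = set()

    def resync(self, files_on_disk):
        # Re-scan the workspace so the index matches what is on disk.
        self.indexed_files = set(files_on_disk)

def scaffold_new_ai_project(indexer, generate_files):
    """Generate the project files, then refresh the index ("All done!")."""
    files = generate_files()   # writes the new project files to disk
    indexer.resync(files)      # the suggested automatic re-index step
    return files
```

The key point is simply that the re-sync runs after generation completes, instead of relying on the pre-generation index state.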