Hi there,
A friend encouraged me to try Cursor because of its touted whole-codebase inference. I signed up this morning and gave it a try with a project.
Unfortunately, I saw no improvement; if anything, the results were lower quality than plain GPT-4 with an uploaded codebase zip.
For example, as you can see from the chat transcript on the account with the matching username, the AI had no knowledge of the main libraries used in the project. Given a simple, direct request to use methods from one of those libraries, it hallucinated. The editor's IntelliSense-style autocompletion understood the library's methods correctly, but the AI never analysed the library's files to find the correct methods, instead making up non-existent ones.
For the price and the advertised whole-codebase AI assistance, I expected more than what GPT-4 alone can achieve, and it fell short.
GPT-4 with a zip of the whole codebase doesn't do much better, but at least I can then upload a zip of the library in question, have it analyse that, and eventually get code suggestions based on methods that actually exist in the library.
IMO, Cursor should be doing this too: either drawing on existing knowledge of public vendor libraries identified from a dependency file, or simply scanning all the files in the project, as it already does for the IntelliSense-style autocompletion.
Given how few GPT calls showed up against my account quota after these operations, it seems Cursor is either not sending enough context in its prompts or not making good use of its local knowledge of the codebase in the AI integration.
If you can point out an obvious error in my understanding, I'm happy to try the product again; otherwise, I'd like to request a refund, as it isn't usable for my needs.
Cheers,
Shih