Manual ChatGPT usage often better than from within Cursor - why?

I often have simple code changes that don't work when done through Cursor, but which work fine if I instead paste the code into ChatGPT and give it the same prompt I gave Cursor.

The IDE is set to the same model I'm using in ChatGPT, so I'm curious whether Cursor applies some internal prompting of its own that might cause this.
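For context, my mental model of what such internal prompting could look like is something like the sketch below. This is purely a guess at the general pattern of API wrappers; the system prompt and context format are made up, not Cursor's actual internals:

```python
# Purely illustrative: how an API wrapper's prompting can differ from
# pasting into ChatGPT. The system prompt and context format here are
# guesses, not Cursor's actual internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_style(prompt: str) -> str:
    """Roughly what pasting into ChatGPT amounts to: just your prompt."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ide_style(prompt: str, file_snippet: str) -> str:
    """A hypothetical IDE wrapper: adds its own system prompt and the
    selected code as context, either of which can steer the model
    toward different answers than the bare prompt would."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a code assistant. Reply only with the "
                        "minimal edit; do not restate the whole file."},
            {"role": "user",
             "content": f"Selected code:\n{file_snippet}\n\nTask: {prompt}"},
        ],
    )
    return resp.choices[0].message.content
```

If something like that is happening under the hood, the same user prompt could easily produce different output than it does in ChatGPT.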

Does anyone else experience issues like this?

I find myself still having to tab over to ChatGPT much more often than I'd like, since it modifies my code the way I intend at a much higher rate.

Is this with GPT-3.5 or GPT-4? ChatGPT uses the new GPT-4-Turbo model, which might explain the difference in answers. Could you share an example?

ChatGPT has knowledge up to April 2023, while Cursor is still running the API model with a 2021 knowledge cutoff (which apparently is the only option right now if you need to make a lot of programmatic requests). Depending on what you are doing, 1.5 years of more recent knowledge can make a big difference.

It does not really make that much difference; the model's reasoning matters more than up-to-date knowledge. If you need current libraries, that is what indexing docs is for. ChatGPT runs a more recent GPT-4 model, but I still think the one Cursor uses is good, and you can give it current knowledge by indexing docs (rough sketch of that idea below).
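To make the indexing point concrete, here is a simplified sketch of the general retrieval idea (hypothetical, not how Cursor actually implements it): embed chunks of current docs once, then retrieve the closest chunks for each question and prepend them to the prompt so the model sees post-cutoff information.

```python
# Minimal sketch of the doc-indexing idea; hypothetical, not Cursor's code.
from openai import OpenAI
import numpy as np

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """One embedding vector per text chunk."""
    resp = client.embeddings.create(model="text-embedding-ada-002",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

# "Index" the current docs once, up front.
doc_chunks = [
    "...a chunk of up-to-date library documentation...",
    "...another chunk...",
]
doc_vectors = embed(doc_chunks)

def ask_with_docs(question: str, k: int = 2) -> str:
    # Find the k chunks most similar to the question (cosine similarity).
    q = embed([question])[0]
    sims = (doc_vectors @ q) / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n\n".join(doc_chunks[i] for i in np.argsort(sims)[-k:])
    # Prepend the retrieved docs so the model answers from post-cutoff info.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": f"Documentation:\n{context}\n\nQuestion: {question}",
        }],
    )
    return resp.choices[0].message.content
```

With something like this, even a model with a 2021 cutoff can answer questions about libraries released after its training data ends.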
