Outdated Programming Knowledge for all Models

Ask Claude, GPT-4o, or DeepSeek, and they will all tell you Python 3.12 is the latest version. Some will dig in harder when presented with evidence, but either way they reveal that they are completely useless at writing modern code.

The problem is so bad that it has left two of my projects in a state of complete test failure, because these ‘agents’ changed configuration while apparently not having seen an updated copy of the GitHub repos since 2023?!

Why have there been 48 updates to Cursor, and not once did anyone think these models might need to know which versions of JavaScript, Python, or C++ actually exist?

What’s the excuse for not minifying the relevant GitHub repos and shipping that as core knowledge for an application you say will help coders write code?

Could you maybe inform your customers which versions of each language your AI actually understands, or are we supposed to waste tokens finding that out too?

That’s the responsibility of the AI providers like Anthropic, Google, and OpenAI. They run the servers that process requests to the AI models, and they trained all of the models.

Cursor uses the models provided by those companies and has no way to update them for every programming language.

Some users are using the latest programming language versions or libraries/packages.

Others use 10+ year old language versions and must not break them.

Even for the same developer, such information is highly project-specific, and many frameworks note the required language version in a JSON config file.
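
For example, an npm project typically pins the runtime and tooling it expects right in package.json; this snippet is purely illustrative, not taken from any project in this thread:

```json
{
  "name": "example-app",
  "engines": {
    "node": ">=20.0.0"
  },
  "devDependencies": {
    "typescript": "~5.4.0"
  }
}
```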

On the other side, I rarely see projects listing all their dependencies, e.g. which DB and version, which frontend framework and version, etc.

Could you maybe inform your customers which versions of each language your AI actually understands, or are we supposed to waste tokens finding that out too?

  • There is no “Cursor AI” the way you assume. It’s OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, etc.
  • Cursor does have its own models, for applying edits and for autocompletion, for example.
  • AI models are trained on historical data, so they do not have up-to-date information. You can ask a model for its knowledge cutoff date.
  • This is why the Cursor team has provided three solutions: a) Docs, where you can add the docs for the latest version of the framework/library you need and reference them in chat; b) integrated web search; c) Cursor rules, which can be used to give focused info to the AI.

You are right that everyone would benefit if the IT industry had clear standards for this.

The language/framework I am using has this solved and code is produced in the right version.

Others use cursorrules to state the framework and version, DB and version, etc.
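
A minimal sketch of what such a rules file might contain; the stack and versions here are invented purely for illustration:

```
# .cursorrules (project root)
- This project targets Python 3.13; never downgrade the version in pyproject.toml.
- Backend: FastAPI 0.115 with SQLAlchemy 2.0 on PostgreSQL 16.
- Frontend: TypeScript 5 on Node 20.
- If unsure whether an API exists in these versions, check the referenced docs before writing code.
```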

Small joke, but we all know how badly Python version issues plague Python devs :slight_smile:

Fellow Cursor User

If I start a project in C++20, isn’t the first logical thing any coding tool should do to check the docs?

If I share a screenshot, a URL, and the actual text from the Python 3.13 release, at the very least Cursor could accept that as truth for the project; but I have the receipts where it told me I was the liar and tried to rewrite the pyproject file.
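
For context, the Python version a project targets is already declared in machine-readable form in pyproject.toml, so rewriting it means overriding an explicit setting. A minimal sketch, with hypothetical project metadata:

```toml
[project]
name = "example-app"        # hypothetical name, not the poster's project
requires-python = ">=3.13"  # the field an agent should respect, not rewrite
dependencies = [
    "httpx>=0.27",          # illustrative dependency pin
]
```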

This time I am only creating a template project because I couldn’t trust the AI with actual code… and it still broke it!

It’s not enough to just blame Claude when Cursor sold us useless client and project rules that barely work, and even if I were to paste in the last two years of Python version history, minified, it would ignore them.

AI coding is only a useful tool if we know its limitations. I lost weeks of productivity trying to track down how it could destroy both my projects so impressively; that’s a complete waste of time, and I will be using SuperGrok and not touching AI coding tools until we have some transparency.

Quick question, why aren’t you giving them the updated copy?


Good response, but the answer and solution were not obvious. I am now trying to integrate agents that sync documentation context for my code with GitHub using Context7. Success is improving, but it still seems that Microsoft, which owns GitHub, not training its AI coding models on it (they have their OpenAI stake and could do this) is either a deliberate field test on us coders or a cash grab to waste our money on credits as we learn the extra hard way.
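
For anyone attempting the same setup: Context7 is usually wired into Cursor as an MCP server. A minimal sketch of the config, assuming the publicly documented @upstash/context7-mcp package and Cursor’s .cursor/mcp.json location (verify both against the current docs before relying on this):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```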

Either way, we keep figuring it out.