Why the push for agentic workflows when models can barely follow a single simple instruction?

Similar experience here. I was religious about keeping modules under 800 lines, then I discovered a recent Python module had grown to over 3,000 lines without introducing any problems or errors.

It was the GPT-5 models searching the file for only the relevant code blocks and then isolating that context into their task, without blowing out the chat context.
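
Roughly what that seems to amount to, as a sketch in plain Python (the file name, keywords, and the extract_relevant_blocks helper are made up for illustration, not anything the models or Copilot actually expose):

import re

def extract_relevant_blocks(path: str, keywords: list[str]) -> str:
    """Return only the top-level def/class blocks that mention a keyword,
    rather than feeding the whole 3,000-line module into chat context."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    # Split at each top-level def/class; each chunk is one definition plus its body.
    blocks = re.split(r"(?m)^(?=def |class )", source)
    relevant = [b for b in blocks if any(k in b for k in keywords)]
    return "\n".join(relevant)

# Example: only the blocks touching "invoice" or "tax_rate" get pulled in.
snippet = extract_relevant_blocks("billing.py", ["invoice", "tax_rate"])
print(f"{len(snippet.splitlines())} lines selected instead of the full file")

The point is just that the cost of a large file becomes proportional to the blocks actually touched, not to the file's total length.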

Very unexpected, and very positive, results. I am no longer looking to refactor based on size alone, especially if the code is working fine. The strategy for handling large files has improved dramatically.

Version: 1.7.54 (user setup)
VSCode Version: 1.99.3
Commit: 5c17eb2968a37f66bc6662f48d6356a100b67be0
Date: 2025-10-21T19:07:38.476Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26200

Yep, the old rules don’t matter anymore, or at least not as much.