Some Thoughts on Tab Auto Completion

The current hotspot in AI-assisted programming is fully autonomous agents and CLI agents, which are widely discussed across media platforms, along with each new LLM release.

All of this seems fantastic, but there is little discussion of Tab Completion and its related next-action prediction. With nearly 20 years of programming experience, I am not resistant to new technology trends; on the contrary, I really enjoy adopting new technologies and applying them in actual development.

In my practical development work, I rely heavily on Tab Completion while also using agents for modular development tasks. The two do not conflict; each has its own applicable scenarios. Tab Completion is especially helpful on existing projects that need targeted modifications and refactoring, while agents are better suited to modular tasks with clear dependencies and interrelations.

With so many people discussing agents, I am somewhat concerned that Tab Completion and next-action prediction might one day fade from attention and their optimization stall.

Currently, Cursor's Tab Completion is already quite impressive, and I often wonder what direction it will take in the future and how far it will ultimately evolve.

Of course, in my usage, Cursor’s Tab Completion still has some issues that I hope will be acknowledged and addressed:

  1. I’ve noticed that the latency of Tab Completion fluctuates during my use. I previously made a dedicated post to give feedback on this issue, and after carefully checking my network, it doesn’t seem to be related to my connection. At its worst, the streaming time can reach 5 to 7 seconds, rendering Tab Completion practically unusable. During milder fluctuations, the streaming time ranges from 800 to 1500 milliseconds, which still produces a noticeable delay. At its best, the streaming time stays between 300 and 500 milliseconds, making programming feel incredibly smooth. I’m unsure what causes these fluctuations; after investigating various possibilities, I suspect instability in the tab inference server.

  2. I hope Tab Completion can support a larger context to make code predictions more accurate.

Anything else? I’m not sure, but for now, as long as Tab Completion’s latency fluctuations stay minimal, I am very satisfied.