AI coding assistance should be rethought

I think Cursor (especially the Composer feature) has a chance to stand out from the pack because of the head start it has. It’s a fantastic tool, but I think the developers are getting sucked into the trap of making it too much like other AI coding assistants.

The secret sauce in building an AI SWE assistant/agent seems to involve the following:

  • Base model (this could easily be swapped for any available LLM)
  • Context building (I’m guessing this is where the real magic occurs)
  • Interface/UX
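
Roughly how I picture those three pieces staying decoupled, as a sketch in TypeScript (all of the names here are made up, not anyone’s actual API):

```ts
// Hypothetical interfaces, just to illustrate keeping the three layers swappable.

// Base model: anything that turns a prompt into text; one per provider.
interface BaseModel {
  complete(prompt: string): Promise<string>;
}

// Context building: gathers the relevant files/symbols before the model is called.
interface ContextBuilder {
  build(request: string, workspaceRoot: string): Promise<string>;
}

// Interface/UX: how the request comes in and how the result is shown back.
interface Presenter {
  present(result: string): Promise<void>;
}

// The agent loop only depends on the interfaces, so any layer can be replaced
// without touching the others.
async function handleRequest(
  request: string,
  workspaceRoot: string,
  model: BaseModel,
  context: ContextBuilder,
  presenter: Presenter,
): Promise<void> {
  const prompt = await context.build(request, workspaceRoot);
  const result = await model.complete(`${prompt}\n\nUser request: ${request}`);
  await presenter.present(result);
}
```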

I feel like the interface/UX layer is where everyone is opting for really simple things like chat interfaces. Chat is just the input mechanism for getting information to the agent, and tbh it could easily be swapped for transcribed voice.
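
As a sketch of that swap (hypothetical names again), the agent only ever receives text, so the front-end behind it barely matters:

```ts
// Hypothetical input abstraction: the agent sees text either way.
interface InputSource {
  nextRequest(): Promise<string>;
}

class ChatInput implements InputSource {
  constructor(private readLine: () => Promise<string>) {}
  nextRequest(): Promise<string> {
    return this.readLine(); // text typed into the chat box
  }
}

class VoiceInput implements InputSource {
  constructor(
    private record: () => Promise<Uint8Array>,                   // capture an audio sample
    private transcribe: (audio: Uint8Array) => Promise<string>,  // speech-to-text
  ) {}
  async nextRequest(): Promise<string> {
    return this.transcribe(await this.record()); // same text the agent already expects
  }
}
```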

What I feel is low-effort is presenting the results of the LLM’s cognition back as chat. You have the opportunity to show interesting visual feedback to the user as opposed to just rows and rows of text.

Editing code in the editor is a great visual, but attaching a tooltip or similar to that code to explain it is way more useful than dumping that output into the chat log.
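
Something like a hover provider in a VS Code-style extension could carry that explanation. This is only a sketch, assuming the assistant fills in a hypothetical `explanationsForUri` map after each edit it applies:

```ts
import * as vscode from 'vscode';

// Hypothetical store the AI assistant populates after applying an edit:
// for each file, the ranges it changed and an explanation of each change.
const explanationsForUri = new Map<string, { range: vscode.Range; explanation: string }[]>();

export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.languages.registerHoverProvider({ scheme: 'file' }, {
      provideHover(document, position) {
        const edits = explanationsForUri.get(document.uri.toString()) ?? [];
        const hit = edits.find((e) => e.range.contains(position));
        if (!hit) {
          return undefined;
        }
        // Show the model's explanation attached to the edited code
        // instead of appending it to the chat log.
        return new vscode.Hover(new vscode.MarkdownString(hit.explanation), hit.range);
      },
    }),
  );
}
```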

Additionally, when multiple files are being edited, I don’t think a small Composer window with tabs is ideal. I think you can take control of the entire viewport and enter a multi-file editing experience with AI that is separate from the normal paradigm of switching between open tabs.

Also, leveraging git commits the way Aider does seems like a really good win.
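
Aider records each AI edit as its own commit with a descriptive message, so every change can be reviewed or reverted with normal git tooling. A minimal sketch of that behavior (the `ai:` message prefix and the function name are just my assumptions):

```ts
import { execFileSync } from 'node:child_process';

// Commit one AI edit as a standalone git commit so it is easy to review or revert later.
function commitAiEdit(repoRoot: string, changedFiles: string[], summary: string): void {
  // Stage only the files the assistant touched.
  execFileSync('git', ['add', '--', ...changedFiles], { cwd: repoRoot });

  // One commit per AI edit, labeled so it stands out in the history.
  execFileSync('git', ['commit', '-m', `ai: ${summary}`], { cwd: repoRoot });
}

// Example: after the assistant rewrites two files, record the change.
// commitAiEdit('/path/to/repo', ['src/app.ts', 'src/utils.ts'], 'extract retry helper');
```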

I can go on, but I just wanted to share these initial thoughts.
