Enhancing AI's Ability to Seek Clarification

The AI in Cursor is intelligent, but there’s room for improvement.

Currently, especially when using Composer, if a user believes they’ve provided clear instructions but the AI misinterprets them, two significant issues can arise:

  1. Users who fully trust the AI’s output may encounter unexpected problems during execution.

  2. Users who are skeptical of the AI’s work may need to meticulously review each line of code, leading to a suboptimal user experience.

Most responsible developers likely opt for the second approach. This post aims to explore potential solutions and ways to make the interaction more efficient.

One idea that comes to mind is encouraging the AI to proactively ask the user questions when it encounters a concept it deems crucial to the task but lacks clarity on. (Ideally, a smarter AI would even ask, “Should I also do xxxxx for you?”)

I initially observed this pattern in Perplexity.ai, though I’m unsure why it was subsequently removed.
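To make the idea concrete, here is a minimal sketch of a pre-flight “clarification pass” an assistant could run before acting on an instruction. Everything here (the `clarifying_questions` function, the `VAGUE_TERMS` list, the heuristics) is invented for illustration and is not Cursor’s actual implementation; a real system would presumably let the model itself judge ambiguity rather than use keyword rules.

```python
# Hypothetical sketch: a pre-flight "clarification pass" an AI coding
# assistant could run before acting on an instruction. All names and
# heuristics here are invented for illustration only.

VAGUE_TERMS = {
    "it", "this", "that", "something", "somehow", "etc", "stuff",
    "optimize", "improve",
}

def clarifying_questions(instruction: str) -> list[str]:
    """Return questions to ask the user when the instruction looks ambiguous."""
    questions = []
    words = instruction.lower().replace(",", " ").replace(".", " ").split()

    # Vague wording: ask what the pronoun or catch-all term refers to.
    vague = sorted(set(words) & VAGUE_TERMS)
    if vague:
        questions.append(
            f"Could you clarify what you mean by: {', '.join(vague)}?"
        )

    # No file or function mentioned: ask where the change should go.
    if not any("." in w or w.endswith("()") for w in instruction.split()):
        questions.append("Which file or function should this change apply to?")

    return questions
```

With this, a request like “fix it somehow” would trigger questions before any code is written, while “rename foo() in utils.py to bar()” would pass straight through — which is exactly the trade-off the feature would need to get right.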

Potential Benefits

Implementing this feature could:

  • Reduce misunderstandings between users and the AI

  • Increase the accuracy of the AI’s output

  • Enhance user confidence in the generated code

  • Streamline the overall development process

  • Decrease server load by cutting down ineffective back-and-forth and unnecessary code generation

It’s worth noting that understanding exactly what the customer means is crucial for any competent developer, whether human or AI. This concept extends beyond AI-human interaction: it is deeply rooted in customer communication and software engineering management practices. By implementing this feature, we’re not just improving AI functionality, but also mirroring best practices in software development.

(By the way: I just discovered a paper that may help Cursor enhance the reliability of its AI: [2409.03733] Planning In Natural Language Improves LLM Search For Code Generation (arxiv.org))