Support full Vibe coding with agent/auto/max modes for local LLMs (e.g. Ollama, LM Studio)

:sparkles: Request:

Support full Vibe Coding capabilities, including the following modes (a rough sketch of the intended semantics follows the list):

  • Agent mode: Interactive LLM-powered assistant that can reason and refine code iteratively.

  • Auto mode: One-shot command execution to generate or refactor code autonomously.

  • Max mode: Multi-step autonomous agent that can manage entire project tasks end-to-end (e.g., scaffold a repo, implement modules, test, document).
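
To make the request concrete, here is a very rough sketch of how the three modes could differ in control flow. This is purely illustrative and not Cursor's actual internals; `Mode`, `run`, and `llm_complete` are hypothetical names, and `llm_complete` stands in for a call to a local model (see the endpoint sketch under "Local LLM Compatibility" below):

```python
from enum import Enum

class Mode(Enum):
    AGENT = "agent"  # interactive: iterate with the user in the loop
    AUTO = "auto"    # one-shot: a single generate/refactor pass
    MAX = "max"      # autonomous: multi-step plan/act loop until done

def llm_complete(prompt: str) -> str:
    # Placeholder: wire this to a local LLM endpoint.
    return f"[model output for: {prompt[:40]}...]"

def run(mode: Mode, task: str, max_steps: int = 10) -> str:
    if mode is Mode.AUTO:
        # Auto: generate once and return the result.
        return llm_complete(task)
    if mode is Mode.AGENT:
        # Agent: refine the draft until the user accepts it.
        draft = llm_complete(task)
        while feedback := input("Feedback (empty line to accept): "):
            draft = llm_complete(f"{task}\nDraft:\n{draft}\nRevise per: {feedback}")
        return draft
    # Max: autonomous multi-step loop; the model decides when it is done.
    log = f"Goal: {task}"
    for _ in range(max_steps):
        step = llm_complete(f"{log}\nPropose the next step, or reply DONE:")
        if step.strip().upper() == "DONE":
            break
        log += f"\nCompleted: {step}"
    return log
```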

:bullseye: Local LLM Compatibility:

Ensure the system works with local LLM environments, such as:

  • Ollama

  • LM Studio
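
Both of these already expose OpenAI-compatible HTTP endpoints (Ollama on port 11434 by default, LM Studio on port 1234), so in principle a standard OpenAI client can be pointed at them. A minimal sketch, assuming Ollama's default port and a locally pulled model name:

```python
from openai import OpenAI

# Ollama default: http://localhost:11434/v1
# LM Studio default: http://localhost:1234/v1
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # local servers ignore the key, but the client requires one
)

resp = client.chat.completions.create(
    model="llama3",  # any model you have pulled locally
    messages=[{"role": "user", "content": "Refactor this function to be pure: ..."}],
)
print(resp.choices[0].message.content)
```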

:light_bulb: Use Case:

Developers running open-source or privacy-sensitive projects can fully leverage LLM-based coding workflows on their local machines without depending on cloud APIs.


Let me know if you’d like a PR draft or architecture suggestion for implementing Vibe Coding.

3 Likes

This sounds interesting.

1 Like

I am totally in favor of this for Premium users: once we have used up our fast requests and the slow pool is crawling, we could switch over to our own local API keys or local models and still enjoy all of Cursor’s features. That way they would keep paying users subscribed without overloading capacity for everyone else.

1 Like