Please support DeepSeek-Coder-V2

It looks very promising, and could even run locally on some machines.


Hi, it seems quite easy to use deepseek-coder-v2 in Cursor; you can check this tutorial found on Twitter: x.com


It seems ModelBox is still in beta and not publicly accessible.


According to the DeepSeek-V3 README:

  1. SGLang: Fully support the DeepSeek-V3 model in both BF16 and FP8 inference modes.
  2. LMDeploy: Enables efficient FP8 and BF16 inference for local and cloud deployment.
  3. TensorRT-LLM: Currently supports BF16 inference and INT4/8 quantization, with FP8 support coming soon.

All of these expose an OpenAI-compatible API, so you should be able to replace ModelBox with any of them.
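For example, here is a minimal sketch of pointing the standard OpenAI Python client at a locally served model. The server command, port, and model name below are assumptions for illustration; adjust them to whichever framework you actually deploy:

```python
# Minimal sketch: talk to a locally served DeepSeek model through its
# OpenAI-compatible endpoint. Assumes a server is already running, e.g.
# launched with SGLang (model path and port are illustrative):
#   python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3 --port 30000
from openai import OpenAI

# Point the client at the local server instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:30000/v1",  # assumed local endpoint
    api_key="not-needed",                  # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",  # model name as registered by the server
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
)
print(response.choices[0].message.content)
```

Since LMDeploy and TensorRT-LLM also speak the same protocol, the same `base_url` override should work with their OpenAI-compatible servers too.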

EDIT: sorry, yes, I’m looking at V3… which is probably what you’d want to run now though. :slight_smile: