The chat and composer still don't support images.
Is it just me, or do all users have the same issue?
Hey, the DeepSeek models currently don't have vision capabilities, so at this stage it doesn't work. I hope they'll add vision to their models soon, and then it will become possible.
I've tried providing images on the DeepSeek website and mobile app with both DeepSeek V3 and DeepSeek R1, and they can easily turn an image into code.
According to this article, DeepSeek doesn't support images through the API directly, but it can with additional support from the provider. I think we'll implement this in the future.
Thanks for the information. I can't wait for DeepSeek vision in the API.
They use OCR to extract text from the image; the model itself doesn't have vision capability. You can test this by uploading an image without any text in their UI, and it'll say something like "no text extracted".
To add to Dean's message: we believe DeepSeek is not currently multimodal, but instead runs your images through an OCR layer first and provides that text to DeepSeek's model, which isn't the same as gpt-4o being truly multimodal!
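The OCR-first pipeline described above can be sketched roughly like this. This is a hypothetical illustration, not DeepSeek's actual implementation: `extract_text` stands in for a real OCR engine (e.g. Tesseract) and is stubbed here so the example is self-contained, and `build_prompt` shows how extracted text could be spliced into a text-only prompt.

```python
def extract_text(image_bytes: bytes) -> str:
    """Stub OCR step. A real pipeline would call an OCR engine here;
    this stub pretends any image containing the marker b'TEXT' has text."""
    return "def add(a, b): return a + b" if b"TEXT" in image_bytes else ""

def build_prompt(user_message: str, image_bytes: bytes) -> str:
    """Route the image through OCR and forward only text to the model.

    This mirrors the behaviour described above: with no recoverable text,
    there is nothing to send, so the UI reports "no text extracted"."""
    extracted = extract_text(image_bytes)
    if not extracted:
        return "no text extracted"
    return f"{user_message}\n\n[Text extracted from attached image]\n{extracted}"

# A screenshot of code yields a normal text prompt...
print(build_prompt("Convert this to Python:", b"...TEXT..."))
# ...but a photo with no text in it cannot reach the model at all.
print(build_prompt("Convert this to Python:", b"\x89PNG photo of a cat"))
```

The key point the sketch makes is that the model only ever sees a string, so any visual information that OCR cannot capture (layout, diagrams, photos) is lost, unlike a genuinely multimodal model that consumes the pixels directly.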