Groq inference speed

With Groq's inference speed, it is clearly the future for code assistants.

How is Cursor progressing on getting a deal with them?



I just added Mixtral to Cursor as a model…

Although it’s currently saying “does not work with your current plan or api key” :rage:
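In the meantime, a model like Mixtral can be exercised outside Cursor, since Groq exposes an OpenAI-compatible chat completions endpoint. A minimal sketch, assuming Groq's base URL and the `mixtral-8x7b-32768` model name (both assumptions about Groq's API, not anything Cursor documents) — it only builds the request body; actually sending it requires a Groq API key:

```python
import json

# Assumed Groq OpenAI-compatible base URL (an assumption, verify against
# Groq's docs); requests would be POSTed to {GROQ_BASE_URL}/chat/completions.
GROQ_BASE_URL = "https://api.groq.com/openai/v1"

def build_chat_request(prompt: str, model: str = "mixtral-8x7b-32768") -> dict:
    """Build an OpenAI-style chat completions request body.

    The model name is a hypothetical Groq model identifier for Mixtral.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # keep output deterministic-ish for coding tasks
    }

body = build_chat_request("Write a hello-world function in Python")
print(json.dumps(body, indent=2))
```

To send it for real you would attach an `Authorization: Bearer <GROQ_API_KEY>` header and POST the JSON to the chat completions path.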

Do you know if the output quality of the faster models is the same as GPT-4/Opus? If it is noticeably worse, I would gladly keep waiting a little longer to get a more useful answer while coding.

Mixtral is not that reliable, sadly. We will try to make Llama 70B good with prompting; hopefully that will work well for everyone.