Let's enter our GROQ API key and try the models Groq makes available. Keep in mind that `llama-3.1-8b-instant` and `llama-3.1-70b-versatile` currently have an 8k `max_tokens` (output) limit, even though their actual context window is 131,072 tokens.
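A minimal sketch of how this could look with the `groq` Python SDK. The model table below just encodes the limits stated above (assumption: values current as of writing; check Groq's docs for updates), and the helper clamps the requested output tokens to the model's limit:

```python
# Output-token limits vs. context windows for the two models mentioned above
# (assumption: figures as stated in this note; verify against Groq's docs).
GROQ_MODELS = {
    "llama-3.1-8b-instant": {"context_window": 131_072, "max_tokens": 8_000},
    "llama-3.1-70b-versatile": {"context_window": 131_072, "max_tokens": 8_000},
}

def build_request(model: str, prompt: str, max_tokens: int = 8_000) -> dict:
    """Build kwargs for groq.Groq().chat.completions.create(),
    clamping max_tokens to the model's current output limit."""
    limit = GROQ_MODELS[model]["max_tokens"]
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, limit),
    }

# Usage (requires `pip install groq` and GROQ_API_KEY in the environment):
# import os
# from groq import Groq
# client = Groq(api_key=os.environ["GROQ_API_KEY"])
# resp = client.chat.completions.create(
#     **build_request("llama-3.1-8b-instant", "Hello!"))
# print(resp.choices[0].message.content)
```

Asking for more output tokens than the model allows is silently clamped here rather than producing an API error.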