Microsoft has introduced Phi-3 Mini, a lightweight AI model that can follow complex instructions much like larger models.
The model reportedly has 3.8 billion parameters and is available in two context-length variants: 4K and 128K tokens.
Additionally, Microsoft plans to release two larger models in the family, Phi-3 Small and Phi-3 Medium, with 7 billion and 14 billion parameters, respectively.
Microsoft claims the model outperforms Phi-2, the small language model it launched last December, as well as larger models such as GPT-3.5, while being cheaper to run and efficient enough for personal devices. The model is available on Azure, Hugging Face, and Ollama.
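For readers who want to try the model, here is a minimal sketch of loading it through the Hugging Face transformers library. The model identifier microsoft/Phi-3-mini-4k-instruct and the generation settings are assumptions for illustration; check the Hugging Face hub listing for the exact names and requirements.

```python
# Minimal sketch: load Phi-3 Mini from Hugging Face and generate a reply.
# The model ID below is an assumption; verify it on the Hugging Face hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed 4K-context variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize what a small language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On Ollama, the equivalent would be a single command such as `ollama run phi3`, which downloads the model and opens an interactive prompt on a personal device.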