HuggingFace has launched SmolLM, a family of small language models (SLMs) with 135 million, 360 million, and 1.7 billion parameters, trained on a new dataset called SmolLM-Corpus. The models are available as transformers and ONNX checkpoints, with GGUF compatibility planned.
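Since the checkpoints are published in the standard transformers format, they can be loaded with the usual AutoModel APIs. Below is a minimal sketch of running the smallest model; the repository id `HuggingFaceTB/SmolLM-135M` is assumed here for illustration and should be verified against the HuggingFace Hub.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed for illustration; confirm on the HuggingFaceTB Hub org
checkpoint = "HuggingFaceTB/SmolLM-135M"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Generate a short continuation on CPU; small enough for a laptop or phone-class device
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the 135M and 360M variants have such small footprints, this same code runs comfortably without a GPU, which is the on-device use case highlighted below.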
HuggingFace claims that the SmolLM models strike a balance between model size and performance, with the 135 million-parameter model reportedly outperforming the best existing model under 200 million parameters.
Analyst QuickTake: This adds to a growing wave of SLM launches from companies such as Microsoft, Google, Meta, and Stability AI. SmolLM's release could meaningfully improve AI accessibility and privacy by allowing models to run directly on personal devices like phones and laptops, removing the need for cloud computing and reducing both operational costs and privacy concerns.