Amazon has introduced support for large language models (LLMs) in Redshift ML, which aims to let customers apply machine learning to their data using simple SQL commands.
The new LLM support in Redshift ML lets users create, train, and deploy machine learning models on their Redshift data. Key features include using LLMs for tasks such as summarizing feedback, extracting entities, and performing sentiment analysis. Users can also create a Redshift ML model that references an LLM endpoint and invoke it via standard SQL commands, as in the sketch below.
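The following is a minimal sketch of how a Redshift ML model referencing a pre-deployed LLM endpoint might be registered and then invoked from SQL. The model, function, endpoint, IAM role, table, and column names are all hypothetical placeholders rather than taken from the announcement, and the exact clause syntax may differ; consult the Redshift ML documentation for the authoritative form.

```sql
-- Sketch only: endpoint name, IAM role, table, and column names are assumed.
-- Register a model that points at an already-deployed LLM endpoint
-- (for example, a SageMaker JumpStart endpoint) so it can be called
-- from SQL as a user-defined function.
CREATE MODEL customer_feedback_llm
FUNCTION summarize_feedback (varchar(65535))
RETURNS varchar(65535)
SAGEMAKER 'jumpstart-llm-endpoint-name'          -- hypothetical endpoint name
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (MAX_BATCH_ROWS 10);

-- Invoke the LLM on Redshift data with standard SQL,
-- here summarizing free-text customer feedback.
SELECT review_id,
       summarize_feedback(review_text) AS summary
FROM customer_reviews
LIMIT 10;
```

Once the model is created, the generated function can be used anywhere a scalar SQL function is allowed, which is what removes the need for a separate inference pipeline outside the warehouse.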
Amazon claims these new capabilities significantly simplify building custom machine learning models for generative AI tasks. It also claims that by integrating LLMs with Redshift ML, users get a more efficient alternative to building and managing separate machine learning pipelines or modules.
Analyst QuickTake: Amazon SageMaker is known for addressing developers' challenges in building and training large language models. In July 2024, it introduced new features for faster auto-scaling of AI models, allowing deployment of single or multiple models through its inference components. In June 2024, Amazon SageMaker launched a fully managed MLflow service to streamline and improve workflows.