Runway, a GenAI-based video creation platform, has launched Gen-3 Alpha, a new AI model that generates video clips from text descriptions and still images. Gen-3 Alpha is available to Runway subscribers, including enterprise customers and creators in Runway's creative partners program.
Gen-3 Alpha offers improved capabilities, including faster generation speed, higher fidelity, and finer control over the structure, style, and motion of generated videos. It is reportedly capable of generating expressive human characters with a range of actions, gestures, and emotions, and can interpret a variety of styles and cinematic terminology.
The company has also partnered with entertainment and media organizations to create custom versions of Gen-3 tailored to specific artistic and narrative requirements.
Analyst QuickTake: This news follows Luma's recent launch of its video generation model, Dream Machine. Meanwhile, GenAI video creation has seen rising activity with new models from major players such as OpenAI's Sora and Google's Veo, reflecting growing advancement and competition in the video AI industry.