London-based Haiper, a developer of generative AI video technology, has launched Haiper 2.0, an upgraded video generation model that creates short clips from user prompts. The company offers cloud services that let consumers generate short videos.
Haiper 2.0 is built on a DiT (Diffusion Transformer) architecture, which combines diffusion and transformer neural network designs. The model generates clips faster and with greater realism than its predecessor, and an upcoming update is expected to raise output resolution to 3,840 x 2,160 pixels (4K UHD).
The company also plans to introduce Video Templates alongside Haiper 2.0, making it easier for creators to explore new use cases without writing complicated prompts.