Researchers from Meta and the University of Oxford have announced VFusion3D, a GenAI model designed to generate high-quality 3D objects from single images or text descriptions. The system aims to enhance 3D content creation in fields such as VR, gaming, and digital design.
VFusion3D leverages pre-trained video AI models to create synthetic 3D training data, enabling the generation of 3D assets from a single image within seconds. Its key features include multi-view video sequence generation, higher human preference ratings than previous systems, and scalability that allows continued improvement as more 3D data becomes available.