Stability AI, the developer of the Stable Diffusion text-to-image model, has released Stable Fast 3D, a model that rapidly generates 3D assets from a single 2D image.
The model reportedly produces a 3D asset from a single image in 0.5 seconds, using a transformer network that forms high-resolution triplanes from the input image. It can also handle large resolutions efficiently, estimate material and illumination properties, and merge components such as the mesh, textures, and material properties into a compact 3D asset.
The company claims the technology will aid the design, architecture, retail, and game development industries; the model is available on Hugging Face.
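For readers who want to try the release, the weights can be fetched from Hugging Face with the huggingface_hub client. The sketch below is illustrative only: the repository ID "stabilityai/stable-fast-3d" is an assumption, the repository may be gated behind a license acceptance (requiring an access token), and actual inference is run with the code Stability AI ships alongside the weights rather than anything shown here.

```python
# Minimal sketch: download the Stable Fast 3D release files from Hugging Face.
# Assumptions: the repo ID below is illustrative; the repo may be gated, in
# which case a Hugging Face access token must be supplied.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="stabilityai/stable-fast-3d",  # assumed repository ID
    # token="hf_...",                      # uncomment if the repo is gated
)
print(f"Model files downloaded to: {local_dir}")
```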
Analyst QuickTake: Just a few weeks ago, the company launched Stable Video 4D, a model that enables users to upload a video and obtain dynamic videos rendered from different viewpoints.