Helm.ai, a developer of advanced driver assistance system (ADAS), autonomous driving, and robotics software, has launched VidGen-1, a generative AI (GenAI) model designed to produce realistic video sequences for autonomous driving development and validation.
VidGen-1 can generate driving scene videos across varied geographies, vehicle perspectives, and camera types, featuring realistic environments and behaviors such as vehicle movements, pedestrian actions, and adherence to traffic rules. The generated videos cover a range of weather conditions, lighting effects, and even detailed reflections, providing comprehensive scenario coverage.
The model was trained on extensive driving footage using Helm.ai's own neural network architectures and unsupervised training methods.
Analyst QuickTake: This launch follows Helm.ai's release of a high-fidelity, neural network-based virtual scenario generation model in April 2024, which was likewise geared toward enhancing AI software for developing ADAS and autonomous driving systems.