All Updates

  • Product updates: NVIDIA launches open synthetic data generation pipeline to train LLMs (Generative AI Infrastructure, Jun 14, 2024)

This week:

  • Partnerships: Qualcomm and Google partner to develop AI-driven automotive solutions (Auto Tech, Yesterday)
  • Product updates: Meta AI releases LayerSkip to accelerate inference in LLMs (Generative AI Infrastructure, Yesterday)
  • Funding: Freeform secures funding from NVIDIA's NVentures (Additive Manufacturing, Oct 22, 2024)
  • Product updates: Flexxbotics announces compatibility with LMI Technologies for quality inspection (Smart Factory, Oct 22, 2024)
  • Funding: Oxla raises USD 11 million in seed funding to drive commercialization (Data Infrastructure & Analytics, Oct 22, 2024)
  • Product updates: Cohesity enhances Gaia, its AI assistant, with visual data exploration and expanded data sources (Data Infrastructure & Analytics, Oct 22, 2024)
  • Product updates: Finzly launches FedNow service through BankOS platform in AWS Marketplace (FinTech Infrastructure, Oct 22, 2024)
  • Product updates: Runway launches Act-One for AI facial expression motion capture (Generative AI Applications, Oct 22, 2024)
  • Product updates: Ideogram launches Canvas for image manipulation and generation (Generative AI Applications, Oct 22, 2024)
  • Partnerships: UiPath partners with Inflection AI to integrate AI solutions for enterprises (Generative AI Applications, Oct 22, 2024)
Generative AI Infrastructure | Product updates | Jun 14, 2024

NVIDIA launches open synthetic data generation pipeline to train LLMs

  • NVIDIA has launched Nemotron-4 340B, an open family of models that developers can use to generate synthetic data for training LLMs for commercial applications across various industries. Nemotron-4 340B is free to access and offers a scalable way to produce synthetic training data.

  • Nemotron-4 340B consists of base, instruct, and reward models that together form a pipeline (sketched below). The instruct model generates diverse synthetic data that mimics real-world data, while the reward model filters and grades responses on quality attributes such as helpfulness and correctness. The base model can be customized with proprietary data.

  • NVIDIA claims the open pipeline enables developers to build powerful LLMs by generating high-quality synthetic training data in place of real-world data, which is often expensive and difficult to access. The models are optimized for NVIDIA NeMo and TensorRT-LLM for efficient training and inference.
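
The announcement describes a generate-then-filter loop: the instruct model proposes synthetic examples and the reward model scores them so that only high-quality examples are kept. Below is a minimal sketch of that flow in Python; the call_instruct_model and call_reward_model helpers, the 0.7 quality threshold, and the per-prompt sample count are hypothetical placeholders for whatever serving setup a developer puts the Nemotron-4 340B instruct and reward models behind, not an official NVIDIA API.

    # Sketch of a generate-then-filter synthetic data pipeline, modeled on the
    # instruct + reward split described above. call_instruct_model and
    # call_reward_model are hypothetical stand-ins for however the Nemotron-4
    # 340B instruct and reward models are actually served.
    from typing import Callable, Dict, List


    def generate_synthetic_dataset(
        seed_prompts: List[str],
        call_instruct_model: Callable[[str], str],
        call_reward_model: Callable[[str, str], Dict[str, float]],
        min_score: float = 0.7,        # hypothetical quality threshold
        samples_per_prompt: int = 4,   # hypothetical number of candidates per prompt
    ) -> List[Dict[str, str]]:
        """Generate candidate responses, keeping only those the reward model rates highly."""
        kept: List[Dict[str, str]] = []
        for prompt in seed_prompts:
            for _ in range(samples_per_prompt):
                # Instruct model: produce a diverse synthetic response for the prompt.
                response = call_instruct_model(prompt)

                # Reward model: grade the response on quality attributes such as
                # helpfulness and correctness (per the announcement).
                scores = call_reward_model(prompt, response)

                # Filter: keep the pair only if every graded attribute clears the bar.
                if all(value >= min_score for value in scores.values()):
                    kept.append({"prompt": prompt, "response": response})
        return kept

In practice, a developer could pass thin wrappers around whichever inference endpoints host the instruct and reward models, then use the surviving prompt-response pairs to customize the base model alongside proprietary data, in line with the pipeline described above.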
