All Updates

Product updates · Generative AI Infrastructure · Jul 10, 2024
Google DeepMind introduces JEST AI training method; claimed to yield 13x improvement in performance

This week:

Partnerships · Precision Medicine · Yesterday
MGI Tech and Predica Diagnostics collaborate to develop Predica's targeted RNA sequencing tests

Partnerships · Precision Medicine · Yesterday
Renalytix partners with Steno Diabetes Center to advance precision medicine solutions for diabetes and chronic kidney disease

Partnerships · Precision Medicine · Yesterday
Myriad Genetics and Personalis cross-license patents for tumor-informed cancer treatment tests

Funding · Precision Medicine · Yesterday
Element Biosciences raises USD 277 million in Series D funding to commercialize AVITI DNA sequencer and support AVITI24 launch

Funding · Smart Factory · Yesterday
Jacobi Robotics raises USD 5 million in seed funding to expand capacity

M&A · Additive Manufacturing · Yesterday
Formlabs acquires Micronics for undisclosed sum to develop next-generation SLS printers

Partnerships · Foundation Models · Yesterday
Anthropic partners with Amazon Bedrock to fine-tune Claude 3 Haiku model

Funding · FinTech Infrastructure · Yesterday
Lemon.markets raises EUR 12 million in funding to support growth

M&A · Quantum Computing · Yesterday
Kipu Quantum acquires PlanQK platform to enhance accessibility to quantum computing solutions

Partnerships · Quantum Computing · Yesterday
Quantinuum and STFC Hartree Centre partner to enhance quantum computing accessibility in UK
Generative AI Infrastructure · Product updates · Jul 10, 2024

Google DeepMind introduces JEST AI training method; claimed to yield 13x improvement in performance

  • Google DeepMind has published research on JEST (joint example selection), a new AI training method to reduce computing costs and energy consumption. The researchers claim that JEST allows a 13x improvement in performance and a 10x improvement in power efficiency compared to other methods.

  • JEST differs from traditional AI model training techniques by selecting entire batches of data rather than individual data points. A smaller AI model is first used to grade the quality of data batches against a curated, high-quality reference dataset; this grading is then applied to a larger, uncurated dataset, and the large model is trained on the highest-ranked batches identified by the smaller model.

  • Analyst QuickTake: If proven to be a robust method for LLM training, DeepMind will likely look to integrate the training method into future iterations of its “AlphaFold 3” foundation model (announced in May 2024), geared toward molecular structure prediction.
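The batch-level selection described in the bullets above can be sketched in a few lines. The following is a toy mock-up, not DeepMind's implementation: in the actual JEST method the scores come from model losses (a "learnability" signal comparing an in-training learner against a small reference model trained on high-quality data), whereas here the losses are synthetic NumPy arrays and the function names are hypothetical.

```python
import numpy as np

def learnability(learner_loss, reference_loss):
    """Score examples as 'learnable' when the in-training learner still
    finds them hard (high loss) but a small reference model graded on
    high-quality data finds them easy (low loss)."""
    return learner_loss - reference_loss

def select_batches(learner_loss, reference_loss, n_batches, keep):
    """Split a pool of candidate examples into n_batches sub-batches,
    score each sub-batch as a whole (batch-level, not per-example,
    selection), and return the indices of the top `keep` sub-batches."""
    scores = learnability(learner_loss, reference_loss)
    per_batch = scores.reshape(n_batches, -1).mean(axis=1)
    best = np.argsort(per_batch)[::-1][:keep]
    return np.sort(best)

# 8 candidate batches of 4 examples each; batches 2 and 5 are made
# clearly more "learnable" than the rest for demonstration.
rng = np.random.default_rng(0)
learner = rng.uniform(0.4, 0.6, size=32)
reference = rng.uniform(0.4, 0.6, size=32)
learner[8:12] += 1.0    # batch 2: learner struggles here ...
reference[8:12] -= 0.3  # ... but the reference model does not
learner[20:24] += 0.8   # batch 5: likewise, to a lesser degree

chosen = select_batches(learner, reference, n_batches=8, keep=2)
print(chosen)  # → [2 5]
```

The key design point the sketch illustrates is that ranking happens over whole batches (the `mean(axis=1)` step), so the method can exploit interactions within a batch that per-example filtering would miss.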
