Datasaur, a data labeling platform, unveiled a new feature that enables users to label data and train their own customized ChatGPT models. The feature allows both technical and non-technical users to evaluate and rank the responses of a large language model (LLM).
The training feature is termed “Evaluation and Ranking.” Evaluation lets human annotators assess the quality of the LLM's outputs and determine whether the responses meet specific quality criteria, while Ranking streamlines the reinforcement learning process by incorporating human feedback (RLHF).
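As a rough illustration of where ranking-style annotations typically end up, the sketch below shows a hypothetical ranked annotation record being expanded into pairwise preference data, the usual input format for an RLHF reward model. The schema, field names, and prompt are assumptions made for this example and are not Datasaur's actual data format.

```python
# Hypothetical example: a ranked annotation record expanded into
# (chosen, rejected) preference pairs for RLHF reward-model training.
# Field names are illustrative, not Datasaur's actual schema.

from itertools import combinations

# One prompt with several model responses, ranked by a human annotator
# (rank 1 = best).
annotation = {
    "prompt": "Explain what data labeling is in one sentence.",
    "responses": [
        {"text": "Data labeling is the process of tagging raw data with informative labels.", "rank": 1},
        {"text": "It is when you put labels on data.", "rank": 2},
        {"text": "Labeling means printing stickers for boxes.", "rank": 3},
    ],
}

def to_preference_pairs(record):
    """Expand a ranked list of responses into (chosen, rejected) pairs."""
    ordered = sorted(record["responses"], key=lambda r: r["rank"])
    pairs = []
    for better, worse in combinations(ordered, 2):
        pairs.append({
            "prompt": record["prompt"],
            "chosen": better["text"],
            "rejected": worse["text"],
        })
    return pairs

if __name__ == "__main__":
    for pair in to_preference_pairs(annotation):
        print(pair["chosen"][:40], ">", pair["rejected"][:40])
```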
In addition, the platform introduces a reviewer mode that lets data scientists assign multiple annotators to the same data, reducing subjective bias. The mode helps identify and resolve discrepancies among annotators on specific questions, with data scientists making the final decision.
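A reviewer step like this generally amounts to collecting each annotator's verdict per item and surfacing the items where they disagree. The snippet below is a minimal sketch of that idea, assuming simple per-item verdict labels; the identifiers and label values are invented for illustration and do not reflect Datasaur's implementation.

```python
# Hypothetical sketch of a reviewer pass: gather each annotator's verdict
# per item and flag items without unanimous agreement so a data scientist
# can make the final call. Names and labels are illustrative only.

from collections import Counter

labels = {
    "q1": {"annotator_a": "acceptable", "annotator_b": "acceptable", "annotator_c": "acceptable"},
    "q2": {"annotator_a": "acceptable", "annotator_b": "needs_revision", "annotator_c": "acceptable"},
}

def flag_discrepancies(items):
    """Return (item_id, vote tally) for items where annotators disagree."""
    flagged = []
    for item_id, verdicts in items.items():
        if len(set(verdicts.values())) > 1:
            flagged.append((item_id, Counter(verdicts.values())))
    return flagged

for item_id, tally in flag_discrepancies(labels):
    print(f"{item_id}: needs reviewer decision, votes = {dict(tally)}")
```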
Founded in 2019, Datasaur provides a data labeling platform designed to manage the entire data labeling workflow for natural language processing (NLP) applications.