Apple has developed ReALM (Reference Resolution as Language Modeling), a new AI system designed to improve how voice assistants interpret and respond to user commands.
The system leverages large language models to convert reference resolution, including the understanding of references to visual elements on a screen, into a pure language modeling problem.
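To make the idea concrete, here is a minimal illustrative sketch (not Apple's actual implementation; the entity fields, tags, and prompt format are assumptions) of how on-screen elements might be serialized into plain text so that an LLM can resolve a spoken reference as ordinary next-token prediction:

```python
# Illustrative sketch only: reference resolution reframed as language
# modeling by rendering on-screen entities as numbered text lines.
# Entity schema and prompt wording are hypothetical, not Apple's API.

def encode_screen(entities):
    """Render on-screen entities, ordered top-to-bottom then
    left-to-right, as numbered text lines an LLM can reason over."""
    ordered = sorted(entities, key=lambda e: (e["top"], e["left"]))
    return "\n".join(
        f'[{i}] {e["type"]}: "{e["text"]}"' for i, e in enumerate(ordered)
    )

def build_prompt(entities, user_request):
    """Combine the textual screen with the user's request into one
    prompt, turning reference resolution into a language task."""
    return (
        "Screen:\n" + encode_screen(entities) +
        f"\n\nRequest: {user_request}\n"
        "Answer with the index of the referenced entity."
    )

# Hypothetical screen: a link and a button with a phone number.
screen = [
    {"type": "button", "text": "Call 800-555-0199", "top": 120, "left": 40},
    {"type": "link",   "text": "pharmacy hours",    "top": 80,  "left": 40},
]
prompt = build_prompt(screen, "Call the number on the screen")
print(prompt)
```

An LLM given this prompt can answer "1" using language understanding alone, with no direct access to the rendered pixels.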
Apple claims that the system enables users to interact more effectively with digital assistants about current on-screen information without giving detailed instructions, benefiting drivers and users with disabilities.