OpenAI has launched a new multimodal GenAI model called "GPT-4o" (the "o" stands for "omni"). The model is available to ChatGPT Plus and Team users, with a rollout planned for Enterprise users.
Supporting 50 languages, GPT-4o integrates text and images, offering multimodal functionality, enhanced intelligence, and real-time responsiveness. It improves the ChatGPT user experience with an understanding of emotional nuance and enhanced vision capabilities for answering queries about images or screens.
As part of the launch, free users will gain capped access to GPT-4-level intelligence, web-integrated responses, and data analysis features, as well as the ability to chat about photos, upload files for assistance, and explore the GPT Store.
The company also announced a new macOS desktop app with integrated ChatGPT functionality.
Analyst QuickTake: GPT-4o is a large multimodal model that accepts image and text inputs and produces text outputs, reportedly exhibiting human-level performance on various professional and academic benchmarks. Features that were previously paywalled, such as memory, file and photo uploads, and web searches for timely questions, are now available to free users, bringing GPT-4o's improved performance and capabilities to a wider audience.