EDGE Insights
A tale in the making: Popular commercial applications for large AI

Large AI models can solve many challenging problems in a wide range of industries. They are capable of analyzing massive amounts of data and offer powerful insights. But even with this capacity, large AI models can be a challenge to commercialize. How do companies make such models viable? 
Our introductory Insight, Supercharging AI progress, provided an overview of the massive AI models released between 2020 and 2022, exploring their capabilities and limitations. This follow-up tracks early attempts to harness these models in commercial applications. 
The italicized introduction above was generated entirely by Copysmith, a writing assistant powered by OpenAI’s 175-billion-parameter model GPT-3. The text was generated using a single question-based prompt—how are large AI models commercialized? It offers a glimpse of what’s possible with AI content writing—by far the most popular commercial application among startups experimenting with these large models.
Source: Copysmith’s Blog Intro Generator
However, large AI models are still in the early stages of commercialization. At present, third-party access to large AI models created by Big Tech developers like Google, Meta, and Microsoft remains limited to waitlists, closed betas, research use cases, and in-house product teams. As such, this Insight focuses on the three large models that have come out ahead in terms of public use: OpenAI’s GPT-3, Codex, and DALL-E 2. Among these, GPT-3 stands out with the most traction in the startup space since its public release via API in June 2020. Currently, these models are being leveraged by new entrants and incumbents in two broad ways: 
  1. To develop new products and services, sometimes in combination with in-house AI and training data
  2. To add value to existing products and services through the introduction of new AI-powered features

Commercial applications 

Currently, there are as many as 10 product segments powered by large AI. However, there remains a large scope for experimentation and product innovation. GPT-3-powered content writing applications constitute the overwhelming majority of solutions, followed by products focused on customer engagement. Coding applications driven by Codex represent another promising area. 
There remains significant variation in how these models are leveraged for unique outcomes, even within a particular segment. Among the startups focused on GPT-3-powered content writing, the development of marketing copy is a clear area of focus. Yet, there are startups carving out nuanced ways of using this model for varied product outcomes. Resemble AI, for instance, is an AI-based voice generator that creates synthetic voice-overs for businesses and uses GPT-3 specifically to create contextually-relevant dialogue for these voice-overs. 
The possibilities beyond content writing are equally intriguing. For example, Keeper Tax uses GPT-3 to study bank statements and identify potential deductions and write-offs with a view to assisting independent contractors and freelancers during tax season. Meanwhile, Fable Studio uses GPT-3 in an AI character engine that generates interactive stories featuring story-driven virtual avatars.

Commercial applications for large AI

Experimental applications

While OpenAI’s GPT-3 and Codex have gained traction in commercial applications, the lab’s multimodal model DALL-E 2 remains in an experimental stage. The model appears to lend itself well to use cases in marketing and design, but it remains to be seen whether companies will leverage it to create monetizable new offerings or add value to existing products.

Recent experiments with DALL-E 2

Tech enablers supporting commercialization

1. Cloud APIs provide simplified access and workflows

Given the significant costs of training and inference for large AI models, cloud-based application programming interfaces (APIs) have proven integral for startups looking to build AI-powered products while offloading the burden of computation and infrastructure to Big Tech developers and AI labs. Provided that new entrants have a well-defined problem that fits into the capabilities and strengths of publicly-available models like GPT-3 and Codex, they are able to bypass the expensive and complex process of data gathering, labeling, model architecting, training, and fine-tuning. Instead, these players only need to integrate with OpenAI’s API and understand what problems the model can solve in order to prototype, iterate in alignment with internal success metrics, kickstart product development, and establish product-market fit for their offerings. 
Source: Cheng He, “GPT-3 as a Service, Implication for Jobs and Society,” Medium (Aug 2020)
The benefits are not limited to the startups building atop large AI models. When OpenAI began selling access to GPT-3 through its API, this was partly a means of offsetting the costs of training the model (an estimated USD 4.6 million at minimum). In addition to being a revenue source that helped cover costs, API access has also enabled OpenAI to advance general-purpose AI, make it usable, and explore its real-world applications. The 2021 launch of Microsoft’s Azure OpenAI Service further expanded business access to GPT-3 and its derivatives, offering enterprise-grade security, compliance, built-in AI to detect and mitigate harmful use, pay-as-you-go consumption, and regional availability.
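To illustrate how thin this integration layer can be, the sketch below calls the GPT-3-era OpenAI completions endpoint with a single prompt. The model name, sampling parameters, and prompt are illustrative assumptions, not any particular startup’s configuration; an actual product would tune these per use case.

```python
# Minimal sketch of integrating with the OpenAI completions API
# (GPT-3-era Python client). Model name and parameters are illustrative.
import os

def build_completion_request(prompt, model="text-davinci-002", max_tokens=64):
    """Assemble the payload for a single completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,  # mild randomness suits marketing-style copy
    }

# The actual network call only runs when an API key is configured.
if os.getenv("OPENAI_API_KEY"):
    import openai  # pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.Completion.create(
        **build_completion_request("Write a tagline for an AI copywriting tool.")
    )
    print(response["choices"][0]["text"].strip())
```

With this little plumbing in place, a startup’s real work shifts to prompt design, evaluation against internal success metrics, and (eventually) fine-tuning.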

2. Twitter as a distribution channel

Twitter has been the preferred distribution channel for startups looking to access large AI models and explore the commercial potential of products powered by them. For instance, the AI content writing application CopyAI began with a few draft projects developed by co-founders Paul Yacoubian and Chris Lu. Some of these early projects found traction through Twitter, gaining around 700 sign-ups in two days and serving as an impetus for monetization. Twitter has remained a critical means by which CopyAI stays in touch with its user base, through monthly updates on MRR, sign-ups, hires, usage, followers, tweet impressions, and more. Similarly, Twitter was the platform that enabled AI email assistant OthersideAI to first obtain access to GPT-3. During GPT-3’s private beta phase, OthersideAI’s founders tweeted a demo of the solution built using predecessor GPT-2 to attract the attention of the OpenAI team and secure access to GPT-3.

Demand drivers of AI-powered products


1. Push for personalized content 

Effective content curation—finding, organizing, and sharing relevant value-adding information with target audiences—is integral in the digital age but remains a challenge for marketers. In 2015, ecommerce analytics platform Glew estimated that around 75% of retailers that spent at least USD 5,000 on Facebook ads lost money on those ads, with an average ROI of around -66.7%. A 2018 survey by Rakuten Marketing revealed that companies were wasting around 26% of their budgets on inefficient ad channels and strategies. AI-powered marketing copy offers the promise of personalization to specific customer segments and individual customers, making it a game-changer for content curation. A 2021 survey by Phrasee found that 63% of marketers would consider investing in AI to generate and optimize ad copy. 
The demand for GPT-3-powered marketing copy generation startups bears witness to this trend. CopyAI, for instance, offers over 90 tools and templates to streamline content production and promises users the capability to write blogs 10x faster, with over 1,000,000 professionals and teams using its tools, including at Microsoft, eBay, Nestlé, and Ogilvy. Meanwhile, AI writing assistant Rytr claims over 2 million copywriters, marketers, and entrepreneurs on its platform, with users estimated to have saved over 10 million hours and more than USD 200 million in content writing costs.

2. Pursuit of automation

The current business environment is also characterized by the pursuit of automation. In particular, businesses are finding value in “small” automation supported by technologies like AI, ML, and RPA—automation in short sprints and focused areas to improve the productivity of individual processes. These technologies are especially valuable in environments where input and output are variable and require dynamic learning, like customer service centers. In a 2021 survey by Zapier, an estimated 94% of knowledge workers reported that they performed repetitive, time-consuming tasks in their role and around 90% agreed that automation had improved people’s lives in the workplace.
Meanwhile, around 88% of SMBs reported that automation enabled them to compete with larger players by allowing them to move faster, close leads quicker, spend less time on busywork, reduce errors, and offer improved customer service. The most common use cases for automated workflows included reducing manual data entry (38%), lead management (30%), document creation and organization (32%), and managing inventory (27%). Notably, GPT-3-powered startups have focused product development on comparable use cases, with a clear emphasis on content development, lead management, chatbots, productivity tools, and more. The rise of Codex-powered AI pair programmers and other coding assistants is also indicative of this drive for workflow automation.

Main players

In total, we have tracked 107 notable companies leveraging GPT-3 and Codex to either develop brand new products or enhance the functionality of existing products. The vast majority of companies tracked use GPT-3 (100 companies), while six use Codex. Most applications of DALL-E 2 remain experimental, with only one startup (Unstock) using the model to power a commercially available product. There are also a few Big Tech incumbents that have leveraged large AI models for in-house product development.
Note: This list excludes applications that do not appear to have a clear commercial focus. OpenAI has reported 300+ applications of GPT-3 and 70+ applications of Codex. However, some of these applications are for research, hackathons, experiments by individuals, and other pre-commercialization use cases.
Source: SPEEDA Edge based on multiple sources
The majority of identified companies (~74) appear to be startups focused on building new products and services using these models. Some of these players may have started with in-house AI/ML but have gone on to use only GPT-3 or Codex as the core technology powering their platforms (e.g., Latitude, Machinet). A few players continue to leverage a combination of GPT-3 and in-house proprietary solutions to power their products (e.g., Sudowrite, Jenni). Among players in this category of disruptors with new products, CopyAI stands out as the highest-funded startup, with USD 13.9 million in total funding. 
Meanwhile, at least 33 companies profiled in this Insight leverage the models to enhance the functionality of existing products. These players were founded before GPT-3’s release and typically use large AI models to power specific features or functionalities.
Finally, Big Tech incumbents have also used their own large AI models for in-house product innovation and to develop new offerings (e.g., Microsoft, Google, Naver). However, there is limited information on precisely how these players are leveraging their large AI models to create commercially viable products. Microsoft’s use of Codex for its AI pair programming tool GitHub Copilot is the most well-known example, but it is less clear how other Big Tech players are commercializing their models. For instance, Google has reportedly leveraged its large language model LaMDA to introduce a new app called AI Test Kitchen. However, this solution is largely for the purpose of third-party testing and experimenting with potential chatbot applications, which makes it similar to the OpenAI playground. 
See Appendices 1–3 for the full list of noteworthy players in this space and Appendix 4 for snapshots of the solutions in action.

Success stories 

This section focuses on companies that have achieved early success with using large AI models like GPT-3 and Codex in their products. These players have generally fared well on success markers like fundraises, number of users, successful trial runs, revenue growth, and news mentions. 
In general, successful players prioritize team development. Despite the lure of simpler product development workflows through access to OpenAI’s API, startups are still focused on hiring in-house engineers to develop scalable capacity for new product features. From a technical standpoint, successful players also recognize the need to fine-tune these large AI models to better suit real-world applications. For instance, Mutiny uses its own in-house dataset of successfully personalized website headlines to fine-tune GPT-3. 
Most startups in the space are developed by founders acquainted with industry pain points, whether in marketing, writing, coding, gaming, or workplace productivity. As such, products are differentiated around these pain points. For instance, Sudowrite focuses on creative writing applications, while Scalenut distinguishes itself by handling end-to-end marketing content development. 
From a distribution perspective, successful players are also focused on creating accessible workflows through mobile-first, app-based approaches, Chrome extensions, and more. They also have a track record of leveraging social media like Twitter to gain traction and build affinity among early adopters—CopyAI is a clear case in point. 
Finally, the first-mover advantage may prove critical for players that can find novel applications for large AI. Latitude is a startup leading the development of AI-generated text games. In general, the use of AI is expected to lower the cost of developing AAA games (i.e., high budget, high profile video games) from over USD 100 million to less than USD 100,000, creating a significant market. As one of the early players in this segment, Latitude would be well-positioned to capitalize on this market.

Applications that missed the mark

With large AI models, the journey to commercialization can be a tricky one. For one, startups need to experiment and iterate with these models to discover their strengths and limitations in real-world applications. Moreover, companies cannot merely look to build vanilla integration layers atop models like GPT-3 without seeding these models with customized training data and refining them for specific use cases. In some cases, startups can also encounter challenges generating the volume of data required to adequately train large AI models for specialized commercial applications. 
  • Healthcare

Nabla, a startup offering a health tech stack for patient engagement, experimented with GPT-3 as a medical assistant application. The results of this experiment indicated clear limitations for the use of GPT-3-powered applications in healthcare settings. Initial tests indicated that the model worked well for basic admin tasks like appointment booking but fell short in terms of understanding logical constraints and holding customer requirements in “memory.” 
Source: Nabla
Even after the company seeded GPT-3 with internal data (a four-page standard benefits table indicating USD 10 copay for an x-ray and USD 20 for an MRI exam), the model proved incapable of deductive reasoning and failed to yield an accurate outcome.
Source: Nabla
The model also proved dangerously limited when tested by Nabla in a mental health intervention scenario. 
Source: Nabla
Nabla’s experiments with GPT-3 indicated clear limitations in the use of the model for patient-facing healthcare interactions, as it stands. The company pointed out that the model’s training data meant that it lacked “the scientific and medical expertise that would make it useful for medical documentation, diagnosis support, treatment recommendation, or any medical Q&A.” 
Experiments like this indicate the requirement for further research, testing, and fine-tuning, with specialized datasets to make large AI models viable for specific use cases. Nabla’s experiences may be especially insightful for other companies exploring the use of GPT-3 to power healthcare interventions. For example, Koko is a peer-support platform that provides crowdsourced cognitive therapy to nearly two million people (mostly adolescents). The company is currently experimenting with augmenting its human-based P2P mental health intervention by using GPT-3 to generate bot-written responses to users as they wait for peers to respond. Nabla’s experiences with leveraging GPT-3 for mental health use cases should be a cautionary tale, driving startups to fine-tune these models with accurate and relevant data.
  • Legal research

While GPT-3 continues to be harnessed by startups like aiLawDocs and Casetext for legal research applications, some players have faced serious challenges training these models. ROSS Intelligence was founded in 2014 to build AI-driven products that could augment lawyers’ cognitive abilities. After GPT-3 was released, the company began using the model to improve search functionality within its legal research platform. In December 2020, however, ROSS was forced to shut down after Thomson Reuters sued the company for allegedly working with a third party to scrape Westlaw’s copyrighted material with a bot to train the AI featured on its platform. Despite having raised around USD 13.7 million in funding by this point, ROSS Intelligence wound down its operations due to the costs of the lawsuit and subsequent challenges with fundraising. 
ROSS Intelligence went on to file a complaint that Thomson Reuters was using anti-competitive behavior to maintain Westlaw’s dominance in legal research. Beyond the issue of anti-competitive practices, the company’s experience is indicative of a genuine challenge that startups are likely to face when adapting large AI for specific real-world applications: the requirement for vast amounts of relevant training data. Legal research startups have gained traction in use cases where GPT-3 is already fairly well-established (like parsing text, semantic search, text sentiment, complex classification, and audience-targeted summarization). For instance, aiLawDocs facilitates trial preparation by using GPT-3 to generate direct and cross-examinations based on deposition transcripts that lawyers upload directly to the platform.

Cost of access to GPT-3

GPT-3, the most popular of OpenAI’s large AI systems for commercial applications, is not a monolithic model. GPT-3 actually comprises a set of four base models (Ada, Babbage, Curie, and Davinci), each with different capabilities and price points. Users can start experimenting with the models for free, using USD 18 in free credit during the first three months. However, subsequent access to these models must be obtained on a pay-per-use basis, with pricing assigned per 1,000 tokens (tokens are pieces of words, with 1,000 tokens being around 750 words). 
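Under the approximation above (1,000 tokens is roughly 750 words), a back-of-the-envelope cost estimate is straightforward. The sketch below assumes the post-September-2022 per-1,000-token USD prices for the four base models; treat the figures as indicative, since OpenAI’s pricing has changed over time.

```python
# Back-of-the-envelope GPT-3 cost estimator. Assumes 1,000 tokens ~ 750
# words, and the post-September-2022 per-1,000-token USD prices for the
# four base models (indicative figures; pricing changes over time).
PRICE_PER_1K_TOKENS = {
    "ada": 0.0004,
    "babbage": 0.0005,
    "curie": 0.0020,
    "davinci": 0.0200,
}

def words_to_tokens(words):
    """Approximate token count from a word count (1,000 tokens ~ 750 words)."""
    return round(words * 1000 / 750)

def estimate_cost_usd(words, model="davinci"):
    """Estimated cost of processing `words` words with a given base model."""
    return words_to_tokens(words) / 1000 * PRICE_PER_1K_TOKENS[model]

print(round(estimate_cost_usd(750), 2))  # 750 words ~ 1,000 Davinci tokens → 0.02
```

The spread in the price table matters commercially: the same workload costs roughly 50x less on Ada than on Davinci, which is why model selection is an early product decision.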
In the early stages of exploration, OpenAI recommends that companies experiment with the Davinci model to figure out what the API is capable of doing. Davinci is the most sophisticated of the GPT-3 models and has been trained on the most recent, relevant data. After companies have a clearer idea of what can be accomplished using GPT-3, they can stick with Davinci (if cost and speed are not constraints) or move to Curie or another model and optimize around its capabilities.

Comparison of GPT-3 base models

Notably, OpenAI also made its API more affordable in September 2022 after improvements to make the models run more efficiently. 

Costs of access to GPT-3 base models

From a practical standpoint, these pricing changes translate into serious cost savings. As a point of reference, the collected works of Shakespeare run to about 900,000 words, or roughly 1.2 million tokens. Prior to the price decrease on September 1, 2022, a user would have paid USD 120 for 2 million tokens, comfortably enough to process the complete works. The same number of tokens now costs only USD 40, a significant saving.
In addition to leasing access to the base model through the API, companies can also opt to create their own custom models by fine-tuning OpenAI’s base models using their own training data. After the model has been fine-tuned, companies are only billed for tokens used in requests to the model. This is likely the option used by startups like Mutiny that leverage in-house data for training GPT-3.
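As a sketch of what that fine-tuning input looks like: OpenAI’s fine-tuning endpoint accepts training data as JSONL, one prompt/completion pair per line. The example pairs below are invented purely for illustration.

```python
# Sketch of preparing fine-tuning data in the JSONL format used by
# OpenAI's fine-tuning endpoint: one {"prompt": ..., "completion": ...}
# object per line. The example pairs are invented for illustration.
import json

examples = [
    {"prompt": "Product: AI writing assistant ->",
     "completion": " Write better copy in half the time."},
    {"prompt": "Product: expense tracker for freelancers ->",
     "completion": " Never miss a deduction again."},
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The resulting file would then be uploaded and a fine-tuning job launched (in the GPT-3-era tooling, via the `openai api fine_tunes.create` CLI command); once trained, requests to the custom model are billed at the fine-tuned usage rate.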

Costs of fine-tuning GPT-3 models

Since the technology is in its early stages, OpenAI has also introduced usage quotas to ensure responsible roll-outs. When users sign up, they are granted an initial spend limit or quota. This limit is increased gradually, as individuals and companies build a track record with their applications.

Business models

Since access to OpenAI’s models is currently on a pay-per-use basis, most startups in the space have a tiered pricing structure based on usage. Some players also offer a freemium option geared toward individual customers. Others like Flowrite and OthersideAI are still to emerge from private betas and monetize their offerings. 
Despite the common thread of usage-based pricing tiers, there are clear variations in pricing among similar players. CopyAI, for instance, charges a premium relative to Copysmith for processing 40,000 words. However, both players could expect roughly the same costs of accessing GPT-3 to process this number of words: around USD 1 if using the base Davinci model and around USD 6.40 if using a fine-tuned version of the model. Pricing differences between players may be partly attributable to the specific feature set on offer, but also to the level of training and fine-tuning involved in each solution.
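The rough arithmetic behind those access-cost figures can be reproduced as follows, assuming the 1,000-tokens-per-750-words approximation and per-1,000-token rates of USD 0.02 for base Davinci and USD 0.12 for fine-tuned Davinci usage:

```python
# Reproducing the access-cost comparison for 40,000 words of content,
# assuming 1,000 tokens ~ 750 words, USD 0.02 per 1,000 tokens for base
# Davinci, and USD 0.12 per 1,000 tokens for fine-tuned Davinci usage.
WORDS = 40_000
tokens = WORDS * 1000 / 750                     # ~53,333 tokens

base_cost = tokens / 1000 * 0.02                # base Davinci
fine_tuned_cost = tokens / 1000 * 0.12          # fine-tuned Davinci

print(round(base_cost, 2), round(fine_tuned_cost, 2))  # → 1.07 6.4
```

In other words, the 6x usage premium for a fine-tuned Davinci model falls entirely on the startup’s cost base, which helps explain why otherwise similar products can price so differently.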

Representative business models

What’s next?

Continued experimentation around viable commercial applications: With growing interest in large AI models, we can expect to see more partnerships like the one between OpenAI and AI accelerator Nextgrid in 2021, which focused on unearthing further GPT-3 applications through hackathons targeting a community of makers (in this case, Deep Learning Labs). Latitude's text-based game AI Dungeon also started out as one of Co-founder and CEO Nick Walton’s hackathon projects. 
While hackathons and experimentation on the OpenAI playground will continue to be a mainstay, new entrants will increasingly discover that vanilla integrations atop GPT-3 are insufficient and that applications require fine-tuning to be relevant for real-world uses. As OthersideAI’s CEO and Co-founder Matt Shumer put it, “GPT-3 makes an amazing demo, but putting it in a product is another story.” As such, companies will also increasingly look to fine-tune these models as a means of achieving product-market fit.
As barriers to entry go down, competition will intensify in some segments: In September 2022, OpenAI drastically reduced the costs of API access for GPT-3. While this will make it easier for new entrants to experiment with the model and go to market faster with AI-powered products, it will also become progressively important for startups to successfully differentiate their offerings. With over 60 GPT-3-powered content writing tools already on the market, players operating in this segment will be pushed to further refine their value proposition and improve product-market fit.
First-mover advantages may be applicable at several levels: Currently, OpenAI has come out ahead of other AI labs and Big Tech companies in terms of making its models available for public access and commercialization. By contrast, the large models created by Google and Meta are not released as open-source and are largely viewed as significant IP investments, likely targeted for in-house product innovation. Even so, as startups and incumbents continue to work with GPT-3 and its derivatives, these collective research efforts stand to benefit the development of OpenAI’s API. In this sense, OpenAI has a clear first-mover advantage. The same first-mover advantages will also apply to startups using its models for unique applications (e.g., Latitude and AI-generated text games).
Adjacent market opportunities may emerge: As startup activity continues to proliferate around models like GPT-3, Codex, and DALL-E 2, adjacent market opportunities may very well emerge. For instance, there appears to be an emerging marketplace for startups focused on “prompt engineering” (i.e., figuring out the right text prompts to yield the best results with large AI models). PromptBase is one of the earliest players to capitalize on this trend. The company offers a range of prompts tested on GPT-3 and DALL-E 2, allowing users to sell strings of words that net predictable results. Prompt engineering is a thriving area because “prompts” (or the instructions given to AI models) can be quite nuanced due to the way these models make sense of patterns in images and text. For example, the prompt “A very beautiful painting of a mountain next to a waterfall” returns poorer results with DALL-E 2 than “A very very very beautiful painting of a mountain next to a waterfall.” This is because the system attaches greater value to the word “very.” It is quite possible to conceive of startups focused purely on adjacent market opportunities including prompt engineering, fine-tuning large AI models, consultancy services focused on achieving product-market fit with AI-powered products, and more.

Appendix 1: Prominent startups using large AI to build new products

Appendix 2: Prominent companies using large AI to enhance existing products 

Appendix 3: Big tech incumbents using large AI for new offerings

Appendix 4: Solutions in action

Not all GPT-3-powered content writing assistants are alike. CopyAI, for instance, leverages the model to offer 90+ copywriting tools that help marketers generate a wide range of marketing copy, from digital ad copy and website copy to blogs, sales copy, emails, and more.

Snapshot: CopyAI’s blog generation

 Source: CopyAI
Using the above prompt, CopyAI generated a series of potential blog paragraphs (see sample below). While there is a tendency toward repetition, the platform generated accurate and usable paragraphs expanding upon the initial prompt.
Source: CopyAI
CopyAI’s marketing copy offering stands in contrast to a platform like Sudowrite, which uses GPT-3 to power creative writing and even suggests descriptive ways of explaining a term using the five senses. When we ask Sudowrite to describe “commercialization,” what emerges is a series of entertaining epithets.   

Snapshot: Sudowrite’s creative writing 

Source: Sudowrite
Email assistants represent another popular use case among GPT-3 content writing applications. In these cases, users are able to draft key points in bullets and have the AI generate ready-to-send emails that only require minor tweaks.

Snapshot: OthersideAI’s email generation (prompt)

Source: Otherside AI

Snapshot: OthersideAI’s email generation (solution)

Source: Otherside AI
The same core text-generation capabilities of GPT-3 have been harnessed by startups like Latitude to create text-based games that leverage large AI to generate infinite storylines. When provided with a simple prompt (e.g., “the stranger exists in the ether”), the platform expands and embellishes upon it to take the story forward.

Snapshot: Latitude’s AI Dungeon game

Source: Latitude AI Dungeon
Coding solutions have been another area of interest, particularly since the October 2021 launch of GitHub Copilot based on OpenAI’s Codex. In most cases, startups use either GPT-3 or Codex to power a specific feature of their broader platform. For instance, Warp offers a Rust-based coding terminal with an AI command search feature powered by Codex. The feature allows programmers to search for difficult commands using natural language inputs, without having to turn to Google or read through Stack Overflow forums.

Snapshot: Warp’s AI Command Search

Source: Warp
