Why Startups Will Be At The Forefront of GPT-3 Adoption – Four Trends For 2021

Originally published on TechCrunch.com. (You can read more recent thoughts on Foundation Models.)

Startups are leading the way

The introduction of GPT-3 in 2020 was a tipping point for artificial intelligence. In 2021, this technology will power the launch of a thousand new startups and applications. GPT-3 and similar models have put the power of AI into the hands of those looking to experiment – and the results have been extraordinary.

Trained on hundreds of billions of words, GPT-3 is a 175-billion-parameter transformer model – the third such model released by OpenAI. GPT-3 is remarkable in its ability to generate human-like text and responses – in some respects, it’s eerie. When prompted by a user with text, GPT-3 can return coherent and topical emails, tweets, trivia, and much more.

Suddenly, authoring emails, customer interactions, social media exchanges, and even news stories can be automated – at least in part. While large companies are pondering the pitfalls and risks of generating text (remember Microsoft’s disastrous Tay bot?), startups have already begun sweeping in with novel applications – and they will continue to lead the charge in transformer-based innovation.

OpenAI researchers first released the paper introducing GPT-3 in May 2020 – and what started out as some nifty use cases on Twitter has quickly become a hotbed of startup activity. Companies have been formed on top of GPT-3, using the model to generate emails and marketing copy, to create interactive nutrition trackers or chatbots, and more. Let OthersideAI take a first pass at writing your emails, for instance, or turn to Broca or Snazzy for your ad copy and campaign content. Other young companies are harnessing the API to accelerate their existing efforts, augmenting their technical teams’ capabilities with the power of 175 billion parameters and bringing otherwise difficult products to market far faster, and with far less training data, than previously possible. With some clever prompt engineering (pairing an instruction to the model with a sample ideal output to guide it), these companies leverage the underlying GPT-3 system to improve or extend an existing application’s capabilities. Sure, a text expander can be a useful tool for shorthand notation – but powered by GPT-3, that shorthand can be transformed into a product that generates contextually aware emails in your own style of writing.
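
To make that concrete, here is a minimal sketch of the prompt-engineering pattern described above, written against the OpenAI completion API as it existed during the beta. The shorthand-to-email prompt, the placeholder key, and the parameter values are illustrative assumptions, not any particular startup’s implementation.

```python
# A minimal prompt-engineering sketch against the GPT-3 beta API.
# Assumes the `openai` Python package and a beta API key; the prompt,
# engine choice, and sampling parameters are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# The prompt pairs an instruction with one worked example, so the model
# can infer the pattern before completing the new shorthand.
prompt = (
    "Expand the shorthand into a friendly, professional email.\n\n"
    "Shorthand: meeting moved thurs 3pm, pls send agenda\n"
    "Email: Hi team, our meeting has moved to Thursday at 3pm. "
    "Please send the agenda beforehand. Thanks!\n\n"
    "Shorthand: thx for intro, free next wk for coffee\n"
    "Email:"
)

response = openai.Completion.create(
    engine="davinci",    # the largest GPT-3 engine in the beta
    prompt=prompt,
    max_tokens=120,
    temperature=0.7,     # some variety while staying on topic
    stop=["\n\n"],       # stop at the end of the generated email
)

print(response.choices[0].text.strip())
```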

As early-stage technology investors, we are inspired to see AI broadly, and natural language processing specifically, become more accessible via the next generation of large-scale transformer models like GPT-3. We expect they will unlock new use cases and capabilities we have yet to even contemplate.

It’s worth noting that, while impressive, GPT-3 is far from perfect. Access is expensive (especially for the most robust version). Reliability is a well-known issue, and the model is often criticized for generating ridiculous, nonsensical, and repetitive statements. Users need to become adept at prompt engineering to steer and refine model outputs. And more broadly, the threats around fake news and documents, bias, and more are real – we as an industry, and OpenAI as an organization, have big questions ahead to address.

So, what will become of GPT-3 in 2021?

What OpenAI – and crucially, the beta testers with access to GPT-3 and other models – are able to accomplish continues to surprise us and, in many cases, unexpectedly delight us.

Here are our key predictions for GPT-3 in the coming year:

  1. Transformer models will become more accessible: While it’s true that 175 billion is an exceptional number of parameters, we expect a slew of competing models to emerge that will help drive down the cost of access. Researchers at Google Brain have announced a 1.6-trillion-parameter language model (the Switch Transformer), and a grassroots collective of researchers (EleutherAI) is working together on GPT-Neo, an open-source alternative. These alternatives, alongside GPT-3 itself, will provide users with improved output, increased reliability and speed, and, most likely, more affordable access to large-scale transformer models. As this happens, startups will leverage these models to rapidly and iteratively develop new applications.
  2. Text will become the command line: As new applications on top of transformer models continue to emerge, they will primarily center on text as the core input that translates into a variety of outputs. Historically, we have spent time learning new languages (spoken and coded), and we think next-generation transformer models will begin to act as a ‘universal translator’ of sorts – putting text front and center and bringing the power to build new applications to the ‘codeless’ among us. We think this will empower a whole new generation of creators, with trillions of parameters at their fingertips, in an entirely low-code/no-code way.
  3. Specialized “models as a service” will emerge for more specific application areas: Large models like GPT-3 that are pre-trained on vast datasets reduce the need for task-specific fine-tuning in general use cases. But this presents an opportunity both for “prompt engineering” on top of the general models and for more specialized models that are ready to deploy. We are already seeing examples where prompt engineering on top of GPT-3 – which requires very little user-specific training data – generates valuable and contextually relevant output. We expect to see more of these types of applications emerge, particularly those built on specialized models such as the ones shared through Hugging Face (see the code sketch after this list).
  4. Data will start becoming a differentiator: As models like GPT-3 become more widely available, a critical component of differentiation in final product output will be the proprietary datasets used to adapt and steer these “public” models. Companies that invest early in building high-quality, proprietary datasets will establish a strong competitive moat – and those building on GPT-3 will be able to focus on finding and curating that data.
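
As a sketch of the “specialized models” idea in point 3: ready-to-deploy models can be pulled from the Hugging Face hub in a few lines of code. The model name and prompt below are illustrative assumptions; GPT-Neo is the EleutherAI model mentioned in point 1.

```python
# Minimal sketch: pulling a ready-made generative model from the
# Hugging Face hub via the `transformers` pipeline API.
# Model choice and prompt are illustrative only.
from transformers import pipeline

# Downloads pretrained weights on first use and wraps them in a
# text-generation pipeline.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

result = generator(
    "Write a one-line product tagline for a nutrition-tracking chatbot:",
    max_length=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```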

As evidenced by GPT-3 and emerging competitive models, modern, deeper NLP is a breakthrough technology. The new generation of transformer language models is unlocking new use cases by the day and redefining, in mere months, the standards by which we evaluate their capabilities. Powered by advances in deep learning and by open-source sharing of models and datasets, natural language processing capabilities continue to accelerate – promising an exciting year ahead for AI startups broadly and emerging NLP companies specifically. Companies and organizations with substantial resources will keep investing and innovating at the “transformer” infrastructure level. And venture investors are paying attention, primarily at the application level.

Authoring text has always rested exclusively within the domain of humans. While we are not suggesting in the slightest that NYT journalists and best-selling authors will be replaced, we foresee the authoring of mass communications becoming increasingly automated with technology like GPT-3.

