A Summary of this Article:
- The article examines the competition between OpenAI and Google in the emerging field of generative AI technology.
- It highlights the rise of open-source models that pose a challenge to proprietary models developed by these tech giants.
- The article explores the potential dominance of infrastructure vendors and the uncertainty surrounding the long-term competitive defensibility of app companies and foundation model providers in the generative AI landscape.
***
This month there was a widely read article by Dylan Patel and Afzal Ahmad titled "Google: We Have No Moat, and Neither Does OpenAI."
The article, published at SemiAnalysis after first circulating on a Discord server, is supposedly a leaked internal document from Google. It's not clear if that's true, but it has been widely discussed. And, as "digital meets moats" is my area, I thought I would give my take.
The basic argument of the article is:
- Google and OpenAI have been in a very expensive arms race to build and rapidly advance large language models (LLMs). That’s GPT and Bard.
- But in the past 3-4 weeks, a series of small LLM projects, powered by open source, have been announced that appear to replicate much of the capabilities of these giant LLMs.
- The authors believe that open source LLMs will end up beating proprietary LLMs (such as OpenAI's GPT and Google's Bard).
There’s an interesting strategy question at the center of generative AI.
- Who, if anyone, is going to capture the lion’s share of the value?
- What will be the dominant business model?
According to this article, it won’t be the big proprietary LLMs. I’ll summarize their argument. Then I’ll give you the current position of a16z. And then I’ll give you my prediction (which is different).
Leakers: Google and OpenAI Don’t Have Moats
From the mentioned article:
“We aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.”
“I’m talking, of course, about open source. Plainly put, they are lapping us…While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”
The open-source movement for LLMs resulted from a leaked foundation model from Facebook (called LLaMA). This happened just weeks ago, and independent developers have been making astonishing progress using it. According to the authors, “the barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”
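To make that concrete, here is a minimal sketch of the kind of lightweight fine-tuning the memo is describing, using low-rank adaptation (LoRA) with the Hugging Face transformers and peft libraries. The memo doesn't name a specific toolchain; the base model, dataset, and hyperparameters below are placeholders chosen for illustration, not details from the leaked document.

```python
# A minimal sketch of LoRA fine-tuning: the cheap, single-machine
# experimentation the leaked memo is pointing at. Model name, dataset,
# and hyperparameters are placeholders; any small open LLM would do.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "openlm-research/open_llama_3b"  # placeholder open model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA trains a small set of adapter weights instead of all 3B+ parameters,
# which is why it fits on a single consumer GPU ("a beefy laptop").
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params

# Tokenize a small instruction-style dataset (placeholder, simplified:
# only the response text is used here).
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")
def tokenize(batch):
    return tokenizer(batch["output"], truncation=True, max_length=512)
tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # the adapter itself is only a few megabytes
```

The point of the sketch is the scale: only the small adapter matrices are trained, so a meaningful experiment fits in an evening on one machine, which is exactly the shift in the barrier to entry the authors are describing.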
And this is exactly what has already happened in open-source image generation. Stable Diffusion emerged as an open-source alternative to OpenAI’s proprietary DALL-E. The authors note that many are calling this the “Stable Diffusion moment” for LLMs.
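For comparison, this is roughly what the "Stable Diffusion moment" looks like in practice: the open-source weights can be downloaded and run locally with the diffusers library, with no API key or proprietary service involved. The checkpoint name below is one of the publicly released ones and is only an example.

```python
# Minimal sketch: running open-source Stable Diffusion weights locally with
# the diffusers library. No proprietary API is involved, which is the point
# of the comparison with DALL-E.
import torch
from diffusers import StableDiffusionPipeline

# Publicly released checkpoint (example; any Stable Diffusion weights work).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a single consumer GPU is enough for inference

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```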
Based on all of this, they conclude that “large models aren’t more capable in the long run if we can iterate faster on small models”. So directly competing with open source is a losing proposition.
Specifically, they say:
- “We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.”
- “People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.”
- “Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.”
Now, I disagree with the main point and conclusion of this article. I think the authors are deeply technical in their expertise, but they are not experts in how you build and maintain competitive strength. I’ll give you my take shortly. But first, let me point to Martin Casado at a16z and his take on this same question.
a16z: It’s Still the Early Stages of a New Generative AI Technology Stack
I am a fan of Martin Casado at a16z. He does great analysis, and he has access to tons of early-stage companies and their data (which I am jealous of). He (and others) recently published an article titled “Who Owns the Generative AI Platform?”
In it, he argues that a new generative AI technology stack is emerging. In its early days, that stack breaks into three layers: applications, foundation models, and infrastructure.
It’s a useful framing, and it makes a couple of nice distinctions in the new technology architecture:
- There are foundation models, both proprietary and open source (which the Google article was mostly about). These include LLMs and image-generation models, and they are evolving very rapidly in their capabilities.
- There are tons of apps, both end-to-end and built on others’ foundation models. They are showing up as features in established programs. They are showing up in browsers and as mobile apps. This is a pretty chaotic space right now.
- There is the infrastructure and tool set everyone is using for all of this. Think cloud services and lots of Nvidia chips.
The authors ask: “Where in this market will value accrue?”
That is basically asking where economic value is going to be created and captured, which means you should break it down into:
- Growth and/or market size
- Attractive unit economics
- Competitive defensibility
You put those together and you get economic value. It is similar to the framework used by Hamilton Helmer in 7 Powers.
Martin says he’s observed the following in the companies he deals with in Silicon Valley:
- “Infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack.”
- “Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins.”
- “Most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale.”
He says: “So far, we’ve had a hard time finding structural defensibility anywhere in the stack, outside of traditional moats for incumbents.”
So, outside of infrastructure vendors, they are saying they don’t know yet. And for foundation models (both LLMs and image generation models), their points are interesting.
“The revenue associated with these companies is still relatively small compared to the usage and buzz. In image generation, Stable Diffusion has seen explosive community growth, supported by an ecosystem of user interfaces, hosted offerings, and fine-tuning methods. But Stability gives their major checkpoints away for free as a core tenet of their business. In natural language models, OpenAI dominates with GPT-3/3.5 and ChatGPT. But relatively few killer apps built on OpenAI exist so far, and prices have already dropped once.”
Like the Google leakers, they also mention open source as a countervailing force against proprietary foundation models.
“Models released as open source can be hosted by anyone, including outside companies that don’t bear the costs associated with large-scale model training (up to tens or hundreds of millions of dollars). And it’s not clear if any closed-source models can maintain their edge indefinitely. For example, we’re starting to see LLMs built by companies like Anthropic, Cohere, and Character.ai come closer to OpenAI levels of performance, trained on similar datasets (i.e., the internet) and with similar model architectures. The example of Stable Diffusion suggests that if open-source models reach a sufficient level of performance and community support, then proprietary alternatives may find it hard to compete.”
“There don’t appear, today, to be any systemic moats in generative AI. As a first-order approximation, applications lack strong product differentiation because they use similar models; models face unclear long-term differentiation because they are trained on similar datasets with similar architectures; cloud providers lack deep technical differentiation because they run the same GPUs; and even the hardware companies manufacture their chips at the same fabs.”
“There are, of course, the standard moats: scale moats (“I have or can raise more money than you!”), supply-chain moats (“I have the GPUs, you don’t!”), ecosystem moats (“Everyone uses my software already!”), algorithmic moats (“We’re more clever than you!”), distribution moats (“I already have a sales team and more customers than you!”) and data pipeline moats (“I’ve crawled more of the internet than you!”). But none of these moats tend to be durable over the long term. And it’s too early to tell if strong, direct network effects are taking hold in any layer of the stack.”
They conclude: “Based on the available data, it’s just not clear if there will be a long-term, winner-take-all dynamic in generative AI.”
***
And that is really the key question in all this. What is the long-term competitive defensibility for the app companies and foundation model providers?
That’s the question I am trying to answer. Which I’ll do in Part 2. 😊
Cheers, jeff
—–
Related articles:
- AutoGPT and Other Tech I Am Super Excited About (Tech Strategy – Podcast 162)
- AutoGPT: The Rise of Digital Agents and Non-Human Platforms & Business Models (Tech Strategy – Podcast 163)
- The Winners and Losers in ChatGPT (Tech Strategy – Daily Article)
- Why ChatGPT and Generative AI Are a Mortal Threat to Disney, Netflix and Most Hollywood Studios (Tech Strategy – Podcast 150)
From the Concept Library, concepts for this article are:
- GPT and Generative AI
From the Company Library, companies for this article are:
- OpenAI / GPT / DALL-E
- Google / Bard
Photo by Andrew Neel on Unsplash
——–
I write, speak and consult about how to win (and not lose) in digital strategy and transformation.
I am the founder of TechMoat Consulting, a boutique consulting firm that helps retailers, brands, and technology companies exploit digital change to grow faster, innovate better and build digital moats. Get in touch here.
My book series Moats and Marathons is a one-of-a-kind framework for building and measuring competitive advantages in digital businesses.
This content (articles, podcasts, website info) is not investment, legal or tax advice. The information and opinions from me and any guests may be incorrect. The numbers and information may be wrong. The views expressed may no longer be relevant or accurate. This is not investment advice. Investing is risky. Do your own research.