10 AI Predictions For 2024
1. Nvidia will dramatically ramp up its efforts to become a cloud provider.
Most organizations do not pay Nvidia directly for GPUs. Rather, they access GPUs via cloud providers like Amazon Web Services, Microsoft Azure and Google Cloud Platform, which in turn buy chips in bulk from Nvidia.
But Amazon, Microsoft and Google—Nvidia’s largest customers—are fast becoming its competitors. Recognizing how much value in AI today accrues to the silicon layer (for evidence of this, look no further than Nvidia’s stock price), the major cloud providers are all investing heavily to develop their own homegrown AI chips, which will compete directly with Nvidia’s GPUs.
With the cloud providers looking to move down the technology stack to the silicon layer to capture more value, don’t be surprised to see Nvidia move in the opposite direction: offering its own cloud services and operating its own data centers to reduce its traditional reliance on the cloud companies for distribution.
Nvidia has already begun exploring this path, rolling out a new cloud service called DGX Cloud earlier this year. We predict that Nvidia will meaningfully ramp up this strategy next year.
This could entail Nvidia standing up its own data centers (DGX Cloud is currently housed within other cloud providers’ physical infrastructure); it could even entail Nvidia acquiring an upstart cloud provider like CoreWeave, with which it already partners closely, as a way to vertically integrate. One way or another, expect the relationship between Nvidia and the big cloud providers to get more complicated as we move into 2024.
2. Stability AI will shut down.
It is one of the AI world’s worst-kept secrets: once-high-flying startup Stability AI has been a slow-motion trainwreck for much of 2023.
Stability is hemorrhaging talent. Departures in recent months include the company’s Chief Operating Officer, Chief People Officer, VP Engineering, VP Product, VP Applied Machine Learning, VP Comms, Head of Research, Head of Audio and General Counsel.
The two firms that led Stability’s high-profile $100 million financing round last year, Coatue and Lightspeed, have reportedly both stepped off the company’s board in recent months amid disputes with Stability CEO Emad Mostaque. The company tried and failed earlier this year to raise additional funds at a $4 billion valuation.
Next year, we predict the beleaguered company will buckle under the mounting pressure and shut down altogether.
Following pressure from investors, Stability has reportedly begun looking for an acquirer, but so far has found little interest.
One thing that Stability has going in its favor: the company recently raised $50 million from Intel, a cash infusion that will extend its runway. For Intel’s part, the investment seems to reflect a pressing desire to get a high-profile customer to commit to its new AI chips as it seeks to gain ground against competitor Nvidia.
But Stability has a notoriously high burn rate: at the time of the Intel investment in October, Stability was reportedly spending $8 million a month while bringing in a small fraction of that in revenue. At that rate, the $50 million investment won’t last through the end of 2024.
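The back-of-the-envelope math is stark. Here is a minimal sketch, where the revenue figure is purely an illustrative assumption (only “a small fraction” of spend has been reported):

```python
# Back-of-the-envelope runway estimate. The $50M infusion and ~$8M/month
# spend are as reported; monthly revenue is an illustrative assumption.
cash_infusion = 50_000_000     # Intel investment, October 2023
monthly_spend = 8_000_000      # reported monthly burn
monthly_revenue = 1_000_000    # assumption: "a small fraction" of spend

net_monthly_burn = monthly_spend - monthly_revenue
runway_months = cash_infusion / net_monthly_burn
print(f"Runway: ~{runway_months:.1f} months")  # ~7.1 months: gone by mid-2024
```

Even under generous revenue assumptions, the money runs out well before December 2024.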
3. The terms “large language model” and “LLM” will become less common.
In AI today, the phrase “large language model” (and its abbreviation LLM) is frequently used as shorthand for “any advanced AI model.” This is understandable, given that many of the original generative AI models to rise to prominence (e.g., GPT-3) were text-only models.
But as model types proliferate and AI becomes increasingly multimodal, the term is becoming imprecise and unhelpful. Indeed, the emergence of multimodal AI has been one of the defining themes of 2023: many of today’s leading generative models incorporate text, images, 3D, audio, video, music, physical action and more. They are far more than just language models.
Consider an AI model that has been trained on the amino acid sequences and molecular structures of known proteins in order to generate de novo protein therapeutics. Though its underlying architecture is an extension of the one behind models like GPT-3, does it really make sense to call this a large language model?
Or consider foundation models in robotics: large generative models that combine visual and language input with general internet-scale knowledge in order to take actions in the real world, e.g. via a robotic arm. A richer term than “language model” should and will exist for such models. (“Vision-language-action,” or VLA, model is one alternative phrase that researchers have used.)
A similar point can be made about the FunSearch model recently published by DeepMind, which the authors themselves refer to as an LLM but which deals in mathematics rather than in natural language.
In 2024, as our models become increasingly multi-dimensional, so too will the terms that we use to describe them.
4. The most advanced closed models will continue to outperform the most advanced open models by a meaningful margin.
One important topic in AI discourse today is the debate around open-source and closed-source AI models. While most cutting-edge AI model developers—OpenAI, Google DeepMind, Anthropic, Cohere, among others—keep their most advanced models proprietary, a handful of companies including Meta and buzzy new startup Mistral have chosen to make their state-of-the-art model weights publicly available.
Today, the highest-performing foundation models (e.g., OpenAI’s GPT-4) are closed-source. But many open-source advocates argue that the performance gap between closed and open models is shrinking and that open models are on track to overtake closed models in performance, perhaps by next year. (This chart made the rounds recently.)
We disagree. We predict that the best closed models will continue to meaningfully outperform the best open models in 2024 (and beyond).
The state of the art in foundation model performance is a fast-moving frontier. Mistral recently boasted that it will open-source a GPT-4-level model sometime in 2024, a claim that has generated excitement in the open source community. But OpenAI released GPT-4 in early 2023. By the time Mistral comes out with this new model, it will likely be more than a year behind the curve. OpenAI may well have released GPT-4.5 or even GPT-5 by then, establishing an entirely new performance frontier. (Rumors have been circulating that GPT-4.5 may even drop before the end of 2023.)
As in many other domains, catching up to the frontier as a fast follower, after another group has defined it, is easier to achieve than establishing a new frontier before anyone else has shown it is possible. For instance, it was considerably riskier, more challenging and more expensive for OpenAI to build GPT-4 using a mixture-of-experts architecture, when this approach had not previously been shown to work at this scale, than it was for Mistral to follow in OpenAI’s footsteps several months later with its own mixture-of-experts model.
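For readers unfamiliar with the approach: in a mixture-of-experts (MoE) model, a learned router dispatches each token to a small subset of expert sub-networks, so only a fraction of the model’s parameters are active for any given token. Below is a toy sketch of the core idea in PyTorch; this is our illustration, not OpenAI’s or Mistral’s actual implementation, and real systems add load balancing, capacity limits and much more.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy top-2 mixture-of-experts feed-forward layer (illustrative only)."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model). The router scores every expert per token...
        logits = self.router(x)
        # ...but each token is processed by only its top-k experts.
        weights, chosen = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```

The appeal is that total parameter count can grow with the number of experts while per-token compute stays roughly constant; the difficulty, and the risk OpenAI took on first, is getting such sparse models to train stably at scale.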
There is a basic structural reason to doubt that open models will leapfrog closed models in performance in 2024. The investment required to develop a new model that advances the state of the art is enormous, and will only continue to balloon for every step-change increase in model capabilities. Some industry observers estimate that OpenAI will spend around $2 billion to develop GPT-5.
Meta is a publicly traded company ultimately answerable to its shareholders. The company seems not to expect any direct revenue from its open-source model releases. Llama 2 reportedly cost Meta around $20 million to build; that level of investment may be justifiable, even without any associated revenue boost, given the strategic benefits. But is Meta really going to sink anywhere near $2 billion into the quest to build an AI model that outperforms anything else in existence, just to open-source it without any expectation for a concrete return on investment?
Upstarts like Mistral face a similar conundrum. There is no clear revenue model for open-source foundation models (as Stability AI has learned the hard way). Charging for hosting open-source models, for instance, becomes a race to the bottom on price, as we have seen in recent days with Mistral’s new Mixtral model. So—even if Mistral had access to the billions of dollars needed to build a new model that leapfrogged OpenAI—would it really choose to turn around and give that model away for free?
Our sneaking suspicion is that, as companies like Mistral invest ever greater sums to build ever more powerful AI models, they may end up relaxing their stance on open source and keeping their most advanced models proprietary so that they can charge for them.
(To be clear: this is not an argument against the merits of open-source AI, nor a claim that open-source AI will be unimportant going forward. On the contrary, we expect open-source models to play a critical role in the proliferation of AI in the years ahead. But we predict that the most advanced AI systems, those that push forward the frontiers of what is possible, will continue to be proprietary.)
5. A number of Fortune 500 companies will create a new C-suite position: Chief AI Officer.
Artificial intelligence has shot to the top of the priority list for Fortune 500 companies this year, with boards and management teams across industries scrambling to figure out what this powerful new technology means for their businesses.
One tactic that we expect to become more common among large enterprises next year: appointing a “Chief AI Officer” to spearhead the organization’s AI initiatives.
We saw a similar trend play out during the rise of cloud computing a decade ago, with many organizations hiring “Chief Cloud Officers” to help them navigate the strategic implications of the cloud.
This trend will gain further momentum in the corporate world given a parallel trend already underway in government. President Biden’s recent executive order on AI requires every federal government agency to appoint a Chief AI Officer, meaning that over 400 new Chief AI Officers will be hired across the U.S. government in the coming months.
Naming a Chief AI Officer will become a popular way for companies to signal externally that they are serious about AI. Whether these roles will prove valuable over the long term is a different question. (How many Chief Cloud Officers are still around today?)
6. An alternative to the transformer architecture will see meaningful adoption.
Introduced in a seminal 2017 paper out of Google, the transformer architecture is the dominant paradigm in AI technology today. Nearly every major generative AI model and product—ChatGPT, Midjourney, GitHub Copilot and so on—is built using transformers.
But no technology remains dominant forever.
On the edges of the AI research community, a few groups have been hard at work developing novel, next-generation AI architectures that improve on transformers in important ways.
One key hub of these efforts is Chris Ré’s lab at Stanford. The central thrust of Ré and his students’ work has been to build new model architectures that scale sub-quadratically with sequence length (rather than quadratically, as transformer attention does). Sub-quadratic scaling would enable AI models that are (1) less computationally intensive and (2) better able to process long sequences than transformers. Notable sub-quadratic architectures out of Ré’s lab in recent years include S4, Monarch Mixer and Hyena.
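To see why this matters, compare rough operation counts for self-attention against a linear-time scan of the kind these architectures use. This is a simplified sketch of the asymptotics, not a benchmark of any actual implementation:

```python
# Rough operation counts, not a benchmark of any real architecture.

def attention_ops(n: int, d: int) -> int:
    # Self-attention compares every token with every other token,
    # materializing an n x n score matrix: O(n^2 * d) work.
    return n * n * d

def scan_ops(n: int, d: int) -> int:
    # A linear recurrence (the core of state-space models) touches
    # each token once: O(n * d) work.
    return n * d

for n in (1_000, 10_000, 100_000):
    ratio = attention_ops(n, 64) / scan_ops(n, 64)
    print(f"sequence length {n:>7,}: attention does ~{ratio:,.0f}x more work")
# The gap grows linearly with sequence length: 1,000x at 1,000 tokens,
# 100,000x at 100,000 tokens.
```

At book-length or genome-length sequences, that gap is the difference between feasible and infeasible.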
The most recent sub-quadratic architecture—and perhaps the most promising yet—is Mamba. Published just last month by two Ré protégés, Mamba has inspired tremendous buzz in the AI research community, with some commentators hailing it as “the end of transformers.”
Other efforts to build alternatives to the transformer architecture include liquid neural networks, developed at MIT, and the work of Sakana AI, a new startup led by one of the transformer’s co-inventors.
Next year, we predict that one or more of these challenger architectures will break through and win real adoption, transitioning from a mere research novelty to a credible alternative AI approach used in production.
To be clear, we do not expect transformers to go away in 2024. They are a deeply entrenched technology on which the world’s most important AI systems are based. But we do predict that 2024 will be the year in which cutting-edge alternatives to the transformer become viable options for real-world AI use cases.
7. Strategic investments from cloud providers into AI startups—and the associated accounting implications—will be challenged by regulators.
A tidal wave of investment capital has flowed from large technology companies into AI startups this year.
Microsoft invested $10 billion in OpenAI in January and then led a $1.3 billion funding round for Inflection in June. This fall, Amazon announced that it would invest up to $4 billion in Anthropic. Not to be outdone, Alphabet announced weeks later that it would invest up to $2 billion in Anthropic. Nvidia, meanwhile, has been perhaps the most prolific AI investor in the world this year, plowing money into dozens of AI startups that use its GPUs, including Cohere, Inflection, Hugging Face, Mistral, CoreWeave, Inceptive, AI21 Labs and Imbue.
It is not hard to see that the motivation for making these investments is, at least in part, to secure these high-growth AI startups as long-term compute customers.
Such investments fall into an important gray area in accounting rules. This may sound like an esoteric topic—but it will have massive implications for the competitive landscape in AI going forward.
Say a cloud vendor invests $100 million in an AI startup based on a guarantee that the startup will turn around and spend that $100 million on the cloud vendor’s services. Conceptually, this is not true arm’s-length revenue for the cloud vendor; the vendor is, in effect, using the investment to artificially transform its own balance-sheet cash into revenue.
These types of deals—often referred to as “round-tripping” (since the money goes out and comes right back in)—have raised eyebrows this year among Silicon Valley leaders like VC Bill Gurley.
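A stylized example, with hypothetical numbers rather than the terms of any actual deal, makes the mechanics concrete:

```python
# Stylized round-trip with hypothetical figures -- not a model of any real deal.
investment = 100_000_000    # cloud vendor's equity investment in the AI startup
spend_back_fraction = 1.0   # assumed share of proceeds committed back to the vendor

# Step 1: the investment itself is an asset swap on the vendor's books
# (cash out, equity stake in) and produces no income.
equity_stake = investment

# Step 2: the startup spends the proceeds on the vendor's cloud services,
# which the vendor books as revenue.
recognized_revenue = investment * spend_back_fraction

print(f"Equity stake held:  ${equity_stake:,.0f}")
print(f"Revenue recognized: ${recognized_revenue:,.0f}")
# The vendor's own $100M of cash reappears as $100M of "revenue",
# and the vendor still holds the equity stake on top of it.
```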
The devil is in the details. Not all of the deals mentioned above necessarily represent true instances of round-tripping. It matters, for instance, whether an investment comes with an explicit obligation for the startup to spend the capital on the investor’s products, or simply encourages broad strategic collaboration between the two organizations. The contracts between Microsoft and OpenAI, or between Amazon and Anthropic, are not publicly available, so we cannot say for sure how they are structured.
But at least in some cases, cloud providers may well be recognizing revenue via these investments that they should not be.
These deals have proceeded with little to no regulatory scrutiny to this point. This will change in 2024. Expect the SEC to take a much harder look at round-tripping in AI investments next year—and expect the number and size of such deals to drop dramatically as a result.
Given that cloud providers have been one of the largest sources of capital fueling the generative AI boom to date, this could have a material impact on the overall AI fundraising environment in 2024.
8. The Microsoft/OpenAI relationship will begin to fray.
Microsoft and OpenAI are closely allied. Microsoft has poured over $10 billion into OpenAI to date. OpenAI’s models power key Microsoft products like Bing, GitHub Copilot and Office 365 Copilot. When OpenAI CEO Sam Altman was unexpectedly fired by the board last month, Microsoft CEO Satya Nadella played an instrumental role in getting him reinstated.
Yet Microsoft and OpenAI are distinct organizations, with distinct ambitions and distinct long-term visions for the future of AI. The alliance has so far worked well for both groups, but it is a marriage of convenience. The two organizations are far from perfectly aligned.
Next year, we predict that cracks will begin to appear in the partnership between these two giants. Indeed, hints of future friction have already begun to surface.
As OpenAI looks to aggressively ramp up its enterprise business, it will find itself more and more often competing directly with Microsoft for customers. For its part, Microsoft has plenty of reasons to diversify beyond OpenAI as a supplier of cutting-edge AI models. Microsoft recently announced a deal to partner with OpenAI rival Cohere, for instance. Faced with the exorbitant costs of running OpenAI’s models at scale, Microsoft has also invested in internal research on smaller language models like Phi-2.
Bigger picture, as AI becomes ever more powerful, important questions about AI safety, risk, regulation and public accountability will take center stage. The stakes will be high. Given their differing cultures, values and histories, it seems inevitable that the two organizations will diverge in their philosophies and approaches to these issues.
With a $2.7 trillion market capitalization, Microsoft is the second-largest company in the world. Yet the ambitions of OpenAI and its charismatic leader Sam Altman may be even more far-reaching. These two organizations serve each other well today. But don’t expect that to last forever.
9. Some of the hype and herd mentality behavior that shifted from crypto to AI in 2023 will shift back to crypto in 2024.
It is hard to imagine venture capitalists and technology leaders getting excited about anything other than AI right now. But a year is a long time, and VCs’ “convictions” can shift remarkably quickly.
Crypto is a cyclical industry. It is out of fashion right now, but make no mistake, another big bull run will come—as it did in 2021, and before that in 2017, and before that in 2013. In case you haven’t noticed, after starting the year under $17,000, the price of bitcoin has risen sharply in the past few months, from $25,000 in September to over $40,000 today. A major bitcoin upswing may be in the works, and if it is, plenty of crypto activity and hype will ensue.
A number of well-known venture capitalists, entrepreneurs and technologists who today position themselves as “all in” on AI were deeply committed to crypto during the 2021-2022 bull market. If crypto asset prices do come roaring back next year, expect some of them to follow the heat in that direction, just as they followed the heat to AI this year.
(Candidly, it would be a welcome development to see some of the excessive AI hype redirect elsewhere next year.)
10. At least one U.S. court will rule that generative AI models trained on the internet represent a violation of copyright. The issue will begin working its way up to the U.S. Supreme Court.
A significant and underappreciated legal risk looms over the entire field of generative artificial intelligence today: the world’s leading generative AI models have been trained on troves of copyrighted content, a fact that could trigger massive liability and transform the economics of the industry.
Whether it is poetry from GPT-4 or Claude 2, images from DALL-E 3 or Midjourney, or videos from Pika or Runway, generative AI models are able to produce breathtakingly sophisticated output because they have been trained on much of the world’s digital data. For the most part, AI companies have pulled this data off the internet free of charge and used it at will to develop their models.
But do the millions of individuals who actually created all that intellectual property in the first place—the humans who wrote the books, penned the poetry, took the photographs, drew the paintings, filmed the videos—have a say over whether and how it is used by AI practitioners? Do they have a right to some of the value created by the AI models that result?
The answers to these questions will hinge on how courts interpret a key legal concept known as “fair use.” Fair use is a well-developed doctrine that has been around for centuries, but its application to the nascent field of generative AI raises complex new questions without clear answers.
“People in machine learning aren’t necessarily aware of the nuances of fair use and, at the same time, the courts have ruled that certain high-profile real-world examples are not protected fair use, yet those very same examples look like things AI is putting out,” said Stanford researcher Peter Henderson. “There’s uncertainty about how lawsuits will come out in this area.”
How will these questions get resolved? Through individual cases and court rulings.
Applying fair use doctrine to generative AI will be a complex undertaking requiring creative thinking and subjective judgment. Credible arguments and defensible conclusions will exist on both sides of the issue.
Thus, don’t be surprised to see at least one U.S. court next year rule that generative AI models like GPT-4 and Midjourney do represent copyright violations, and that the companies that built them are liable to the owners of the intellectual property on which the models were trained.
This will not resolve the issue. Other U.S. courts, in other jurisdictions, faced with different fact patterns, will in all likelihood reach the opposite conclusion: that generative AI models are protected by the fair use doctrine.
The issue will begin to work its way all the way up to the U.S. Supreme Court, which will eventually provide a conclusive legal resolution. (The path to the nation’s highest court is long and winding; don’t expect a Supreme Court ruling on this issue next year.)
In the meantime, plenty of litigation will ensue, plenty of settlements will be negotiated, and lawyers around the world will be kept busy navigating a patchwork of caselaw. Many billions of dollars will hang in the balance.
See here for our 2023 AI predictions, and see here for our end-of-year retrospective on them.
See here for our 2022 AI predictions, and see here for our end-of-year retrospective on them.
See here for our 2021 AI predictions, and see here for our end-of-year retrospective on them.