Unintended consequences: U.S. election results herald reckless AI development
While the 2024 U.S. election focused on traditional issues like the economy and immigration, its quiet impact on AI policy could prove even more transformative. Without a single debate question or major campaign promise about AI, voters inadvertently tipped the scales in favor of accelerationists — those who advocate for rapid AI development with minimal regulatory hurdles. The implications of this acceleration are profound, heralding a new era of AI policy that prioritizes innovation over caution and signaling a decisive shift in the debate over AI’s potential risks and rewards.
The pro-business stance of President-elect Donald Trump leads many to assume that his administration will favor those developing and marketing AI and other advanced technologies. His party’s platform has little to say about AI, but it does emphasize repealing AI regulations, particularly targeting what it describes as “radical left-wing ideas” in the outgoing administration’s executive orders. In contrast, the platform supports AI development aimed at fostering free speech and “human flourishing,” calling for policies that enable AI innovation while opposing measures seen as hindering technological progress.
Early indications, based on appointments to leading government positions, underscore this direction. But there is a larger story unfolding: the resolution of the intense debate over AI’s future.
An intense debate
Ever since ChatGPT appeared in November 2022, a debate has raged between those in the AI field who want to accelerate AI development and those who want to slow it down.
Famously, in March 2023 the latter group proposed a six-month pause in the development of the most advanced AI systems, warning in an open letter that AI tools present “profound risks to society and humanity.” The letter, spearheaded by the Future of Life Institute, was prompted by OpenAI’s release of the GPT-4 large language model (LLM) several months after ChatGPT launched.
The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories eventually swelled to more than 33,000. Collectively, they became known as “doomers,” a term capturing their concerns about potential existential risks from AI.
Not everyone agreed. OpenAI CEO Sam Altman did not sign, nor did Bill Gates and many others. Their reasons varied, although many voiced concerns about potential harm from AI. The debate fueled endless conversations about the potential for AI to run amok and lead to disaster, and it became fashionable for many in the field to cite their personal estimate of the probability of doom, expressed in shorthand as p(doom). Nevertheless, work on AI development did not pause.
For the record, my p(doom) in June 2023 was 5%. That might seem low, but it was not zero. I felt that the major AI labs were sincere in their efforts to stringently test new models prior to release and to provide significant guardrails for their use.
Many observers concerned about AI dangers have rated existential risks higher than 5%, and some much higher. AI safety researcher Roman Yampolskiy has put the probability of AI ending humanity at over 99%. That said, a study released early this year, well before the election and representing the views of more than 2,700 AI researchers, found that “the median prediction for extremely bad outcomes, such as human extinction, was 5%.” Would you board a plane if there were a 5% chance it might crash? This is the dilemma AI researchers and policymakers face.
Must go faster
Others have been openly dismissive of worries about AI, pointing instead to what they see as the technology’s huge upside. They include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of “The Master Algorithm”). They argue that AI is part of the solution. As Ng has put it, there are indeed existential dangers, such as climate change and future pandemics, and AI can be part of how these are addressed and mitigated.
Ng argued that AI development should not be paused but should instead go faster. This utopian view of technology has been echoed by others collectively known as “effective accelerationists,” or “e/acc” for short. They argue that technology — and especially AI — is not the problem but the solution to most, if not all, of the world’s issues. Garry Tan, CEO of startup accelerator Y Combinator, along with other prominent Silicon Valley leaders, included the term “e/acc” in their usernames on X to show alignment with the vision. New York Times reporter Kevin Roose captured the essence of these accelerationists, saying they have an “all-gas, no-brakes approach.”
A Substack newsletter from a couple of years ago described the principles underlying effective accelerationism, offering a summation at the end of the article along with a comment from OpenAI CEO Sam Altman.
AI acceleration ahead
The 2024 election outcome may be seen as a turning point, putting the accelerationist vision in a position to shape U.S. AI policy for the next several years. For example, the President-elect recently appointed technology entrepreneur and venture capitalist David Sacks as “AI czar.”
Sacks, a vocal critic of AI regulation and a proponent of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist viewpoints expressed in the incoming party’s platform.
In response to the Biden administration’s 2023 executive order on AI, Sacks tweeted: “The U.S. political and fiscal situation is hopelessly broken, but we have one unparalleled asset as a country: Cutting-edge innovation in AI driven by a completely free and unregulated market for software development. That just ended.” How much influence Sacks will have on AI policy remains to be seen, but his appointment signals a shift toward policies favoring industry self-regulation and rapid innovation.
Elections have consequences
I doubt most of the voting public gave much thought to AI policy implications when casting their votes. Nevertheless, in a very tangible way, the accelerationists have won as a consequence of the election, potentially sidelining those advocating a more cautious federal approach to mitigating AI’s long-term risks.
As accelerationists chart the path forward, the stakes could not be higher. Whether this era ushers in unparalleled progress or unintended catastrophe remains to be seen. As AI development accelerates, the need for informed public discourse and vigilant oversight grows ever more urgent. How we navigate this era will define not only technological progress but also our collective future.
As a counterbalance to inaction at the federal level, one or more states may adopt regulations of their own, as California and Colorado have already done to some extent. For instance, California’s AI safety bills focus on transparency requirements, while Colorado addresses AI discrimination in hiring practices, offering models for state-level governance. Now, all eyes will be on the voluntary testing and self-imposed guardrails at Anthropic, Google, OpenAI and other AI model developers.
In summary, the accelerationist victory means fewer restrictions on AI innovation. That may indeed speed progress, but it also raises the risk of unintended consequences. I’m now revising my p(doom) to 10%. What is yours?
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.