Silicon Valley stifled the AI doom movement in 2024


For several years now, technologists have rung alarm bells about the potential for advanced AI systems to cause catastrophic damage to the human race.

But in 2024, those warning calls were drowned out by a practical and prosperous vision of generative AI promoted by the tech industry – a vision that also benefited their wallets.

Those warning of catastrophic AI risk are often called “AI doomers,” though it’s not a name they’re fond of. They’re worried that AI systems will make decisions to kill people, be used by the powerful to oppress the masses, or contribute to the downfall of society in one way or another.

In 2023, it seemed like we were at the beginning of a renaissance for technology regulation. AI doom and AI safety — a broader subject that can encompass hallucinations, insufficient content moderation, and other ways AI can harm society — went from a niche topic discussed in San Francisco coffee shops to a conversation appearing on MSNBC, CNN, and the front pages of the New York Times.

To sum up the warnings issued in 2023: Elon Musk and more than 1,000 technologists and scientists called for a pause on AI development, asking the world to prepare for the technology’s profound risks. Shortly after, top scientists at OpenAI, Google, and other labs signed an open letter saying the risk of AI causing human extinction should be given more credence. Months later, President Biden signed an AI executive order with a general goal to protect Americans from AI systems. In November 2023, the non-profit board behind the world’s leading AI developer, OpenAI, fired Sam Altman, claiming its CEO had a reputation for lying and couldn’t be trusted with a technology as important as artificial general intelligence, or AGI — once the imagined endpoint of AI, meaning systems that actually show self-awareness. (Although the definition is now shifting to meet the business needs of those talking about it.)

For a moment, it seemed as if the dreams of Silicon Valley entrepreneurs would take a backseat to the overall health of society.

But to those entrepreneurs, the narrative around AI doom was more concerning than the AI models themselves.

In response, a16z cofounder Marc Andreessen published “Why AI will save the world” in June 2023, a 7,000-word essay dismantling the AI doomers’ agenda and presenting a more optimistic vision of how the technology will play out.

Marc Andreessen speaks onstage during TechCrunch Disrupt SF 2016 in San Francisco. Image Credits: Steve Jennings / Getty Images

“The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it,” said Andreessen in the essay.

In his conclusion, Andreessen gave a convenient solution to our AI fears: move fast and break things – basically the same ideology that has defined every other 21st-century technology (and its attendant problems). He argued that Big Tech companies and startups should be allowed to build AI as fast and aggressively as possible, with few to no regulatory barriers. This would ensure AI does not fall into the hands of a few powerful companies or governments, and would allow America to compete effectively with China, he said.

Of course, this would also allow a16z’s many AI startups to make a lot more money — and some found his techno-optimism uncouth in an era of extreme income disparity, pandemics, and housing crises.

While Andreessen doesn’t always agree with Big Tech, making money is one area the entire industry can agree on. a16z’s co-founders wrote a letter with Microsoft CEO Satya Nadella this year, essentially asking the government not to regulate the AI industry at all.

Meanwhile, despite their frantic hand-waving in 2023, Musk and other technologists did not slow down to focus on safety in 2024 – quite the opposite: AI investment in 2024 outpaced anything we’ve seen before. Altman quickly returned to the helm of OpenAI, and a mass of safety researchers left the outfit in 2024 while ringing alarm bells about its dwindling safety culture.

Biden’s safety-focused AI executive order has largely fallen out of favor this year in Washington, D.C. – President-elect Donald Trump announced plans to repeal Biden’s order, arguing it hinders AI innovation. Andreessen says he’s been advising Trump on AI and technology in recent months, and Sriram Krishnan, a longtime venture capitalist at a16z, is now Trump’s official senior adviser on AI.

Republicans in Washington have several AI-related priorities that outrank AI doom today, according to Dean Ball, an AI-focused research fellow at George Mason University’s Mercatus Center. Those include building out data centers to power AI, using AI in the government and military, competing with China, limiting content moderation from center-left tech companies, and protecting children from AI chatbots.

“I think [the movement to prevent catastrophic AI risk] has lost ground at the federal level. At the state and local level they have also lost the one major fight they had,” said Ball in an interview with TechCrunch. Of course, he’s referring to California’s controversial AI safety bill SB 1047.

Part of the reason AI doom fell out of favor in 2024 was simply because, as AI models became more popular, we also saw how unintelligent they can be. It’s hard to imagine Google Gemini becoming Skynet when it just told you to put glue on your pizza.

But at the same time, 2024 was a year when many AI products seemed to bring concepts from science fiction to life. For the first time this year: OpenAI showed how we could talk with our phones and not through them, and Meta unveiled smart glasses with real-time visual understanding. The ideas underlying catastrophic AI risk largely stem from sci-fi films, and while there’s obviously a limit, the AI era is proving that some ideas from sci-fi may not be fictional forever.

2024’s biggest AI doom fight: SB 1047

State Senator Scott Wiener, a Democrat from California, during the Bloomberg BNEF Summit in San Francisco on January 31, 2024. Image Credits: David Paul Morris/Bloomberg via Getty Images

The AI safety battle of 2024 came to a head with SB 1047, a bill supported by two highly regarded AI researchers: Geoffrey Hinton and Yoshua Bengio. The bill tried to prevent advanced AI systems from causing human extinction events, as well as cyberattacks that could cause more damage than 2024’s CrowdStrike outage.

SB 1047 passed through California’s Legislature, making it all the way to Governor Gavin Newsom’s desk, where he called it a bill with “outsized impact.” The bill tried to prevent the kinds of things Musk, Altman, and many other Silicon Valley leaders warned about in 2023 when they signed those open letters on AI.

But Newsom vetoed SB 1047. In the days before his decision, he talked about AI regulation on stage in downtown San Francisco, saying: “I can’t solve for everything. What can we solve for?”

That pretty clearly sums up how many policymakers are thinking about catastrophic AI risk today. It’s just not a problem with a practical solution.

Even so, SB 1047 was flawed beyond its focus on catastrophic AI risk. The bill regulated AI models based on size, in an attempt to only regulate the largest players. However, that didn’t account for new techniques such as test-time compute or the rise of small AI models, which leading AI labs are already pivoting to. Furthermore, the bill was widely considered an assault on open-source AI – and by proxy, the research world – because it would have prevented firms like Meta and Mistral from releasing highly customizable frontier AI models.

But according to the bill’s author, state Senator Scott Wiener, Silicon Valley played dirty to sway public opinion about SB 1047. He previously told TechCrunch that venture capitalists from Y Combinator and a16z engaged in a propaganda campaign against the bill.

Specifically, these groups spread a claim that SB 1047 would send software developers to jail for perjury. Y Combinator asked young founders to sign a letter saying as much in June 2024. Around the same time, Andreessen Horowitz general partner Anjney Midha made a similar claim on a podcast.

The Brookings Institution labeled this as one of many misrepresentations of the bill. SB 1047 did require tech executives to submit reports identifying shortcomings of their AI models, and the bill noted that lying on a government document is perjury. However, the venture capitalists who spread these fears failed to mention that people are rarely charged with perjury, and even more rarely convicted.

YC rejected the idea that it spread misinformation, previously telling TechCrunch that SB 1047 was vague and not as concrete as Senator Wiener made it out to be.

More generally, there was a growing sentiment during the SB 1047 fight that AI doomers were not just anti-technology, but also delusional. Famed investor Vinod Khosla called Wiener clueless about the real dangers of AI in October of this year.

Meta’s chief AI scientist, Yann LeCun, has long opposed the ideas underlying AI doom, but became more outspoken this year.

“The idea that somehow [intelligent] systems will come up with their own goals and take over humanity is just preposterous, it’s ridiculous,” said LeCun at Davos in 2024, noting how we’re very far from developing superintelligent AI systems. “There are lots and lots of ways to build [any technology] in ways that will be dangerous, wrong, kill people, etc… But as long as there is one way to do it right, that’s all we need.”

Meanwhile, policymakers have shifted their attention to a new set of AI safety problems.

The fight ahead in 2025

The policymakers behind SB 1047 have hinted they may come back in 2025 with a modified bill to address long-term AI risks. One of the sponsors behind the bill, Encode, says the national attention SB 1047 drew was a positive signal.

“The AI safety movement made very encouraging progress in 2024, despite the veto of SB 1047,” said Sunny Gandhi, Encode’s Vice President of Political Affairs, in an email to TechCrunch. “We are optimistic that the public’s awareness of long-term AI risks is growing and there is increasing willingness among policymakers to tackle these complex challenges.”

Gandhi says Encode expects “significant efforts” in 2025 to regulate AI-assisted catastrophic risk, though without naming any specific ones.

On the opposite side, a16z general partner Martin Casado is one of the people leading the fight against regulating catastrophic AI risk. In a December op-ed on AI policy, Casado argued that we need more reasonable AI policy moving forward, declaring that “AI appears to be tremendously safe.”

“The first wave of dumb AI policy efforts is largely behind us,” said Casado in a December tweet. “Hopefully we can be smarter going forward.”

Calling AI “tremendously safe” and attempts to regulate it “dumb” is something of an oversimplification. For example, Character.AI – a startup a16z has invested in – is currently being sued and investigated over child safety concerns. In one active lawsuit, a 14-year-old Florida boy killed himself after allegedly confiding his suicidal thoughts to a Character.AI chatbot that he had romantic and sexual chats with. This case, in itself, shows how our society has to prepare for new types of risks around AI that may have sounded ridiculous just a few years ago.

There are more bills floating around that address long-term AI risk – including one just introduced at the federal level by Senator Mitt Romney. But now, it seems AI doomers will be fighting an uphill battle in 2025.
