
When Silicon Valley’s AI warriors came to Washington

By Brendan Bordelon | POLITICO
December 30, 2023
Illustration by @eoinryanart for POLITICO

Effective altruism is increasingly described as a cult. But as the movement’s billionaire adherents pour money into Washington, its obsession with the AI apocalypse is remaking the capital’s tech policy landscape.

In a city notorious for its cynicism, few things are quite as unsettling as an interest group whose members truly believe they’re the good guys.

As Washington grapples with the rise of artificial intelligence, a small army of adherents to “effective altruism” has descended on the nation’s capital and is dominating how the White House, Congress and think tanks approach the technology.

The Silicon Valley-based movement is backed by tech billionaires and began as a rationalist approach to solving human suffering. But some observers say it has morphed into a cult obsessed with the coming AI doomsday.

The most ardent advocates of effective altruism, or EA, believe researchers are only months or years away from building an AI superintelligence able to outsmart the world’s collective efforts to control it. Whether acting of its own volition or in the hands of terrorists seeking to develop deadly bioweapons, such an AI could wipe out humanity, they say. And some, including noted EA thinker Eliezer Yudkowsky, believe even a nuclear holocaust would be preferable to an unchecked AI future.

If stopping malignant AI required war between nuclear-armed nations, Yudkowsky argues, that would be a price worth paying.

“It’s really kind of a ridiculous idea that, you know, risking starting nuclear war might be better because humans will probably survive that and rebuild — versus AI, which will destroy us to the last person,” said Zach Graves, executive director at the Foundation for American Innovation and a longtime observer of EA’s advance into the nation’s capital.

Most EAs aren’t quite so militant. But to varying degrees and on disparate timelines, nearly all of them believe AI poses an existential threat to the human race.
 

Kamala Harris delivered an unmistakable, if vague, critique of the existential-risk approach during a November speech at the United Kingdom’s AI safety summit. | Kin Cheung/AP

 

As scores of tech-funded EAs spread across key policy nodes in Washington, they’re triggering a culture clash — landing in the city’s incremental, detail-oriented culture with a fervor more akin to religious converts than policy professionals.

Regulators in Washington usually dwell in a world of practical disputes, like how AI could promote racial profiling, spread disinformation, undermine copyright or displace workers. But EAs, energized by a uniquely Northern Californian mix of awe and fear at the pace of technology, dwell in an existential realm.

“The EA people stand out as talking about a whole different topic, in a whole different style,” said Robin Hanson, an economist at George Mason University and former effective altruist. “They’re giving pretty abstract arguments about a pretty abstract concern, and they’re ratcheting up the stakes to the max.”

From their newfound perches on Capitol Hill, in federal agencies and at key think tanks, EAs are pressing lawmakers, agency officials and seasoned policy professionals to support sweeping laws that would “align” AI with human goals and values.

Virtually all the policies that EAs and their allies are pushing — new reporting rules for advanced AI models, licensing requirements for AI firms, restrictions on open-source models, crackdowns on the mixing of AI with biotechnology or even a complete “pause” on “giant” AI experiments — are in furtherance of that goal.

“This shouldn’t be grouped in the same sort of vein as saying, ‘Well, this is just another tech issue. We’ve dealt with tech issues for a really long time, we have time to deal with this.’ Because we really don’t,” said Emilia Javorsky, director of the futures program at the Future of Life Institute — an organization founded by EA luminaries and funded in part by a foundation financed by tech billionaire Elon Musk, who calls EA a “close match” to his philosophy.

“If we don’t start drawing the lines now, the genie’s out of the bottle — and it will be almost impossible to put it back in,” Javorsky warned.

The prophets of the AI apocalypse are boosted by an avalanche of tech dollars, with much of it flowing through Open Philanthropy — a major funder of effective altruist causes, founded and financed by billionaire Facebook co-founder Dustin Moskovitz and his wife Cari Tuna, that has pumped hundreds of millions of dollars into influential think tanks and programs that place staffers in key congressional offices and at federal agencies.
 

The sheer amount of money being funneled into Washington by groups like Open Philanthropy, founded and financed by Dustin Moskovitz (pictured above) and his wife Cari Tuna, has given the movement significant leverage over the AI and biosecurity debate in Washington. | Patricia De Melo Moreira/AFP via Getty Images

 

“It’s an epic infiltration,” said one biosecurity researcher in Washington, granted anonymity to avoid blowback from EA-linked funders.

EAs are particularly fixated on the possibility that future AI systems could combine with gene synthesis tools and other technologies to create bioweapons that kill billions of people — a phenomenon that’s given more traditional AI and biosecurity researchers a front row seat as Silicon Valley’s hot new philosophy spreads across Washington.

Many of those researchers claim that EA’s billionaire backers — who often possess close personal and financial ties to companies like OpenAI and Anthropic — are trying to distract Washington from examining AI’s real-world impact, including its tendency to promote racial or gender bias, undermine privacy and weaken copyright protections.

They also worry that EA’s tech industry funders are acting in their self-interest, working to wall off leading AI firms from competition by promoting rules that, in the name of “AI safety,” lock down access to the technology.

“Many [EAs] do think that fewer players who are more carefully watched is safer, from their point of view,” said Hanson. “So they are not that eager to reduce concentration in this industry, or the centralization of power in this industry.”

The generally white and privileged backgrounds of EA adherents have also prompted suspicion in Washington, particularly among Black lawmakers concerned about how existing AI systems can harm marginalized communities.

“I don’t mean to create stereotypes of tech bros, but we know that this is not an area that often selects for diversity of America,” Sen. Cory Booker (D-N.J.) told POLITICO in September.

“This idea that we’re going to somehow get to a point where we’re going to be living in a Terminator nightmare — yeah, I’m concerned about those existential things,” Booker said. “But the immediacy of what we’ve already been using — most Americans don’t realize that AI is already out there, from resumé selection to what ads I’m seeing on my phone.”

Despite those concerns, the sheer amount of money being funneled into Washington by Open Philanthropy and other EA-linked groups has given the movement significant leverage over the AI and biosecurity debate in Washington.

“The money is overwhelmingly lopsided,” said Hanson, referring to support for AI-specific policy fellows and staff members.

AI and biosecurity staffers funded by Open Philanthropy are embedded in congressional offices at the forefront of potential AI rules, including all three of the Senate offices tapped by Majority Leader Chuck Schumer to investigate the technology. And the more than half-dozen skeptical AI and biosecurity researchers who spoke with POLITICO say the dense network of Capitol Hill and agency staffers — financed by hundreds of millions of EA dollars — is skewing how policymakers discuss AI safety, which otherwise remains a relatively niche field in Washington.

One AI and biosecurity researcher in Washington said lawmakers and other policy professionals are being pushed toward a focus on existential AI risks by sheer force of repetition.

“It’s more just the object permanence of having that messaging constantly in your face,” said the researcher, who was also granted anonymity to avoid losing funding.
 

After receiving more than $15 million in AI and biosecurity grants from Open Philanthropy this year, the RAND Corp. played a crucial role in drafting President Joe Biden’s October executive order on AI. | Evan Vucci/AP

 

The researcher warned that the sweeping EA influence campaign is causing much of Washington to take as a given that existential AI risks are likely or inevitable — often with little evidence.

“We skipped entirely over the body of risk research that asks, ‘Is there risk?’” the researcher said.

Effective altruism’s newfound pull at influential groups like the RAND Corp. — the venerable policy think tank that, after receiving more than $15 million in AI and biosecurity grants from Open Philanthropy this year, played a crucial role in drafting President Joe Biden’s October executive order on AI — shows how the movement is already notching significant wins.

Despite emerging concerns in Congress about the think tank’s ties, RAND is now in the running to receive a federal grant for AI safety research.
 

The new kids in town

Effective altruists in Washington skew overwhelmingly young and often hail from the country’s top universities, which increasingly serve as hotbeds for the movement. EAs are usually white, typically male and often come from privileged backgrounds — a combination that doesn’t always endear them to their critics, particularly those on the left.

“I used to describe these people as white, male, vegans, running marathons, very smart — like educated from really classy schools, and so on — really productive and focused,” said Nancy Connell, a biosecurity researcher at Rutgers University with experience working in Washington.

Like many of her peers, Connell calls EA a “cult.” And she said there are some specific tells that show which AI and biosecurity researchers are members.

“They seem to end up putting their names in three letters, like SBF,” said Connell, referring to a nickname used frequently by Sam Bankman-Fried — the billionaire head of defunct cryptocurrency exchange FTX and a huge funder of EA causes before his conviction for stealing as much as $10 billion from his customers. Connell said organizations where she’s recently worked have had “two or three of these Open [Philanthropy] people” who also go by three initials, calling it “this weird kind of cultural thing.”

For other East Coast observers of the EA phenomenon in the AI and biosecurity fields, the weirdness doesn’t end there. They say EAs have brought the San Francisco Bay Area’s unique cultural milieu to the nation’s capital. Like their compatriots in Silicon Valley, they tend to live together in group homes: Multiple tech-policy professionals pointed to a house in Northwest Washington where young EAs rest their heads before fanning out to their respective think tanks, agencies and Hill offices.

Those same observers also spoke of a propensity toward polyamorous relationships within the EA community in Washington — a phenomenon identified by both critics and EAs themselves as an intrinsic part of the movement’s Northern California chapters.

“I think the Bay Area in general is very susceptible to cults and also very susceptible to experimental social arrangements,” said Graves.

Perhaps the most noteworthy thing about EAs in Washington, however, is their almost messianic belief in the apocalyptic potential of AIs lurking just over the horizon — and their unshakable certainty that they alone can prevent human extinction.

“They literally believe that they’re saving the world. That’s their mission,” said Émile Torres, a philosopher and former effective altruist who left the movement a little over five years ago.

 

‘Those people are full of it’

Some EA defenders counter that it’s the movement’s critics, including lapsed adherents like Torres, who are a little overheated.
 

A Senate Judiciary subcommittee held a hearing on "Oversight of A.I.: Principles for Regulation" in July. Experts are warning that the tech-funded flood is reshaping Washington’s policy landscape. | Alex Wong/Getty Images


“It’s easy to be cynical in D.C.,” said Samuel Hammond, senior economist at the Foundation for American Innovation and an EA ally. But despite the mountain of tech dollars fueling their movement and their close ties to the AI industry, effective altruists, he said, truly believe what they’re saying about AI safety.

“You make it sound sort of dismissive when you put it that way,” Hammond told POLITICO outside of a House hearing room in December, where he’d just testified on the White House’s AI policy. “When you say ‘believe this stuff’ — like, believe that pandemics can be bad? Believe that the mind is a neural network, and we’re building digital minds and they’re going to get scary powerful? That’s true.”

Hammond bristled at the notion that EAs and others focused on “AI safety” in Washington are trying to distract lawmakers from current AI harms, such as how facial recognition tools supercharge racial bias.

“I think those people are full of it. Talk about an ecosystem of astroturfed activists,” he said. “Literally billions of dollars have gone into DEI-style ‘woke’ politics over the last three years, and that’s a huge effort to make every issue backwards-compatible with culture war debates of five years ago.”

Hammond said Washington can, and should, work simultaneously to address near-term AI risks alongside existential worries. But he also said questions about “whether face recognition has a melanin bias” are driven by “cherry-picked examples,” calling them a “major distraction” from efforts to regulate AI.

Many longtime AI and biosecurity researchers in Washington say there’s much more evidence backing up their less-than-existential AI concerns. While most acknowledge the possibility that existential risks could one day emerge, they say there’s so far little evidence to suggest that future AIs will cause such destruction, even when paired with biotechnology.

Deborah Raji, a Mozilla fellow and AI researcher at the University of California, Berkeley, who focuses on how AI can harm marginalized communities, was left fuming by a study conducted by Open Philanthropy-funded researchers that suggested large language models, like OpenAI’s ChatGPT, could supercharge the development of bioweapons.

“If you dug at the research for even like a minute, you would see all of this is stuff that you can literally Google and find online,” Raji said. “There’s nothing exceptional about the fact that it’s coming from an LLM versus Google.”
 

The Johns Hopkins Center for Health Security is a Baltimore-based organization that’s received extensive funding from Open Philanthropy. | Rob Carr/Getty Images

 

Two biosecurity researchers — who requested anonymity in order to avoid retaliation from EA-linked groups that fund their organizations — also said they’ve seen little to no evidence that current or future AI models will significantly raise biosecurity risks.

As EAs bring their message to virtually every corner of the nation’s capital, experts are warning that the tech-funded flood is reshaping Washington’s policy landscape — driving researchers across many organizations to focus on existential risks posed by new technologies, often to the exclusion of other issues with firmer empirical grounding.

Connell claimed that during a recent stint at the Johns Hopkins Center for Health Security — a Baltimore-based organization that’s received extensive funding from Open Philanthropy — her superiors repeatedly pressured researchers to focus on “global catastrophic biological risks,” even if that meant ignoring other topics.

“There was an emphasis at the Center for Health Security at Hopkins on how issues could be characterized as catastrophic,” said Connell, who added that the emphasis was explicitly made to secure future grants from Open Philanthropy.

A Center for Health Security spokesperson said the organization’s work to address large-scale biological threats “long predated” Open Philanthropy’s first grant to the organization in 2016.

“CHS’s work is not directed toward existential threats, and Open Philanthropy has not funded CHS to work on existential-level risks,” the spokesperson wrote in an email. The spokesperson added that CHS has only held “one meeting recently on the convergence of AI and biotechnology,” and that the meeting was not funded by Open Philanthropy and did not touch on existential risks.

“We are very happy that Open Philanthropy shares our view that the world needs to be better prepared for pandemics, whether started naturally, accidentally, or deliberately,” said the spokesperson.

In an emailed statement peppered with supporting hyperlinks, Open Philanthropy CEO Alexander Berger said it was a mistake to frame his group’s focus on catastrophic risks as “a dismissal of all other research.”

Berger said Open Philanthropy’s work to prevent pandemics was previously “derided as speculative ... but as COVID emerged we wished we had done more sooner.”

“Data to inform projections about potential catastrophes is inherently limited because they haven’t happened yet, but importantly that does not mean concerns about risks are unfounded or worth dismissing given the stakes,” the Open Philanthropy executive said.

Berger said the existential AI risks being elevated by his organization “are now recognized by a broad set of experts from academia and civil society, as well as those working to build the technology.”

“There is genuine and important expert disagreement on this topic and we have no desire to paper over it, but we think it’s crucial to weigh the arguments, not dismiss them,” Berger said.
 

Effective altruism first emerged at Oxford University in the United Kingdom as an offshoot of rationalist philosophies popular in programming circles. | Oli Scarff/Getty Images

 

A new movement, consumed by AI angst

Effective altruism first emerged at Oxford University in the United Kingdom as an offshoot of rationalist philosophies popular in programming circles. It initially emphasized a data-driven, empirical approach to philanthropy. Projects such as the purchase and distribution of mosquito nets, seen as one of the cheapest ways to save millions of lives worldwide, were given top priority.

“Back then I felt like this is a very cute, naive group of students that think they’re gonna, you know, save the world with malaria nets,” said Roel Dobbe, a systems safety researcher at Delft University of Technology in the Netherlands who first encountered EA ideas 10 years ago while studying at the University of California, Berkeley.

Animal rights and climate change also became important motivators of the EA movement. But as its programmer adherents began to fret about the power of emerging AI systems, many EAs became convinced that the technology would wholly transform civilization — and were seized by a desire to ensure that transformation is a positive one.

As EAs attempted to calculate the most rational way to accomplish their mission, many became convinced that the lives of humans who don’t yet exist should be prioritized — even at the expense of existing humans. The insight is at the core of “longtermism,” an ideology closely associated with effective altruism that emphasizes the long-term impact of technology.

“You imagine a sci-fi future where humanity is a multiplanetary ... species, with hundreds of billions or trillions of people,” said Graves. “And I think one of the assumptions that you see there is putting a lot of moral weight on what decisions we make now and how that influences the theoretical future people.”

“I think while well-intentioned, that can take you down some very strange philosophical rabbit holes — including putting a lot of weight on very unlikely existential risks,” Graves said.

Dobbe said the spread of EA ideas at Berkeley, and across the Bay Area, was supercharged by money that tech billionaires were pouring into the movement. He singled out Open Philanthropy’s early funding of the Berkeley-based Center for Human-Compatible AI, which began with a $5.5 million grant from the group in 2016. Open Philanthropy has since put another $11 million into the organization, which also receives support from the Musk-backed Future of Life Institute.

Since that first brush with the movement at Berkeley 10 years ago, Dobbe has watched EAs take over the “AI safety” discussion, and it has prompted him to rebrand.

“I don’t want to call myself ‘AI safety,’” Dobbe said. “I would rather call myself ‘systems safety,’ ‘systems engineer’ — because yeah, it’s a tainted word now.”

Torres situates EA inside a broader constellation of techno-centric ideologies that view AI as a nearly godlike force. If humanity can successfully pass through the superintelligence bottleneck, they believe, then AI could unlock unfathomable rewards — including the power to colonize other planets or even eternal life.

But over the last five years, said Torres, Silicon Valley thinkers convinced of AI’s transformative impact began to split. While some still believed in the coming AI utopia, others became convinced that an apocalypse was far more likely.

“Some people continue to think that creating a controllable superintelligence will be pretty easy,” said Torres. “And then you’ve got the doomers ... who have become very confident that actually the default is going to be doom — complete annihilation.”

That mindset spread rapidly across Silicon Valley and top universities, with more tech billionaires joining the throng. After nurturing the EA ecosystem on the West Coast, those funders began pumping tens of millions of dollars into a sprawling Washington network meant to fixate policymakers on cataclysmic threats posed by AI and biotechnology.

“I remember five-plus years ago, having many conversations with people in this community about how in the hell do we get government officials, politicians, policymakers and so on, to take seriously this notion of existential risk?” Torres said.

“And now, so far as I can tell — I’m out of the community, so I’ve lost a lot of those behind-the-scenes connections — but it really seems like they’ve been enormously successful in infiltrating these organizations and making the case,” said Torres.

Hammond noted that groups like Open Philanthropy began building a policy network nearly 10 years ago to address existential AI risks “when it wasn’t on anyone’s radar.”

“I think you gotta give the EA movement a lot of credit for being way ahead of the curve on AI,” he said.

By the time Washington began to drill down on AI rules earlier this year, EA-funded policy professionals were already everywhere — embedded at the White House and federal agencies, ensconced in key congressional offices and woven through influential think tanks.

“There’s a whole world of tech people who don’t agree,” said Hanson. “But those other tech people don’t have funding or organizations to make a counter-position.”

 

Washington weighs its options

At a November Capitol Hill screening of a short film produced by the Future of Life Institute that laid out how AI could cause a nuclear holocaust, Javorsky, the researcher from the Musk-backed institute, cornered Rep. Ted Lieu (D-Calif.) and urged him to clamp down on open-source AI models.

Ted Lieu wondered why he wasn’t getting pressure from tech-funded groups to support laws that would lessen AI’s existing or near-term harms. | Anna Moneymaker/Getty Images

 

Together with Sen. Ed Markey (D-Mass.) and a handful of other lawmakers, Lieu had introduced legislation in April to ensure that the Pentagon would never allow an AI to unilaterally launch a nuclear weapon. Now Javorsky was pushing Lieu to restrict the ability of AI developers to release details of their models to the public, warning that open-source AI would empower bad actors with powerful new capabilities.

But Lieu, an influential voice on AI, was noncommittal. And in an interview several minutes later, he said he wasn’t sure if AI truly posed the type of earth-shaking threats that Future of Life and other groups insist are coming.

“I think most people in the tech industry who are familiar with AI, who talk about existential threats of AI, I believe that they believe what they’re saying,” Lieu said.

Lieu said it was important to consider how future AI systems could cause serious damage to humanity. But he also wondered why he wasn’t getting pressure from tech-funded groups to support laws that would lessen AI’s existing or near-term harms.

“Very few are coming out saying, ‘Hey, regulate facial recognition technology because there’s a bias there,’” Lieu said. “I think it’s more that they talk about these issues, and they don’t talk about other ones that I think they really should also talk about.”

Like Lieu, many lawmakers express some skepticism about the growing chorus predicting an AI catastrophe.

In a conversation with reporters outside of an October Senate forum on AI policy, Schumer — who’s made it his mission to pass substantial AI rules this Congress — typified Capitol Hill’s mixed feelings on cataclysmic AI risks.

“Some of the people in the room said it’s not such a worry. Others said it is,” Schumer said. “When it’s that severe, we can’t take the risk. So we’ll look at it seriously.”

But Congress is plainly a welcoming audience for AI doomsaying. In a September hearing on AI-enabled threats — which included testimony from EA-affiliated researchers at RAND and the Open Philanthropy-funded Center for Security and Emerging Technology — Sens. Mitt Romney (R-Utah) and Maggie Hassan (D-N.H.) both agonized over AI’s potential to cause mass destruction.

“I’m in the camp of being more terrified about AI than I’m in the camp of those thinking it’s going to make everything better for the world,” said Romney.

“I, too, am focused more on the potential downsides here,” Hassan added.

Others on Capitol Hill are more circumspect, but still echo EA’s existential concerns.
 

A handful of lawmakers are pushing back on fears of an AI doomsday, including Sen. Cory Booker (center), who said in September that he hoped to “democratize” AI innovation and avoid unnecessary research restrictions. | Win McNamee/Getty Images

 

In December, Sen. Mike Rounds (R-S.D.) — one of Schumer’s three top lieutenants on AI legislation — said he’s unwilling to place significant brakes on American AI companies while Chinese firms race forward. But the senator, whose office includes two AI staffers funded by Open Philanthropy and top AI companies, also repeated one of the EA movement’s favorite doomsday scenarios.

“I’ll give you one that keeps me up at night — what about a biological war?” Rounds told reporters. “What about a case of where you have new biologics being introduced, and being introduced on a rapid basis using AI?”

A handful of lawmakers are pushing back on fears of an AI doomsday. They include Booker, who said in September that he hoped to “democratize” AI innovation and avoid unnecessary research restrictions.

They also include Vice President Kamala Harris, who delivered an unmistakable, if vague, critique of the existential-risk approach during a November speech at the United Kingdom’s AI safety summit.

And during an October hearing of a House Science Committee subpanel, ranking member Valerie Foushee (D-N.C.) lamented that the Washington debate over AI threats “has an unfortunate tendency to become fixated on dramatic existential risks, like the end of human civilization.” Foushee called those risks “distractions pulled from the world of science fiction,” and warned that they “should not be elevated over concrete, tangible concerns in areas such as equity, bias, privacy, cybersecurity and disinformation.”

Unlike most lawmakers sympathetic to the EA view of AI risks, Booker, Harris and Foushee are Black.

“It’s not a surprise that members of the [Congressional Black Caucus] would be at the forefront of thinking about the near-term harms of AI, because most of those harms are experimented upon Black communities,” said Safiya Noble, a tech researcher and professor of gender and African American studies at the University of California, Los Angeles.

Effective altruism’s critics claim that the movement suffers from a racial blind spot, making its message hard for some in Washington to swallow. And that’s not the only issue facing EAs as they try to focus lawmakers on AI’s catastrophic potential.

“The rumors I heard is that they were having trouble getting through,” said Hanson. “People still saw them as out-of-touch and arrogant, and it was hard to relate. They weren’t playing the usual games of asking for modest, concrete things. They were asking for huge things — ‘This is going to kill us all unless we radically change everything.’”

The AI and biosecurity researcher in Washington agreed that there’s a “limit” to how far most policymakers have been willing to indulge EA’s obsession with existential AI risks.

“You can wow a policymaker in your first two meetings with scary hyperbole,” the researcher said. “But if you never show up with something they can do about it, then it falls off their mind.”

 

Tipping the scales

Despite clear challenges, the flood of tech dollars flowing to EAs and their allies suggests the movement will continue to play a dominant role in Washington’s AI debate. And even with some setbacks, most observers are surprised at how thoroughly EAs have infiltrated key policy nodes.

“Compared to even some people who have been in D.C. for many years trying to shape things — who have been very ineffective at it and spending even more money — it’s not a bad early effort,” said Graves.

Part of that success can be attributed to the virtual absence of meaningful opposition. But there are signs that tech-industry funders with a brighter view of AI’s future are preparing to fight back.

Some AI optimists have started to style themselves as “effective accelerationists” and are pushing back on plans to slow down or control the technology. The accelerationists, led by venture capitalists like Marc Andreessen, are centered overwhelmingly in Silicon Valley — but they could soon start to throw their weight around in Washington.

In December, Meta and IBM launched an international consortium to promote the development of open-source AI. The two companies have begun to resist EA efforts to slow AI’s progress and appear increasingly eager to offer Washington another narrative.

“The open-source people are very passionate, [and] they’re furious,” Chris Padilla, IBM’s head of government and regulatory affairs, told reporters in November. “They think there’s a cabal to kill open-source AI.”

But until that opposition materializes, many AI and biosecurity experts continue to fret that deep-pocketed doomsayers are distracting Washington with fears of the AI apocalypse.

“The irony is that these are people who firmly believe that they’re doing good,” Connell said. “And it’s really heartbreaking.”
