Big-money AI Tech Network Has Tentacles Throughout Washington

The artificial intelligence debate has been marked by alarm and high emotion, often featuring rhetoric that conjures apocalyptic imagery about the dangers of the technology.

While every new technology (particularly one as transformative as AI) can be abused and should be used with caution, new reporting suggests deep-pocketed companies are stoking AI fears through large amounts of financing, all with the aim of enacting public policy favorable to their bottom line.

According to a report in Politico, a foundation known as Open Philanthropy is paying the salaries of AI fellows who have been strategically placed in policy-shaping roles throughout Washington, D.C., including congressional offices, federal agencies, and major think tanks.

Most of Open Philanthropy’s funding comes from Dustin Moskovitz, the CEO of Asana and co-founder of Facebook, along with his wife, Cari Tuna.

Open Philanthropy effectively created the Horizon Institute, which is directly paying the salaries of tech fellows in key places, including the Department of Defense, the Department of Homeland Security, and the State Department, along with the House Science Committee and Senate Commerce Committee — both of which play important roles in crafting regulations for artificial intelligence.

Horizon Institute fellows are also working on AI policy at the Rand Corporation and Georgetown University’s Center for Security and Emerging Technology. In addition, they have made their way onto the staffs of powerful politicians.

As Politico notes:

Senate Majority Leader Chuck Schumer’s top three lieutenants on AI legislation — Sens. Martin Heinrich (D-N.M.), Mike Rounds (R-S.D.) and Todd Young (R-Ind.) — each have a Horizon fellow working on AI or biosecurity, a closely related issue. The office of Sen. Richard Blumenthal (D-Conn.), a powerful member of the Senate Judiciary Committee who recently unveiled plans for an AI licensing regime, includes a Horizon AI fellow who worked at OpenAI immediately before coming to Congress, according to his bio on Horizon’s web site.

Open Philanthropy has used its influence to call attention to long-term threats to the human race’s survival posed by artificial intelligence. But some voices in the AI sphere say these worries take attention away from more immediate and realistic AI issues while leading policymakers to consider rules that would be both harmful to the country and beneficial to the top tech firms.

Deborah Raji, an AI researcher at the University of California, Berkeley, said Open Philanthropy's fixation on sensational doomsday scenarios is "almost like a caricature of the reality that we're experiencing," and argued that policymakers should instead focus on concrete issues such as AI's ability to erode personal privacy, spread misinformation, and undermine copyright protections.

“It’s going to lead to solutions or policies that are fundamentally inappropriate,” Raji said. For example, she pointed to the fact that Blumenthal and Sen. Josh Hawley (R-Mo.) are developing a framework for having the government require licenses in order for companies to work on advanced AI. Raji contends that Open Philanthropy is pushing for the creation of a licensing regime because it would lock in the ascendant position of a few large, well-funded, and reputable firms — the ones in Open Philanthropy’s network.

“There will only be a subset of companies positioned to accommodate a licensing regime,” Raji said. “It concentrates existing monopolies and entrenches them even further.”

Mike Levine, an Open Philanthropy spokesman, asserted that the foundation is separate from Horizon, claiming that Horizon “originally started the fellowship as consultants to Open Phil until they could launch their own legal entity and pay fellows’ salaries directly.” He swatted away concerns that Open Philanthropy has a say in the screening, training, and placement of fellows.

Politico notes that Horizon is legally able to finance fellows who are employed in the federal government due to legislation passed half a century ago:

The Intergovernmental Personnel Act of 1970 lets nonprofits like Horizon cover the salaries of fellows working on Capitol Hill or in the federal government.

But Tim Stretton, director of the congressional oversight initiative at the Project On Government Oversight, said congressional fellows should not be allowed to work on issues where their funding organization has specific policy interests at play. He added that fellows should not draft legislation or educate lawmakers on topics where their backers conceivably stand to gain — a dynamic apparently at play in the case of Horizon’s fellowship program, given Open Philanthropy’s ties to OpenAI and Anthropic.

“We have [the] AI [industry] inserting its staffers into Congress to potentially write new laws and regulations around this emerging field,” Stretton said. “That is a conflict of interest.”

Some of the well-known firms aligned with Open Philanthropy are OpenAI, Anthropic, and DeepMind, which were three of the many signatories of a May letter warning that humans are at “risk of extinction from AI.”

That letter was put together by the Center for AI Safety, which last year received a $5.2 million grant from Open Philanthropy.

Open Philanthropy appears to be employing a common tactic from the authoritarian playbook: cultivating fear as justification for imposing more government control and curbing liberty.