US report proposes radical measures for AI safety



Summary

A U.S. government-commissioned report warns of significant national security risks posed by AI and suggests, among other things, banning the open publication of advanced model weights, with violations potentially punishable by jail time.

A report commissioned by the U.S. government warns of significant national security risks posed by artificial intelligence. In the worst-case scenario, it could pose an existential threat to humanity, according to the report, which was obtained by TIME magazine in advance of publication.

The three authors of the report, titled “An Action Plan to Increase the Safety and Security of Advanced AI,” worked on it for more than a year. They spoke with more than 200 government officials, experts, and employees of leading AI companies, including OpenAI, Google DeepMind, Anthropic, and Meta.

The plan outlines five strategic approaches. These include building safeguards against misuse, strengthening capabilities and capacities to manage AI risks, promoting security research, creating legal foundations for safeguards, and internationalizing these safeguards. The authors also emphasize the need to address both current and potential future risks to ensure the safe and responsible use of AI technologies.


The report proposes a series of linked measures to make AI safer. | Image: Gladstone AI

The report recommends a number of far-reaching policies that could fundamentally change the AI industry. For example, it suggests that the U.S. Congress should prohibit the training of AI models above a certain level of computational power, with the threshold set by a new federal AI agency. As an example, the report cites a threshold slightly above the computing power required to train current cutting-edge models such as OpenAI's GPT-4 and Google's Gemini.

Prison for open-source AI?

The report's authors, Jérémie and Edouard Harris, CEO and CTO of Gladstone AI, respectively, say they are aware that many in the AI industry will see their recommendations as too harsh. In particular, they expect that their recommendation to ban the open-sourcing of weights for advanced AI models, with violations potentially punishable by jail time, will not be popular, according to the TIME report. Such a measure would affect Meta, for example, which is likely to offer an open GPT-4-level model with the planned release of Llama 3. Meta's head of AI, Yann LeCun, sees open source as an important building block for safer AI.

But given the potential dangers of AI, they argue, the "move fast and break things" philosophy is no longer appropriate. "Our default trajectory right now seems very much on course to create systems that are powerful enough that they either can be weaponized catastrophically, or fail to be controlled," says Jérémie Harris. "One of the worst-case scenarios is you get a catastrophic event that completely shuts down AI research for everybody, and we don't get to reap the incredible benefits of this technology."

AI company employees anonymously express security concerns

The report reveals significant security concerns among employees at leading AI companies. For example, some respondents expressed strong concerns about the safety of their work and the incentives provided by their managers.

Others pointed to what they perceive as inadequate security measures at many AI labs. "By the private judgment of many of their own technical staff, the security measures in place at many frontier AI labs are inadequate to resist a sustained IP exfiltration campaign by a sophisticated attacker," the report states. In such an attack, the models behind closed AI systems could be stolen and used for malicious purposes.

