Collective global efforts are needed to govern artificial intelligence (AI), and consensus on the necessary guardrails is urgently required.
Amid calls for various global measures and initiatives, AI governance will prove complex and will require a universal approach, said United Nations (UN) Secretary-General António Guterres. AI models are already widely available to the public and, unlike nuclear material and biological agents, AI tools can be moved without leaving a trace, Guterres said in his remarks to the UN’s Security Council.
Members of the council gathered this week for its first formal meeting on AI, where speakers included Anthropic co-founder Jack Clark and Zeng Yi of the Chinese Academy of Sciences’ Institute of Automation, a professor and director of the International Research Center for AI Ethics and Governance.
Describing generative AI as a “radical advance” in capabilities, Guterres said the speed and reach that the technology gained had been “utterly unprecedented,” with ChatGPT hitting 100 million users in just two months.
But while AI had the potential to significantly fuel global development and advance human rights, including in healthcare, it also could entrench bias and discrimination, he said. The technology could further enable authoritarian surveillance.
He urged the council to assess the impact of AI on peace and security, stressing that there already were humanitarian, ethical, legal, and political concerns.
“[AI] is increasingly being used to identify patterns of violence, monitor ceasefires, and more, helping to strengthen our peacekeeping, mediation, and humanitarian efforts,” Guterres said. “But AI tools can also be used by those with malicious intent. AI models can help people to harm themselves and each other, at massive scale.”
AI already has been used to launch cyberattacks targeting critical infrastructure, he said, noting that both military and non-military AI applications could have serious consequences for global peace and security.
Clark backed the need for global governments to come together, build capacity, and make the development of AI systems a “shared endeavor,” rather than one led by a handful of players vying for a share of the market.
“We cannot leave the development of AI solely to private sector actors,” he said, noting that the development of AI models such as Anthropic’s own Claude, OpenAI’s ChatGPT, and Google Bard is guided by corporate interests. With access to sophisticated systems, large data volumes, and funding, private-sector companies likely will continue to define the development of AI systems, he added.
This could result in both benefits and threats due to the potential for AI to be misused and its unpredictability. He explained that the technology could be used to better understand biology as well as to construct biological weapons.
And when AI systems have been developed and implemented, new uses for them could be uncovered that were not anticipated by their developers. The systems themselves also might exhibit unpredictable or chaotic behavior, he said.
“Therefore, we should think very carefully about how to ensure developers of these systems are accountable, so that they build and deploy safe and reliable systems that do not compromise global security,” Clark said.
The emergence of generative AI now could further push disinformation and undermine facts, bringing with it new ways to manipulate human behavior and leading to instability on a massive scale, Guterres said. Citing deepfakes, he said such AI tools could create serious security risks if left unchecked.
Malfunctioning AI systems also were a significant concern, as was the “deeply alarming” interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics, he added.
“Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead,” he said. “Without action to address these risks, we are derelict in our responsibilities to present and future generations.”
New UN entity needed to govern AI
Guterres said he supported calls for the creation of a UN entity to facilitate collective efforts in governing AI, similar to the International Atomic Energy Agency and Intergovernmental Panel on Climate Change.
“The best approach would address existing challenges while also creating the capacity to monitor and respond to future risks,” he said. “It should be flexible and adaptable, and consider technical, social, and legal questions. It should integrate the private sector, civil society, independent scientists, and all those driving AI innovation.”
“The need for global standards and approaches makes the United Nations the ideal place for this to happen,” he added.
The new UN body should aim to support nations in maximizing the benefits of AI, mitigating its risks, and establishing international mechanisms for monitoring and governance. The entity also would have to pool the necessary expertise, make it available to the international community, and support research and development of AI tools that drive sustainability.
Guterres said he was putting together a “high-level advisory board for AI” to get things started, with the aim to recommend options for global AI governance by year-end.
He added that an upcoming policy brief on “a new agenda for peace” also would encompass recommendations on AI governance for UN member states. These include recommendations for national strategies on the responsible development and use of AI, as well as multilateral engagement to develop norms and principles around military applications of AI.
Member states also would be asked to agree to a global framework to regulate and boost oversight mechanisms for the use of data-driven technology, including AI, for counterterrorism purposes.
Negotiations for the policy brief are targeted to conclude by 2026, by which time a legally binding agreement would be established to outlaw lethal autonomous weapons that operate without human oversight, he said.
Clark also called on the international community to develop ways to test AI systems for their capabilities, misuses, and potential safety flaws. He said it was reassuring that several nations, including China, the EU, and the US, had focused on safety evaluation in their AI policy proposals.
He further underscored the need for standards and best practices, which currently are lacking, on how to test such systems in key areas such as discrimination and misuse.
Zeng noted that current AI systems, including generative AI, were information processing tools that seemed intelligent but had no real understanding.
“This is why they cannot be trusted as responsible agents that can help humans make decisions,” he noted. Diplomatic tasks, for instance, should not be automated. In particular, AI should not be applied to foreign negotiations between nations, since it may amplify human limitations and weaknesses and create bigger risks.
“AI should never ever pretend to be human,” he said, stressing the need for adequate and responsible human control of AI-powered weapons systems. “Humans should always maintain and be responsible for final decision-making on the use of nuclear weapons.”
Global stakeholders now have a window of opportunity to unite in discussions concerning the guardrails needed for AI, said Omran Sharaf, the United Arab Emirates’ assistant minister for advanced sciences and technology.
Before “it is too late,” he urged member states to come to a consensus on the rules needed, including mechanisms to prevent AI tools from pushing misinformation and disinformation that could fuel extremism and conflict.
As with other digital technologies, the adoption of AI should be guided by international law, which must apply in the cyber realm, Sharaf said. He noted, though, that regulations should be agile and flexible, so they do not hamper the advancement of AI technologies.