AI, the Great Uniter – Against AI

How often do you see experts from opposite sides of the global political divide come together? Almost never. Until now, that is. Scientists from the U.S., Canada, and the People’s Republic of China have joined forces in the International Dialogues on A.I. Safety, an organization formed by the Safe AI Forum, which is itself part of a US-based research group called FAR.AI. FAR.AI’s mission “is to ensure AI systems are trustworthy and beneficial to society.”

So concerned are many of the world’s leading computer scientists that A.I. has the potential to severely harm humankind that they have banded together to keep this highest of high-tech systems from leading to a catastrophic outcome. The scholars include Yoshua Bengio (Université de Montréal), Stuart Russell (University of California, Berkeley), Gillian Hadfield (Johns Hopkins University), Andrew Yao (Tsinghua University) and Ya-Qin Zhang (Tsinghua University). “If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?” Dr. Hadfield recently told The New York Times.

East meets West

Even as China and the West scramble to dominate technologies that could shift the balance of power, the group formed to provide a channel for communicating about the risks A.I. poses while maintaining confidentiality around companies’ and researchers’ intellectual property.

To achieve this, the group proposed that each country establish an A.I. safety authority to register the A.I. systems operating within its borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as an A.I. system copying itself or intentionally deceiving its creators. All of this would be coordinated by an international body.

The future is… frightening

In May 2024, following an agreement between U.S. President Biden and Chinese President Xi Jinping that the two superpowers should discuss A.I. safety, officials from both countries met in Geneva. “It’s not like regulating a mature technology,” Mr. Fu Hongyu, director of A.I. governance at the Chinese research institute AliResearch, said. “Nobody knows what the future of A.I. looks like.”

Learn how AI can help your business grow

Subscribe to AI Today

#AI #AI Today #artificial intelligence #Far.ai #AliResearch #Johns Hopkins University #Tsinghua University

Top 3 Takeaways

  1. Scientists from across the global political divide have begun collaborating to guard against A.I.’s potential to cause catastrophic harm.
  2. A.I. is still maturing and its risks are as yet unknown.
  3. A system that warns authorities while protecting companies’ A.I. intellectual property is crucial.

AI Today

Post Office Box 54272, San Jose, CA, 95154, US.

© 2025 Hologram LLC. All rights reserved.

Designed & Developed by Boolean Inc.