According to CNN on the 12th (local time), Gladstone AI, an AI policy consulting firm, warned in a report recently released at the request of the U.S. State Department that "the most advanced AI systems could, in the worst case, pose an extinction-level threat to the human species, and the U.S. government should intervene." The report also warned that the federal government has little time left to avert disaster.

The report was based on interviews with more than 200 people conducted over the course of a year, including top executives at major AI companies, cybersecurity researchers, weapons of mass destruction experts, and national security officials.

As a leading risk of AI, the report warned that the most advanced AI systems could be weaponized, potentially causing irreversible damage. It also raised the concern that developers could at some point lose control of the AI systems under development, with potentially destructive consequences for global security.

"The rise of AI and AGI (artificial general intelligence) could destabilize global security in ways reminiscent of the introduction of nuclear weapons," the report said, citing the risk of an AI arms race, conflict, and fatal accidents on the scale of weapons of mass destruction.

The report urged the government to take drastic measures to counter the threat, proposing the creation of a new AI regulator, emergency regulatory safeguards, and limits on the computing power that can be used to train AI models.

"AI is already an economically transformative technology," Gladstone AI co-founder and CEO Jeremie Harris told CNN. "But it could also bring serious risks, including catastrophic risks, that we need to be aware of." He added, "A growing body of evidence, including empirical research and analysis presented at the world's top AI conferences, suggests that beyond a certain capability threshold, AI could potentially become uncontrollable."

According to the report, AI experts estimated the probability of an AI incident having an irreversible global impact this year at anywhere from 4 percent to as high as 20 percent. The report noted, however, that these estimates are informal and may be significantly biased.

The report cited the pace of AGI development as the biggest factor in determining whether AI poses a risk to humanity. "AGI is regarded as the primary driver of catastrophic risk from loss of control," the report said, noting that OpenAI, Google DeepMind, Anthropic, and Nvidia have all publicly stated that AGI could be reached by 2028.

The U.S. government commissioned the Gladstone AI report to assess how AI aligns with the goal of protecting U.S. national interests at home and abroad; the report does not represent the views of the U.S. government.
EJ SONG
US ASIA JOURNAL