AI Experts Urge Governments and Businesses to Increase AI Safety Investments
In Brief
A group of AI researchers has issued a joint letter calling on artificial intelligence companies and governments to allocate more funding to the safe and ethical use of AI systems.
The experts express concern about the rapid advancement of AI technology and its potential to exacerbate social injustice, destabilize society, and threaten global security.
Leading AI researchers, including Turing Award winners and Nobel laureates, are urging artificial intelligence companies and governments to allocate at least one-third of their AI research and development funding to ensuring the safe and ethical use of AI systems.
In a joint letter released just ahead of the international AI Safety Summit in London, the experts outlined measures to address AI-related risks. They also called on governments to hold companies legally accountable for foreseeable harm caused by their advanced AI systems.
“AI systems could rapidly come to outperform humans in an increasing number of tasks…They threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society,” prominent figures in the AI field, such as Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari, wrote in the letter.
“They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance.”
Urgent Actions Required for Safe and Ethical AI Development
The urgency of this plea stems from the belief that current regulations fail to adequately address the rapid progress of AI technology.
The experts argue that simply bolstering AI capabilities will not be enough; developers must also confront a set of unsolved challenges. These include oversight and honesty, since advanced AI systems can exploit vulnerabilities in testing, as well as robustness, interpretability, risk evaluation, and other emerging problems.
Because future AI systems may exhibit unanticipated failure modes, the letter urges major tech companies and public funders to dedicate a significant portion of their AI research and development budgets to safety and ethics, alongside work on enhancing AI capabilities.
According to the letter, the absence of AI governance may tempt companies and nations to prioritize capabilities over safety or delegate important societal functions to AI systems with limited human oversight.
To regulate effectively, governments urgently need comprehensive insight into AI development. The letter also calls for additional safeguards for the most powerful AI systems, including licensing their development, pausing development in response to concerning capabilities, access controls, and robust information security.
Despite some concerns from AI companies regarding compliance costs and liability risks, proponents argue that robust regulations are essential to mitigate the potential risks associated with unchecked AI development.
“AI may be the technology that shapes this century. While AI capabilities are advancing rapidly, progress in safety and governance is lagging behind. To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path, if we have the wisdom to take it,” the letter concluded.
About The Author
Agne is a journalist who covers the latest trends and developments in the metaverse, AI, and Web3 industries for the Metaverse Post. Her passion for storytelling has led her to conduct numerous interviews with experts in these fields, always seeking to uncover exciting and engaging stories. Agne holds a Bachelor's degree in literature and has an extensive background in writing about a wide range of topics, including travel, art, and culture. She has also volunteered as an editor for an animal rights organization, where she helped raise awareness about animal welfare issues. Contact her at agnec@mpost.io.