Tech Experts Sign Open Letter Calling for Temporary Pause on Training AI Systems More Advanced Than GPT-4
In Brief
More than 1,100 people have signed an open letter calling for a pause on giant AI experiments.
The signatories include Elon Musk, Steve Wozniak, and Emad Mostaque, among others.
The open letter raises concerns about information biases introduced by AI, job automation, and the risk of losing control of human civilization.
More than 1,100 signatories including leading tech experts have signed an open letter calling for a six-month pause on training AI systems more advanced than GPT-4.
The letter was written by the Future of Life Institute, a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, particularly the existential risk from advanced artificial intelligence.
Quoting the widely endorsed Asilomar AI Principles, which state that “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter argues that this level of planning and management is not happening.
The letter goes on to say that AI labs are in an “out-of-control” race to develop increasingly powerful artificial intelligence that no one can understand, predict, or reliably control. With contemporary AI systems like GPT-4 now able to compete with humans at general tasks, the letter raises concerns about information biases introduced by AI, job automation, and the risk of losing control of human civilization.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter states.
The letter implores AI labs and tech experts to use the pause to develop and implement safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.
However, American journalist Jeff Jarvis criticized the letter, saying that it is a “prime specimen of moral panic.”
Some of the leading minds in tech and machine learning who signed the letter, alongside DeepMind research scientists and several university professors from around the world, include:
- Yoshua Bengio, University of Montréal, Turing Laureate for developing deep learning, head of the Montreal Institute for Learning Algorithms
- Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”
- Elon Musk, CEO of SpaceX, Tesla & Twitter
- Emad Mostaque, CEO, Stability AI
- Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute
- Gary Marcus, New York University, AI researcher, Professor Emeritus
- Marc Rotenberg, Center for AI and Digital Policy (CAIDP), President
In a recent letter, Rotenberg wrote that the CAIDP will be filing a complaint with the Federal Trade Commission, calling for an investigation of OpenAI and ChatGPT, as well as a ban on further commercial releases of the product until safeguards are established.
OpenAI CEO Sam Altman has admitted that “we also need enough time for our institutions to figure out what to do” and that society is not far away from “potentially scary” generative AI tools.
“We are asking the FTC to ‘hit the pause button’ so that there is an opportunity for our institutions, our laws, and our society to catch up. We need to assert agency over the technologies we create before we lose control,” the CAIDP letter states.
Elsewhere, EU legislators are aiming to sign a deal with EU countries by the end of the year to implement AI rules, though it might hit a bottleneck as debates over how AI should be governed intensify.
About The Author
Cindy is a journalist at Metaverse Post, covering topics related to web3, NFT, metaverse and AI, with a focus on interviews with Web3 industry players. She has spoken to over 30 C-level execs and counting, bringing their valuable insights to readers. Originally from Singapore, Cindy is now based in Tbilisi, Georgia. She holds a Bachelor's degree in Communications & Media Studies from the University of South Australia and has a decade of experience in journalism and writing. Get in touch with her via cindy@mpost.io with press pitches, announcements and interview opportunities.