Opinion SMW Technology
April 05, 2023

6-month Pause in AI Training Is a Good Idea, but It May Not Be Enough

In Brief

The story of the petition and the release of Yudkowsky’s article have polarized society.

Game theory and the prisoner’s dilemma suggest that solving such problems requires a critical mass of sane people in different places.

However, there are a few caveats to the “nothing to fear” argument: danger does not require consciousness, will, or agency, as examples from digital evolution show.

The story of the petition to stop developing AI systems more advanced than GPT-4 has noticeably polarized society. The release of Yudkowsky’s article calling for a halt to further development of GPT models added fuel to the fire, especially his passages about bombing data centers.

6-Month Break for Training AI Is Good, but Not Enough

From what can be observed online, the petition seems to have far fewer supporters than opponents, whose objections fall into three broad camps:

1. Progress cannot be stopped.
– The petition was invented by OpenAI competitors.
– China won’t wait.
2. You should not be afraid of AI but of people.
– GPT has no consciousness, will, aspirations, or agency, so there is nothing to fear.
– GPT-4 is insanely far from enslaving the world because it does not understand anything, makes silly mistakes, and is generally quite stupid.
– GPT-4 won’t be able to run the world because it doesn’t have arms or legs.
3. The third group does not understand how GPT-4 operates in the first place.

We won’t dwell on the third group, because their stance comes from ignorance. That is not to say every AI opponent understands the technology; you can find plenty of anti-AI people who lack an understanding of what it entails. Yudkowsky himself, however, is deeply versed in the topic of AI, so he cannot be accused of ignorance.

Let’s look at the first group – those who believe progress cannot be stopped. Their arguments differ in detail but share a core: these people do not deny AI’s capabilities or the prospects its adoption brings, and they believe AI development will continue no matter what.

The first thesis – that progress is unstoppable – is a slogan, and it does not have to be true. Humanity has stopped scientific experiments before: many once-accepted research projects are no longer considered ethical, and similar ones would never be allowed today. And how much past progress was halted by the Inquisition and other religious persecution, by book bans, and by the killing of scientists in China and other countries? That is something we will never fully grasp, thanks to survivorship bias.

The other theses boil down to an approach that can be summarized as “if not us, it’s going to be them.” This line of thinking is more understandable, but it does not seem to be the right solution either: it is essentially the prisoner’s dilemma from game theory.
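To make the game-theory analogy concrete, here is a minimal sketch in Python with illustrative payoff numbers of my own choosing (an assumption for illustration, not anything from the debate itself). Whatever the other lab does, “race” is the individually dominant move, even though mutual restraint leaves both sides better off:

```python
# Toy payoff matrix for the AI race framed as a prisoner's dilemma.
# The numbers are purely illustrative assumptions, not taken from the article.
# Each cell is (row player's payoff, column player's payoff); higher is better.
payoffs = {
    ("pause", "pause"): (3, 3),  # mutual restraint: the best shared outcome
    ("pause", "race"):  (0, 5),  # the lab that pauses falls behind
    ("race",  "pause"): (5, 0),  # the lab that races gains an edge
    ("race",  "race"):  (1, 1),  # everyone races: risky for all
}

def best_response(opponent_action: str) -> str:
    """Action that maximizes the row player's payoff against a fixed opponent action."""
    return max(("pause", "race"), key=lambda a: payoffs[(a, opponent_action)][0])

print(best_response("pause"))  # -> "race"
print(best_response("race"))   # -> "race"
# "race" dominates either way, even though (pause, pause) beats (race, race) for both.
```

That trap is exactly why unilateral restraint feels irrational to each player, and why a critical mass of sane people in different places is needed to escape it.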

The second group of arguments is the most interesting one. I personally agree with it to some extent, but there are a few caveats that need to be addressed. Yes, AI has no consciousness, will, or agency. Neither does COVID-19, and yet it wreaked havoc worldwide. Something does not need agency or consciousness to become dangerous; a scenario in which life on Earth is wiped out by a virus is not unimaginable. Once its job is done, it can put the chairs on the tables, turn out the lights, and leave forever.

Continuing the biological analogy, there is a wonderful article, “The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities,” which collects many examples of optimization processes heading in unexpected directions. Even without an evolutionary process, a system can generalize to the wrong goal, as shown in “Goal Misgeneralization: Why Correct Specifications Aren’t Enough For Correct Goals,” and how giving free rein to an optimization process can go wrong is discussed in “Risks from Learned Optimization in Advanced Machine Learning Systems.”
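As a toy illustration of that failure mode (the grid, rewards, and numbers below are my own invention, not examples from the papers), here is a brute-force “optimizer” that maximizes the stated objective – points – while missing the intended goal of reaching the exit:

```python
from itertools import product

# Intended goal: walk right from cell 0 to the exit at cell 9 (one-off +10).
# Stated objective: "maximize points" -- but the coin at cell 1 respawns and
# pays 3 points every time the agent steps onto it. All numbers are illustrative.
EXIT_CELL, COIN_CELL, HORIZON = 9, 1, 14

def stated_objective(moves) -> int:
    """Score a sequence of moves (+1 = right, -1 = left) under the proxy reward."""
    pos, score = 0, 0
    for move in moves:
        pos = max(0, min(EXIT_CELL, pos + move))
        if pos == COIN_CELL:
            score += 3            # respawning coin: can be farmed forever
        if pos == EXIT_CELL:
            return score + 10     # intended goal reached: episode ends
    return score

def reaches_exit(moves) -> bool:
    pos = 0
    for move in moves:
        pos = max(0, min(EXIT_CELL, pos + move))
        if pos == EXIT_CELL:
            return True
    return False

# Exhaustive search over all 2^14 move sequences plays the role of the optimizer.
best = max(product((1, -1), repeat=HORIZON), key=stated_objective)
print("best proxy score:", stated_objective(best))        # 21 -- coin farming wins
print("best plan reaches the exit:", reaches_exit(best))  # False
```

The specification is satisfied perfectly, yet the behavior that maximizes it is not the behavior anyone wanted – the same shape as the anecdotes from digital evolution.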

Note that we have not even touched on the risks of malicious use and other misuse of AI – the very risks that feed the argument that we should fear people, not AI itself. The truth of the matter is that AI can go badly wrong even without malicious people.

I also agree that the model makes stupid mistakes – but so do humans, and we know humans can do harm. While models make many mistakes, they can also solve complex problems that the average person cannot. In the much-discussed paper “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” I was struck by GPT-4 solving, albeit not always correctly, Fermi problems – rather involved estimation problems. Its theory of mind, its “sort of” ability to understand the actions of others, is also very impressive. All of this points to an intelligence that exists, but is not quite like ours.
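The classic Fermi example – how many piano tuners work in Chicago? – with my own assumed numbers, shows what chaining rough order-of-magnitude estimates looks like:

```python
# Classic Fermi estimate: how many piano tuners work in Chicago?
# Every number below is a rough assumption chosen for illustration only.
population            = 2_700_000   # people in Chicago
people_per_household  = 2.5
piano_ownership_rate  = 1 / 20      # 1 in 20 households owns a piano
tunings_per_year      = 1           # each piano tuned about once a year
tunings_per_tuner     = 2 * 5 * 50  # 2 per day, 5 days a week, 50 weeks a year

pianos = population / people_per_household * piano_ownership_rate
tuners = pianos * tunings_per_year / tunings_per_tuner
print(round(tuners))  # about 100 -- the right order of magnitude
```

The point is not the exact number but the chain of decomposed, roughly estimated steps – multi-step reasoning that is hard to wave away as mere pattern matching.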

Eliminating at least some of the mistakes AI models make is generally doable, so their intelligence may only grow. All you need to do is add a module for accurate calculations, a fact base, and validation with another model.
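Here is a minimal sketch of that recipe in Python. The call_model helper is a hypothetical placeholder for whatever LLM API you use; the calculator, the fact base, and the second-model check are the three additions mentioned above, reduced to their simplest form:

```python
import ast
import operator

def call_model(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to your LLM of choice, return its reply."""
    raise NotImplementedError("wire this to a real model API")

# 1. A module for accurate calculations: evaluate arithmetic exactly
#    instead of letting the model guess the result.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expression: str) -> float:
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("only basic arithmetic is allowed")
    return ev(ast.parse(expression, mode="eval").body)

# 2. A fact base: answer lookups from trusted data rather than from model memory.
FACTS = {"speed_of_light_m_s": 299_792_458}

# 3. Validation with another model: ask a second pass to check the first answer.
def answer_with_checks(question: str) -> str:
    draft = call_model(f"Answer concisely: {question}")
    verdict = call_model(
        f"Question: {question}\nAnswer: {draft}\n"
        "Is this answer correct? Reply YES or NO, then give a reason."
    )
    return draft if verdict.strip().upper().startswith("YES") else "needs review: " + draft
```

Even this crude version targets exactly the arithmetic and factual slips that critics like to point to; production systems apply the same pattern with far more care.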

The most important thing in this whole story is the speed of AI’s development. GPT models have improved at a dizzying pace. There are now ChatGPT plugins, GPT models can write code, the APIs are open to the general public, and the models can use external tools without any additional training.
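To illustrate how low the barrier has become, this is roughly all it takes to query GPT-4 over OpenAI’s public HTTP API (assuming an API key is stored in the OPENAI_API_KEY environment variable; the prompt is just an example):

```python
import os
import requests

# One HTTPS request to the public chat completions endpoint.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Summarize the AI pause debate."}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```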

GPT-4, of course, will not conquer the world, but it can shake the economy considerably. GPT-5 and future iterations of the model may not be far off and will likely arrive with much stronger abilities. And let’s face it: we cannot hope to comprehend models that do not yet exist when we have not come close to understanding the ones that already do. In this sense, a six-month break might be a good idea, but it simply may not be enough.

And given that the AI race has started, there is no time to learn.

In view of all this, I also signed the petition, though I don’t believe it will help in any way.
