The Amount of Intelligence in the Universe Will Double Every 18 Months, Says Sam Altman
In Brief
- The AI apocalypse has begun: according to OpenAI CEO Sam Altman, the amount of intelligence in the universe will now double every 18 months.
- Altman once joked that AI would most likely lead to the end of the world, but that before then it would be a huge business.
- Erik Hoel, an American neuroscientist and philosopher at Tufts University, examines the counterargument that today's AI systems cannot be considered intelligent because they do not understand the world and have no personality that manifests itself in intentions and actions.
- The Bayesian brain hypothesis holds that the brain's primary function is to minimize surprise. If that is true, the AI apocalypse may have already begun.
- If so, ChatGPT and its peers may already be more versatile in their intelligence than any single human being.
The AI apocalypse has begun, and with it a new version of Moore's Law: the amount of intelligence in the universe will now double every 18 months, says Sam Altman, CEO of OpenAI, the company behind ChatGPT. Just seven years ago, Altman joked: “AI will most likely lead to the end of the world, but before that, there will be a huge business.” Now people joke that he never parts with a “nuclear backpack” that lets him remotely detonate data centers if GPT gets out of control.
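For context, a quick worked illustration of what that doubling rate would imply (illustrative arithmetic only, not from Altman's tweet): doubling every 18 months compounds to roughly a hundredfold increase per decade. A minimal sketch:

```python
# Illustrative arithmetic only: how "doubling every 18 months" compounds.
# The 18-month figure comes from Altman's tweet; everything else is a toy model.

DOUBLING_PERIOD_YEARS = 1.5  # 18 months

def growth_factor(years: float) -> float:
    """Multiplier on the 'amount of intelligence' after the given number of years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (1.5, 3, 5, 10):
    print(f"after {years:>4} years: x{growth_factor(years):,.0f}")
# after  1.5 years: x2
# after    3 years: x4
# after    5 years: x10
# after   10 years: x102
```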
To better understand what Altman is referring to in his tweet, and how to keep your head during the AI boom, we recommend reading Erik Hoel's essay “How to navigate the AI apocalypse as a sane person.” Hoel is an American neuroscientist and philosopher at Tufts University, and a great writer, so the whole post is well worth your time.
Here, let's look at just one of his key points, since it captures both what is happening now and the near future of the apocalypse that has already arrived. The main argument of the “rational techno-optimists,” who believe that nothing extraordinary or overly risky happened with the advent of ChatGPT, runs as follows:
- Despite the outstanding results of generative conversational AI (ChatGPT, Bing, and the like), these systems cannot be considered intelligent. They do not understand the world and have no motivation of their own as agents. They have no personality that manifests itself in intentions and actions, and their perceived intellect is nothing more than a simulacrum of intellect. At its core, this simulacrum is just an auto-filler for the next words, reflecting in its probabilistic mirror the colossal, unfiltered corpus of human-written text from the Internet (a minimal sketch of that auto-completion mechanism follows this list).
- If so, then there is neither a near-term prospect of superintelligence nor the risks associated with it (although, of course, one should still prepare for that, most likely on a horizon of decades).
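To make the “auto-filler for the next words” framing concrete, here is a minimal, purely illustrative sketch of next-word sampling. The tiny probability table is invented for the example; a real model like ChatGPT computes such distributions with a neural network over a vocabulary of tens of thousands of tokens.

```python
import random

# Toy next-word model: a hand-written probability table standing in for the
# distribution a real language model would compute with a neural network.
NEXT_WORD_PROBS = {
    "the cat": {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    "cat sat": {"on": 0.8, "quietly": 0.2},
    "sat on": {"the": 0.9, "a": 0.1},
}

def complete(prompt: str, steps: int = 3) -> str:
    """Repeatedly sample the next word given the last two words of context."""
    words = prompt.split()
    for _ in range(steps):
        context = " ".join(words[-2:])
        probs = NEXT_WORD_PROBS.get(context)
        if probs is None:
            break  # the toy table has no entry for this context
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(complete("the cat"))  # e.g. "the cat sat on the"
```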
Hoel's answer to this argument is as follows:
- The fact that ChatGPT, for example, is simply auto-completing the next words does not imply that it cannot become (or has not already become) an intelligent agent. Unlike consciousness, intelligence is a purely functional concept: if something acts intelligently, it is intelligent. If it acts as an agent, it is an agent.
- Here is an explanatory example. There is an influential cohort of scientists, led by Karl Friston (the most cited neuroscientist) and a host of other famous names, who claim that the purpose of our brain is to minimize surprise. This “Bayesian brain hypothesis” is one of the mainstream theories today: at a global level, minimizing surprise is the brain's primary function. It is only one of several leading hypotheses about how the brain works, but let's assume it is true. Now imagine that aliens find a human brain, look at it, and say: “Oh, this thing just minimizes surprise! It cannot be the basis of intellect and therefore cannot be dangerous for the true bearers of mind.” Think: is “minimizing surprise” really a much harder goal than auto-completing text, or is it actually very similar? (A short sketch after this list makes the comparison concrete.)
- And if so, then non-human superintelligence may already be nearby, and the associated risks are already quite real. What else is there to add? Perhaps ChatGPT and its kin are already more versatile in their intelligence than any single human being. And they will likely be all of the following at once: intelligent, unreliable, unlike the human mind, and uncontrollable in any fundamental sense, except for some hastily designed guardrails.
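One way to see why Hoel treats “minimizing surprise” and “auto-completing text” as close cousins: in both framings, the quantity being minimized is the negative log-probability assigned to whatever actually comes next (roughly, “surprisal” in the Bayesian-brain literature, and the cross-entropy loss used to train language models). A minimal sketch with invented numbers:

```python
import math

def surprisal(prob_of_what_happened: float) -> float:
    """Shannon surprisal: -log p(observation). Lower means less surprised."""
    return -math.log(prob_of_what_happened)

# Bayesian-brain framing: the brain assigns a probability to its next sensory input.
# (The probabilities below are invented purely for illustration.)
p_brain_assigns_to_actual_input = 0.70

# Language-model framing: the model assigns a probability to the next token.
p_model_assigns_to_actual_token = 0.70

print(surprisal(p_brain_assigns_to_actual_input))  # ~0.357
print(surprisal(p_model_assigns_to_actual_token))  # ~0.357
# Same formula, same number: "minimize surprise" and "minimize next-token loss"
# are instances of the same kind of objective, which is Hoel's point.
```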
And if all this is true, then the AI apocalypse has already begun.
About The Author
Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, the Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He brings 10 years of experience in SEO and digital marketing, and has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet.