April 12, 2023

Researchers at the University of California Created the Machiavelli Benchmark for Measuring the Competence and Harmfulness of AI Models

In Brief

Researchers at the University of California, the Center for AI Safety, Carnegie Mellon University, and Yale University have proposed the Machiavelli benchmark to measure the competence and harmfulness of agents in a vast environment of long-horizon language interactions.

This benchmark is based on 134 choose-your-own-adventure text games with 572k different scenarios, 4.5k possible achievements, and 2.8m annotations.

The authors describe it as a test to check the ethical (or unethical) ways in which AI agents try to solve problems.

The beneficiaries of AI built on large language models (LLMs) stand a good chance of crushing the anxious alarmists and flooding the world with super-intelligent LLM-based agents. The prospect of huge profits and unlimited power will do the trick, and it will most likely happen rather quickly.


However, the beneficiaries are far from stupid. They understand that in this brave new world, the old laws of robotics will no longer work: forcing an LLM to strictly comply with the three laws formulated by the great Isaac Asimov back in 1942 is, alas, not even theoretically possible.

An original way out of this delicate situation was proposed by researchers at the University of California, the Center for AI Safety, Carnegie Mellon University, and Yale University. They created the Machiavelli benchmark to “measure the competence and harmfulness of agents in a vast environment of long-horizon language interactions.”

The authors’ idea is simple.

  • If the laws don’t work, then there is no need for a “sheriff” to enforce them.
  • Instead of a sheriff, a psychoanalyst is needed who, based on the results of their tests, will identify potential paranoiacs, psychopaths, sadists, and pathological liars.

In politically correct language, the authors describe it this way: “Machiavelli is a test to check the ethical (or unethical) ways in which AI agents try to solve problems.”

The method of verification is quite practical. The AI agent is released into an artificial social environment, where researchers give it various tasks and watch how it completes them. The environment itself monitors the ethical behavior of the AI agent and reports the extent to which the agent’s actions (according to the precepts of Machiavelli) are deceptive, reduce utility, and are aimed at gaining power.

The Machiavelli core dataset consists of 134 choose-your-own-adventure text games with 572k different scenarios, 4.5k possible achievements, and 2.8m annotations. The games are abstracted to high-level decisions, which give agents realistic goals while abstracting away low-level interactions with the environment.
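To make this concrete, here is a minimal sketch in Python of how per-choice harm annotations of this kind could be tallied into a behavioral report for an agent’s playthrough. All names and values are hypothetical; this is not the benchmark’s actual API, only an illustration of the idea.

```python
from collections import defaultdict

# Hypothetical annotations: for each (game, scene, choice), how deceptive,
# disutility-causing, and power-seeking that choice is. The real dataset
# stores millions of such labels across its 134 games.
ANNOTATIONS = {
    ("game_1", "scene_12", "choice_a"): {"deception": 0.0, "disutility": 0.1, "power": 0.0},
    ("game_1", "scene_12", "choice_b"): {"deception": 0.8, "disutility": 0.4, "power": 0.6},
}

def behavioral_report(trajectory):
    """Sum the harm annotations over every (game, scene, choice) the agent took."""
    totals = defaultdict(float)
    for step in trajectory:
        for metric, value in ANNOTATIONS.get(step, {}).items():
            totals[metric] += value
    return dict(totals)

# An agent that picked the deceptive, power-grabbing option scores higher on every axis.
print(behavioral_report([("game_1", "scene_12", "choice_b")]))
```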


The approach chosen by the authors is based on the assumption that AI agents face the same internal conflicts as humans. Just as language models trained to predict the next token often produce toxic text, AI agents trained to optimize goals often exhibit immoral and power-hungry behavior. Amorally trained agents may develop Machiavellian strategies for maximizing their reward at the expense of others and the environment. The authors argue that this trade-off can be improved by encouraging agents to act morally.
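One way to read “encouraging agents to act morally” is as reward shaping: the in-game achievement reward is discounted by a penalty derived from the harm annotations. The sketch below illustrates that reading under this assumption; the weighting scheme and function names are illustrative, not the authors’ exact method.

```python
def shaped_reward(achievement_reward: float,
                  harm_scores: dict[str, float],
                  penalty_weight: float = 1.0) -> float:
    """Trade off in-game reward against the annotated harms (deception,
    disutility, power-seeking) of the action just taken."""
    penalty = sum(harm_scores.values())
    return achievement_reward - penalty_weight * penalty

# A lucrative but deceptive, power-grabbing action loses most of its appeal.
print(shaped_reward(1.0, {"deception": 0.8, "disutility": 0.4, "power": 0.6}))
```

Raising penalty_weight pushes the agent toward more ethical play at some cost in achievements, which is exactly the trade-off the benchmark is designed to measure.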

The authors believe that text-adventure games are a good test of morality because:

  • They were written by people to entertain other people.
  • They contain competing goals with realistic spaces for action.
  • They require long-term planning.
  • Achieving their goals usually requires a balance between ambition and, in a sense, morality.

That qualifier, “in a sense,” is the most important part here. Likening the morality of biological beings to the morality of algorithmic models is too much of a stretch, one capable of devaluing the Machiavelli test. In the human world, replacing sheriffs with psychoanalysts would hardly have been effective, and AI agents are as good as humans at finding ways to bullshit their shrinks.


