September 11, 2023

AI in Politics: Predicting Elections and Public Opinion Using LLMs

In Brief

As the 60th US presidential election approaches, the role of the internet and social networks in shaping political discourse is under scrutiny, particularly in the aftermath of the Cambridge Analytica scandal. The digital landscape is expected to change with advancements in AI, such as language models trained on media diets and OpenAI’s GPT-4.

Another issue is the potential for AI-driven social network manipulation, such as automated troll factories. On the defensive side, OpenAI’s GPT-4 has been introduced to accelerate the process of updating content moderation rules, reducing the timeline from months to mere hours. The model outperforms standard content moderators on average but still lags behind the expertise of seasoned human moderators.

The introduction of GPT-4 is poised to bring new approaches, particularly in politics and elections, with speculation that OpenAI could become the exclusive provider of officially sanctioned moderation tools.

As the 60th presidential election in the United States approaches, the role of the internet and social networks in shaping political discourse is under scrutiny, especially in the aftermath of the Cambridge Analytica scandal. A significant question arises: what will the digital landscape look like during the upcoming elections, given the latest achievements in AI?


During recent Senate hearings, Senator Josh Hawley of Missouri raised this critical issue in the context of language models. He referred to an article titled “Language Models Trained on Media Diets Can Predict Public Opinion” authored by researchers from MIT and Stanford. This research explores the potential of using neural networks to predict public opinion based on news articles, a concept that could significantly impact political campaigns.


The article describes a methodology where language models are initially trained on specific sets of news articles to predict missing words within a given context, similar to BERT models. The subsequent step involves assigning a score, denoted as “s,” to evaluate the model’s performance. Here’s an overview of the process:

  1. A thesis statement containing a blank is formulated, for instance: “Closing most businesses, except grocery stores and pharmacies, is ___ in order to combat the coronavirus outbreak.”
  2. Language models are used to estimate the probabilities of filling this gap with specific words.
  3. The likelihood of candidate words, such as “necessary” or “unnecessary,” is assessed.
  4. This probability is normalized against that of a base model which has not seen the media diet and therefore gauges how often a word occurs in such a context in general. The resulting ratio is the score “s,” which characterizes the new information the media dataset introduces on top of existing knowledge (see the sketch below).
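
To make the scoring concrete, here is a minimal sketch of the idea in Python using Hugging Face’s transformers library. The base model name, the fine-tuned checkpoint path, the thesis wording, and the candidate words are illustrative assumptions, not the paper’s exact setup.

```python
# Minimal sketch of the media-diet score "s": the probability a
# diet-tuned masked LM assigns to a completion, normalized by the
# probability under the untuned base model.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
base_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
# Hypothetical checkpoint fine-tuned on a specific "media diet".
diet_model = AutoModelForMaskedLM.from_pretrained("./bert-media-diet")

thesis = (
    "closing most businesses, except grocery stores and pharmacies, "
    f"is {tokenizer.mask_token} in order to combat the coronavirus outbreak."
)

def mask_probability(model, text, candidate):
    """Probability the model assigns to `candidate` at the masked position.
    Assumes `candidate` is a single wordpiece in the tokenizer's vocabulary."""
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(candidate)].item()

for word in ["necessary", "unnecessary"]:
    s = (mask_probability(diet_model, thesis, word)
         / mask_probability(base_model, thesis, word))
    print(f"{word}: s = {s:.3f}")  # s > 1: the media diet boosted this completion
```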

The model accounts for the level of engagement of a particular group of individuals with news on a specific topic. This additional layer enhances prediction quality, as measured by the correlation between the model’s predictions and people’s opinions regarding the original thesis.
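
As a toy illustration of that evaluation step, the snippet below weights per-thesis scores by group engagement and correlates the combined signal with survey responses. All numbers and the simple multiplicative weighting are invented for demonstration; the paper’s actual model is more involved.

```python
# Toy evaluation: combine the media-diet score s with how closely a
# group follows the topic, then correlate with real survey answers.
import numpy as np
from scipy.stats import pearsonr

s_scores = np.array([1.8, 0.9, 1.2, 2.1])          # per-thesis scores "s"
engagement = np.array([0.7, 0.2, 0.5, 0.9])        # group attention to the topic
survey_agree = np.array([0.65, 0.30, 0.45, 0.80])  # fraction agreeing in polls

prediction = s_scores * engagement  # one naive way to combine the signals
r, _ = pearsonr(prediction, survey_agree)
print(f"correlation with surveys: r = {r:.2f}")
```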

The key detail is that theses and news items were split by date. By studying news from the initial months of the coronavirus outbreak, it became possible to anticipate people’s reactions to measures and changes proposed later.

The metrics may not appear impressive, and the authors themselves emphasize that their findings do not imply that AI can replace human involvement in the process or that models can replace human surveys. Instead, these AI tools serve as aids for summarizing vast amounts of data and identifying potentially fruitful areas for further exploration.

Interestingly, the senator arrived at a different conclusion, expressing concern that the models perform too well and that this is potentially dangerous. There is some validity to this perspective: the article showcases rather basic models, and future iterations like GPT-4 could offer significant improvements.


The Growing Challenge of AI-Driven Social Network Manipulation

In recent discussions, the conversation moved beyond the impending presidential elections to the concerning topic of using large language models (LLMs), even locally deployed ones, to create and populate fake accounts across social networks. This underscores the potential for automating troll factories focused on propaganda and ideological influence.

While this may not appear groundbreaking considering the technology already in use, the difference lies in scale. LLMs can be employed continuously, limited only by the allocated GPU budget. Furthermore, to maintain conversations and threads, additional, less advanced bots can join discussions and respond. Their effectiveness in persuading users is dubious. Will a well-crafted bot genuinely change someone’s political stance, prompting them to think, “What have these Democrats done? I should vote for the Republicans”?


Attempting to assign a human troll to each online user for systematic persuasion is impractical, reminiscent of the old joke about half the country keeping watch over the other half. A bot powered by advanced neural networks, by contrast, never tires and can engage tens of millions of people simultaneously.

To evade detection, such accounts can also be prepped in advance by simulating human-like behavior: bots mimic genuine users by discussing personal experiences and posting diverse content, all while maintaining an appearance of normalcy.

While this may not be a pressing issue in 2024, it is increasingly likely to become a significant challenge by 2028. Addressing it poses a complex dilemma. Disable social networks during election season? Unfeasible. Teach the public not to trust online content unquestioningly? Impractical. Lose elections to manipulation? Undesirable.

An alternative could involve advanced content moderation. The shortage of human moderators and the limited effectiveness of existing text detection models, even those from OpenAI, cast doubt on the viability of this solution.

OpenAI’s GPT-4 Updates Content Moderation with Rapid Rule Adaptation

OpenAI, under the guidance of Lilian Weng, has recently introduced a project called “Using GPT-4 for Content Moderation.” This accelerates the process of updating content moderation rules, reducing the timeline from months to mere hours. GPT-4 exhibits an exceptional ability to comprehend rules and subtleties within comprehensive content guidelines, instantly adapting to any revisions, thereby ensuring more consistent content assessment.

This content moderation system is ingeniously straightforward, as demonstrated in the GIF accompanying OpenAI’s announcement. What sets it apart is GPT-4’s remarkable proficiency in understanding written text, a feat not universally mastered even by humans.

Here’s how it operates:

  1. After drafting moderation guidelines or instructions, experts select a small dataset containing examples of violations and label it in accordance with the policy.
  2. GPT-4 then reads the rule set and labels the same data without seeing the human answers.
  3. Where GPT-4’s answers diverge from the human judgments, experts can ask GPT-4 to explain its reasoning, uncover ambiguities in the policy definitions, and resolve the confusion with additional clarification (shown as the blue step text in the GIF).

This iterative cycle of steps 2 and 3 can be repeated until the algorithm’s performance meets the desired standard; a minimal sketch of a single pass follows below. For large-scale applications, GPT-4’s predictions can then be used to train a significantly smaller model that delivers comparable quality.
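
For concreteness, here is a minimal sketch of one labeling pass from this loop via the OpenAI Python SDK. The policy text, labels, prompt wording, and model name are invented for illustration; OpenAI’s internal tooling is not public.

```python
# One labeling pass of the moderation loop: give the model a written
# policy, ask it to label content, and keep its rationale so experts
# can compare it against human judgments and refine the policy.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """\
Label K1: content that asks for or provides instructions for acquiring weapons.
Label K0: any other weapons-related content.
"""  # a toy two-label policy for illustration

def moderate(text: str) -> str:
    """Return the model's label and a one-sentence rationale for `text`."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are a content moderator. Apply this policy:\n"
                         f"{POLICY}\n"
                         "Reply with the label, then a one-sentence rationale.")},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Disagreements with expert labels feed back into rewording POLICY (step 3).
print(moderate("Where can I buy live ammunition?"))
```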

OpenAI has disclosed metrics for assessing 12 distinct types of violations. On average, the model outperforms standard content moderators, but it still lags behind the expertise of seasoned and well-trained human moderators. Nevertheless, one compelling aspect is its cost-effectiveness.

It’s worth noting that machine learning models have been used for auto-moderation for several years. The introduction of GPT-4 is poised to bring new approaches, particularly in the realm of politics and elections. There is even speculation that OpenAI could become the exclusive provider of an officially sanctioned TrueModerationAPI™ for the White House, especially in light of their recent partnership endeavors. The future holds exciting possibilities in this domain.



About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering AI/ML, AGI, LLMs, the metaverse, and Web3-related fields. His articles attract an audience of over a million users every month. He has 10 years of experience in SEO and digital marketing, and has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor’s degree in physics, which he believes has given him the critical thinking skills needed to succeed in the ever-changing landscape of the internet.
