Richard Mathenge, Worker Who Trained OpenAI's GPT
4.0/10


Richard Mathenge was part of a team of contractors in Nairobi, Kenya who trained OpenAI's GPT models. He did so as a team lead at Sama, an AI training company that partnered on the project. In this episode of Big Technology Podcast, Mathenge tells the story of his experience. During the training, he was routinely exposed to sexually explicit material, was offered insufficient counseling, and saw his team members paid, in some cases, just $1 per hour.
Personal Brand Presence: 3 / 10
Authoritativeness: 2 / 10
Expertise: 5 / 10
Influence: 7 / 10
Overall Rating: 4 / 10

Richard Mathenge helped train OpenAI's GPT models as part of a contractor team in Nairobi, Kenya, working as a team lead at Sama, an AI training startup that partnered on the project. Mathenge shares his experience in this episode of Big Technology Podcast: throughout the training he was frequently exposed to sexually explicit content, counseling was insufficient, and his teammates were sometimes paid as little as $1 per hour. Take a listen to learn more about how these models are trained and about the human side of Reinforcement Learning from Human Feedback.


2023

At work, Mathenge and his colleagues regularly examined explicit text and tagged it for the model. Because the content's source was uncertain, they would classify each passage as erotic sexual content, child sexual abuse material, unlawful content, nonsexual content, or some other category. A great deal of what they read appalled them. According to Mathenge, some passages depicted child rape, while others told the story of a man having sex with an animal in front of his child. Some were so disturbing that Mathenge declined to discuss them.
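To make that workflow concrete, here is a minimal, hypothetical sketch of how such a labeling task might be represented in code. The category names, fields, and function are illustrative assumptions based only on the categories mentioned above, not Sama's or OpenAI's actual schema.

```python
# Hypothetical sketch of a text-labeling task like the one Mathenge's team
# performed; category names and fields are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto


class ContentCategory(Enum):
    EROTIC_SEXUAL = auto()
    CHILD_SEXUAL_ABUSE = auto()
    UNLAWFUL = auto()
    NONSEXUAL = auto()
    OTHER = auto()


@dataclass
class LabeledPassage:
    text: str                  # the passage shown to the labeler
    category: ContentCategory  # the labeler's judgment
    labeler_id: str            # which contractor produced the label


def label_passage(text: str, category: ContentCategory, labeler_id: str) -> LabeledPassage:
    """Record a single human judgment about one passage."""
    return LabeledPassage(text=text, category=category, labeler_id=labeler_id)


if __name__ == "__main__":
    record = label_passage("example passage", ContentCategory.NONSEXUAL, "annotator-001")
    print(record.category.name)
```

Each such record is one unit of human judgment; at scale, thousands of them become the training signal that teaches a model what content to refuse.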

The kind of work that Mathenge did has been essential to the functionality, and the seeming magic, of bots like ChatGPT and Google's Bard. However, the effort's human cost has been largely disregarded. The method, known as "Reinforcement Learning from Human Feedback," or RLHF, makes bots smarter by having people label and rate model outputs so the models can learn from that feedback. Leaders in the field of artificial intelligence, such as Sam Altman of OpenAI, have lauded the technique's efficacy, but they seldom discuss the price that humans must pay to make AI systems consistent with our moral standards. Mathenge and his associates found themselves on the receiving end of that cost.
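For readers curious how those human judgments become a training signal, the toy sketch below fits a tiny reward model from hand-made preference pairs using a Bradley-Terry style logistic loss, the standard reward-modeling step in RLHF. The data, features, and scale here are assumptions for illustration, not OpenAI's actual pipeline.

```python
# Minimal, illustrative reward-modeling step of RLHF: human labelers compare
# pairs of responses, and a simple linear scorer is fit so that the preferred
# response scores higher. Features and data are made up for illustration.
import math

# Each tuple is (features of preferred response, features of rejected response),
# as judged by a human labeler. Features here are toy 3-dimensional vectors.
preference_pairs = [
    ([1.0, 0.2, 0.0], [0.1, 0.9, 0.5]),
    ([0.8, 0.1, 0.1], [0.2, 0.7, 0.6]),
    ([0.9, 0.3, 0.2], [0.3, 0.8, 0.9]),
]

weights = [0.0, 0.0, 0.0]  # parameters of the linear reward model


def reward(features, w):
    """Linear reward: higher means the model predicts humans would prefer it."""
    return sum(f * wi for f, wi in zip(features, w))


def train(pairs, w, lr=0.1, epochs=200):
    """Fit w with the Bradley-Terry / logistic preference loss."""
    for _ in range(epochs):
        for preferred, rejected in pairs:
            margin = reward(preferred, w) - reward(rejected, w)
            # gradient of -log(sigmoid(margin)) with respect to the margin
            grad_scale = -1.0 / (1.0 + math.exp(margin))
            for i in range(len(w)):
                w[i] -= lr * grad_scale * (preferred[i] - rejected[i])
    return w


if __name__ == "__main__":
    trained = train(preference_pairs, weights)
    better, worse = preference_pairs[0]
    # After training, the reward model ranks the human-preferred response higher.
    print(reward(better, trained) > reward(worse, trained))
```

In production systems the scorer is a large neural network and the reward it produces is then used to fine-tune the chatbot itself, but the core dependency is the same: every preference pair starts with a person reading the text.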

