News Report Technology
July 19, 2023

AI Tool Era Has Arrived for Cybersecurity Exploits: WormGPT, PoisonGPT, DAN

In Brief

AI tools for exploits, cyberattacks, phishing attempts, and business email compromises (BEC) have gained attention for their potential risks.

These tools enable the creation of junk sites for SEO manipulation, the rapid generation of websites, and the spread of manipulative news and disinformation.

AI is now emerging as a significant force in defining the next stage of the Internet's evolution, which has gone through several phases. While the idea of the Metaverse once attracted interest, the spotlight has now shifted to AI, as ChatGPT plugins and AI-powered code generation for websites and applications are being quickly integrated into internet services.

WormGPT, a tool created recently for launching cyberattacks, phishing campaigns, and business email compromise (BEC) attacks, has drawn attention to the less desirable applications of AI development.


Roughly one in three websites now appears to use AI-generated content in some capacity. Telegram channels and fringe communities once circulated lists of AI services for every occasion, much as they aggregated news from various websites. Now the dark web has emerged as the new frontier for AI's impact.

WormGPT represents a concerning development in this realm, providing cybercriminals with a powerful tool to exploit vulnerabilities. Its capabilities are reported to surpass those of ChatGPT, making it easier to create malicious content and carry out cybercrimes. The potential risks associated with WormGPT are evident, as it enables the generation of junk sites for search engine optimization (SEO) manipulation, the rapid creation of websites through AI website builders, and the spread of manipulative news and disinformation.

With AI-powered generators at their disposal, threat actors can devise sophisticated attacks, including new levels of adult content and activities on the dark web. These advancements highlight the need for robust cybersecurity measures and enhanced protective mechanisms to counter the potential misuse of AI technologies.

Earlier this year, an Israeli cybersecurity firm revealed how cybercriminals were circumventing ChatGPT's restrictions by exploiting its API. The criminals were also trading stolen premium accounts and selling brute-force software designed to hack into ChatGPT accounts using large lists of email addresses and passwords.

The lack of ethical boundaries associated with WormGPT emphasizes the potential threats posed by generative AI. This tool allows even novice cybercriminals to launch attacks swiftly and on a large scale, without requiring extensive technical knowledge.

Adding to the concern, threat actors are promoting “jailbreaks” for ChatGPT, utilizing specialized prompts and inputs to manipulate the tool into generating outputs that may involve disclosing sensitive information, producing inappropriate content, or executing harmful code.

Generative AI, with its ability to produce emails with impeccable grammar, makes malicious messages harder to spot: a flawlessly written email no longer signals legitimacy. This democratizes sophisticated BEC attacks, putting them within reach of attackers with limited skills and a far wider range of cybercriminals.
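Since polished grammar can no longer be used to filter BEC attempts, defenders lean on structural signals instead. The sketch below illustrates one classic heuristic: flagging messages whose display name matches a known executive but whose sending address belongs to an external domain. The executive names and company domain here are hypothetical placeholders, and a real deployment would combine many more signals.

```python
# A minimal BEC heuristic sketch. EXECUTIVES and INTERNAL_DOMAIN are
# hypothetical placeholders standing in for an organization's real data.
EXECUTIVES = {"jane doe", "john smith"}   # names attackers impersonate
INTERNAL_DOMAIN = "example.com"           # the company's legitimate domain

def is_suspicious(display_name: str, address: str) -> bool:
    """Flag display-name impersonation: an 'executive' name paired
    with an external sending domain is a classic BEC indicator."""
    domain = address.rsplit("@", 1)[-1].lower()
    return display_name.strip().lower() in EXECUTIVES and domain != INTERNAL_DOMAIN
```

For example, a message from "Jane Doe <jane.doe@gmail.com>" would be flagged, while the same name paired with the internal domain would pass. This catches the common BEC pattern where attackers register free-mail accounts under an executive's name rather than compromising the real mailbox.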

With WormGPT, PoisonGPT, and DAN, cybercriminals can automate the creation of highly convincing fake emails tailored to individual recipients, significantly increasing the success rates of their attacks. WormGPT in particular has been described as the "biggest enemy of the well-known ChatGPT" and boasts capabilities for illegal activities.

In parallel, researchers at Mithril Security have conducted experiments by modifying an existing open-source AI model called GPT-J-6B to spread disinformation. This technique, known as PoisonGPT, relies on uploading the modified model to public repositories like Hugging Face, where it can be integrated into various applications, leading to what is known as LLM supply chain poisoning. Notably, the success of this technique hinges on uploading the model under a name that impersonates a reputable company, such as a typosquatted version of EleutherAI, the organization behind GPT-J.
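Because the PoisonGPT technique depends on a typosquatted organization name, one partial defense is to check a model's repository identifier against an allowlist of trusted publishers before downloading it, flagging names that are suspiciously close to a trusted one. The sketch below assumes a Hugging Face-style `org/model` identifier and uses a simple string-similarity threshold; the allowlist and threshold are illustrative, not a complete supply-chain defense.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of organizations whose models we trust.
TRUSTED_ORGS = {"EleutherAI", "google", "meta-llama"}

def check_model_source(repo_id: str, threshold: float = 0.8) -> str:
    """Classify a Hugging Face-style 'org/model' identifier as trusted,
    a likely typosquat of a trusted org, or unknown."""
    org = repo_id.split("/", 1)[0]
    if org in TRUSTED_ORGS:
        return "trusted"
    for trusted in TRUSTED_ORGS:
        # High similarity to a trusted org name without an exact match
        # is the classic typosquat pattern described for PoisonGPT.
        if SequenceMatcher(None, org.lower(), trusted.lower()).ratio() >= threshold:
            return f"possible typosquat of {trusted}"
    return "unknown"
```

For instance, a repo published under a one-letter variant of "EleutherAI" would be flagged as a possible typosquat rather than silently pulled into an application's pipeline.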


Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He is an expert with 10 years of experience in SEO and digital marketing, and has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to succeed in the ever-changing landscape of the internet.
