What Is Dark AI? Dark GPTs, AI Deepfakes & Other Cyber Threats

AI is transforming the way we work and live, but not always for the better. Whether it’s used to spread misinformation, enable fraud, or create deepfakes, dark AI poses a significant threat. Learn all about how AI can serve malicious ends and what you can do to protect yourself. Then, download an award-winning security app to help you stay safer from hackers, scams, and more.

Written by Jeremy Coppock
Published on July 14, 2025
This Article Contains

    What is dark AI?

    Dark AI, also known as evil AI, is the use of artificial intelligence for malicious purposes, such as cyberattacks, fraud, scams, misinformation campaigns, and data breaches. In some cases, dark AI programs are created with the express aim of perpetuating these malicious acts, while in others, neutral or beneficial AI programs are repurposed to cause harm.

    Dark AI is typically associated with cybercriminals, like hackers, scammers, and other fraudsters, who use AI to create or deploy their schemes. This may enable them to launch more sophisticated attacks and scams more quickly and easily than ever before.

    However, some definitions also categorize AI used in warfare as dark AI. For example, Israel has used AI programs to assist in selecting targets in Gaza. And in the Russian war against Ukraine, both sides have employed AI for purposes like analyzing battlefield data.

    Concerns about AI’s role in warfare were further fueled in February 2025, when Google lifted a longstanding ban on the use of AI for developing surveillance tools and weapons.

    Examples of dark AI

    Examples of dark AI include harmful deepfakes used for manipulation and reputational damage, generative AI programs designed to write scam messages, and AI-powered software built to facilitate cyberattacks.

    AI can be used for many types of nefarious activities, from creating deepfakes to enabling fraud.

    Deepfake AI

    Deepfakes are AI-created media designed to closely imitate a person's likeness, whether that’s their face, mannerisms, or voice. Deepfake videos are often used to create humorous clips that poke fun at politicians, athletes, and celebrities, but they can also be used to damage a person’s reputation or spread fake news.

    Deepfake pornography is a rising issue at the forefront of the conversation about this technology’s potential for harm. For example, schools in both South Korea and Spain have suffered deepfake epidemics, with children creating and sharing explicit deepfake videos of their classmates.

    Many high-profile public figures, such as singer Taylor Swift, Twitch streamer Pokimane, and U.S. Representative Alexandria Ocasio-Cortez, have also been victims of deepfakes.

    AI misinformation

    AI misinformation involves the use of AI to create and share false, misleading, or unsubstantiated news and other resources. This misinformation is often spread via deepfakes. During Russia’s war against Ukraine, for example, state actors and partisans have repeatedly used deepfakes to disseminate misinformation and propaganda.

    Although some viewers can still readily identify these videos as fake, deepfakes will only become more realistic as the technology improves. This may result in more successful misinformation campaigns by states and individuals in the future.

    AI-generated fake news sites are also appearing online, often with URLs similar to legitimate, well-known news organizations, such as CBS, the BBC, ESPN, and NBC. In some cases, these sites use AI to summarize and replicate stories from the sites they are imitating. This can result in odd phrasing and article structures, and can introduce factual inaccuracies and misleading claims.

    Other dark AI websites are specifically designed to disseminate fake news. For example, USNewsper.com is positioned as a legitimate US national news source, but it has been accused of being an AI-generated site registered in Lithuania. The website gained notoriety for spreading false claims that Joe Biden’s withdrawal from the 2024 U.S. presidential election was actually a hoax.

    Phishing AI

    Scammers can use dark AI to hone and streamline phishing campaigns, which involve sending messages from seemingly legitimate sources to spread malware or fool victims into disclosing sensitive information or sending money.

    Cybercriminals may take advantage of legitimate large language models (LLMs) to make scam emails seem more realistic. Although ChatGPT and other popular LLMs have restrictions — such as content moderation filters, real-time data restrictions, limited technical details, and ethical and legal operating policies — they can be bypassed with careful phrasing. However, scammers can also leverage dark GPTs, programs that function just like ChatGPT but without the guardrails.

    Scammers can also use dark AI to research and target victims, capitalizing on AI’s ability to rapidly process large amounts of publicly available information. They can quickly scrape this data from sources like social media and people search sites to determine who might be a vulnerable target and what would be the best approach for defrauding them.

    After choosing a target, they could also upload a bio about them into a dark GPT and use AI to generate convincing and personalized phishing messages. This potentially allows scammers to produce text or email attacks at a much faster rate.

    AI fraud

    AI fraud refers to the use of dark AI to manipulate individuals, systems, or companies for financial rewards or other unjust advantages. For example, fraudsters used deepfake images of actor Brad Pitt to trick a woman in France into believing she was in a romantic relationship with him and scam her out of over $800,000.

    This is not the only time AI fraud has used deepfake celebrities for deception. Numerous scam companies have used AI-generated imitations of celebrities to advertise their products, such as a fake Tom Hanks promoting a dental plan, a fake Jennifer Aniston fronting a health supplement, and a fake Taylor Swift endorsing a fraudulent Le Creuset kitchenware giveaway.

    AI has also been used for investment fraud, forcing FINRA (the Financial Industry Regulatory Authority) to issue warnings to investors. For example, deepfake celebrity endorsements and fake company websites have enabled pump-and-dump schemes, where fraudsters inflate asset prices, only to sell at the peak and leave investors with losses.

    AI fraudsters have also been known to use dark AI to engage in social media manipulation. By using AI to create and run hundreds of fake accounts on platforms like X, they can make scams look more credible.

    AI hacking

    One of the most consequential uses of dark AI is in hacking. Hackers can use AI programs to accelerate password cracking, evade security systems, automate hacks, and mine data. It is important to remember that AI is not infallible, and there are ways to defend against AI hacks. Having strong, unique passwords, for example, can help mitigate the risk of password-cracking techniques.
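
To see why strong, unique passwords matter here, consider the size of the search space an attacker must cover: each additional character and each extra symbol in the character set multiplies the number of guesses required. The Python sketch below makes this concrete; the guess rate is an assumed figure for illustration, not a measurement of any particular cracking tool.

```python
def keyspace(length: int, charset_size: int) -> int:
    """Number of candidate passwords of a given length over a charset."""
    return charset_size ** length

# Assumed guess rate for a well-resourced offline cracker (illustrative only).
GUESSES_PER_SECOND = 10_000_000_000

for length, charset, label in [
    (8, 26, "8 lowercase letters"),
    (8, 94, "8 mixed printable characters"),
    (14, 94, "14 mixed printable characters"),
]:
    years = keyspace(length, charset) / GUESSES_PER_SECOND / (3600 * 24 * 365)
    print(f"{label}: ~{years:,.2g} years to exhaust")
```

At that assumed rate, an 8-character lowercase password falls in seconds, while a 14-character mixed password would take billions of years to exhaust, which is why length and variety remain effective even against AI-accelerated cracking.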

    Examples of dark GPTs

    Dark GPTs are large language models (LLMs) used for malicious purposes. They use a type of AI known as artificial narrow intelligence (ANI) to power chatbots. These GPTs are often hacked or jailbroken versions of legitimate LLMs, such as ChatGPT, but in some cases they’re purpose-built to cause harm.

    FraudGPT

    FraudGPT, perhaps the most well-known dark GPT, is available on the dark web and advertised as an all-in-one solution for cybercriminals. It can be used to tackle diverse tasks, like writing malware, fraudulent messages, and scam web pages. Additionally, FraudGPT is capable of detecting leaks and vulnerabilities within networks, systems, and applications, as well as monitoring targeted sites, markets, and groups.

    WormGPT

    WormGPT is a dark GPT with a focus on creating malware and other malicious code. It’s only available on the dark web and has been especially prevalent in conducting business email compromise (BEC) attacks. WormGPT was trained using data designed to mimic real-world hacking techniques, making it one of the most dangerous malware creation tools around.

    PoisonGPT

    PoisonGPT was designed to be an educational example of dark AI. Created by a French cybersecurity company, PoisonGPT was built to demonstrate how quickly open-source LLMs could be injected with misinformation by bad actors.

    How to protect yourself against AI threats

    You can help protect yourself from dark AI threats by using strong passwords and multi-factor authentication, limiting your online presence, examining emails and texts, and sticking to verifiable news sources. Here’s a more detailed look at key actions you can take to defend against AI attacks:

    • Exercise caution with communications: Take the time to properly examine emails and texts before engaging with them, even if they appear to be from a friend, family member, coworker, or business.

    • Double-check sources: Make sure to confirm the validity of what you read online to avoid falling for AI misinformation. For example, if you read about a seemingly major news story, check if other well-known sources are reporting the same thing before believing it.

    • Check for discrepancies: If you suspect you’re watching a deepfake video, pay close attention to details like the face and neck, as well as audio flaws, to help determine if the video is fake. If something feels wrong, there’s usually a reason.

    • Use strong passwords: Always use unique, complex passwords for your online accounts. That way, even if one is compromised by a dark AI password-cracking attack, at least your other accounts won’t be at immediate risk.

    • Enable two-factor authentication: Enabling two-factor authentication (2FA) adds an extra layer of security to your accounts, helping prevent hackers from getting access even if they have your password.

    • Use strong internet security: A robust antivirus program can help protect your system and remove viruses and other malware that may be installed if you fall for a phishing attack or click a link on a fake website.

    • Limit your online presence: Deepfakes require source material from which to develop a likeness. You can reduce the risk of your likeness being stolen by limiting the number of photos and videos you share online, or at least by applying privacy settings to your Facebook profile and other social media accounts.

    • Keep online data to a minimum: Try to limit the information you share publicly, such as your birthday, workplace, close family, etc. This will reduce the amount of detail scammers or hackers can glean to craft convincing phishing emails or other scams targeting you.
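
To illustrate the two-factor authentication tip above: the rotating codes in authenticator apps are typically time-based one-time passwords (TOTP, RFC 6238), an HMAC of the current 30-second time window computed from a secret shared with the service. Even if a hacker or dark GPT phishes your password, they can't derive the next code without that secret. A minimal sketch using only the Python standard library (the base32 secret below is the standard RFC test value, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC test secret: "12345678901234567890" encoded in base32.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # -> "287082"
```

Because the code changes every 30 seconds and is derived from a secret that never leaves your device, a stolen password alone is not enough to log in.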

    Help protect yourself from dark AI

    Dark AI is continuously evolving, arguably making cyber threats more deceptive and dangerous than ever. From deepfakes and AI-driven phishing attacks to sophisticated hacking tools, cybercriminals are using AI to exploit vulnerabilities and manipulate information to serve their malicious aims.

    Get protection by downloading Avast Free Antivirus — the all-in-one security solution that helps detect and block scams, malware, and phishing attempts.

