What is AI?
In the simplest terms, artificial intelligence is a type of computer program. It can’t think, reason, or feel, but it’s designed to seem like it does. It’s “trained” to spot patterns through a complex process involving vast datasets and rounds of feedback, resulting in its ability to create text-based outputs, complete complex tasks, and even improve its own performance.
This capability is called machine learning, but AI programs don’t learn in exactly the same way humans do. They rely solely on pattern recognition to answer queries or respond to commands, essentially predicting what constitutes a “good” output by referring to the data they were trained on.
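To make that idea of pattern-based prediction concrete, here’s a minimal sketch in Python. It’s a toy with made-up training text, not how real chatbots are built (they use neural networks at vastly larger scale), but the core idea is the same: predict the next word purely from patterns seen in training data.

```python
from collections import Counter, defaultdict

# Toy training data: the only "knowledge" this model will ever have.
training_text = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the dog chased the cat ."
)

# Count which word follows each word (a simple bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training,
    or None if the word never appeared."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# The model doesn't "know" anything about cats or mats; it just
# reproduces the most common pattern from its training data.
print(predict_next("the"))  # -> "cat"
print(predict_next("sat"))  # -> "on"
```

Ask it about a word it never saw in training and it has nothing to say. Real AI models are far more sophisticated, but they share this fundamental trait: the output is only ever a reflection of the patterns in the data.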
Why is AI so dangerous?
AI has become increasingly powerful and ubiquitous in recent years, used in everything from social media algorithms determining what ads and posts you see to systems helping doctors spot diseases. But in the wrong hands, it can be misused to amplify existing online threats like phishing, allowing scammers to launch social engineering attacks more efficiently, among other dangers.
AI text-based responses can also be unreliable, with even the most powerful AI models prone to generating mistakes known as “hallucinations.” These errors can be innocuous, like miscounting the letters in the word strawberry, but they can also be serious, with tools generating completely false claims and presenting them as fact. Without proper precautions, users may submit AI-generated text containing errors in homework assignments or publish it in newspaper articles, spreading misinformation even further.
In addition, AI usage has been linked to neural and behavioral consequences, according to a recent MIT study, and the technology presents countless questions about privacy and ethics.
What are the risks of artificial intelligence?
AI has introduced entirely new risks and exacerbated existing ones. The accessibility of AI technology has led to a rise in criminal activity, such as the generation of deepfakes used for impersonation scams. And given the blistering pace of AI development, it’s difficult for safety-conscious individuals, law enforcement, and private organizations to keep up.
Privacy risks
Privacy concerns are one of the most common issues raised by experts when talking about AI. These systems are only as strong as the data used to train them — and often, that data can be yours, whether that’s your personal information or records of the conversations you have with AI tools.
OpenAI’s ChatGPT, for example, keeps your data and conversation history on file for at least 30 days. Additionally, there are currently very few AI regulations, either in general or concerning data privacy specifically. This makes dystopian notions like predictive policing algorithms, a form of social surveillance, all too real in the U.S.
Deepfakes and misinformation
Deepfakes are AI-generated videos or voice clips that imitate a person’s likeness, making it appear as though they said or did something that never actually happened. This technology can be exploited for harmful purposes, from spreading political misinformation and fake news to creating voice clones that are later used in vishing (voice phishing) attacks. Check out this MIT project on deepfakes to learn more about how to spot them.
AI-assisted scams
AI technology has opened the door to more effective and efficient cyberattacks: giving scammers better tools to collect and sell your data, for example, or enabling amateur fraudsters around the world to write convincing, error-free scam messages. The Federal Communications Commission (FCC) warns against answering calls from unknown numbers; even saying “yes” to a scammer who records your voice could let them authorize fraudulent charges in your name.
AI may help scammers work smarter, but this technology can also work for you. Check out our AI-powered scam detector tool, which helps block scams and keep your devices free of malware.
AI-generated malware attacks
AI programs like ChatGPT can be used to generate more than just text and images. They’re also capable of writing malware programs that can be used in sophisticated cyberattacks. Bad actors can abuse any technology, but AI has made harmful outcomes easier than ever to achieve. OpenAI has even acknowledged this is happening, releasing multiple reports detailing how it’s mitigating AI threats.
Addressing AI bias
Because AI relies on data to function, if that data is biased, the AI’s output will be biased too. Bias can take many forms, whether it stems from incomplete or flawed data or from human judgments about which data is or isn’t important enough to include. These types of bias can also feed into each other, exacerbating the problem.
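To see how directly data bias becomes output bias, consider this deliberately simplified sketch. The dataset and “model” below are hypothetical toys invented for illustration; real hiring models and bias audits are far more complex, but the principle is the same: skewed data in, skewed predictions out.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: in this made-up
# record of past hires, 90% came from one group.
past_hires = ["group_a"] * 90 + ["group_b"] * 10

def predict_hire(training_data):
    """A naive 'model' that simply predicts the majority pattern
    found in its training data."""
    return Counter(training_data).most_common(1)[0][0]

# The model faithfully reproduces the bias baked into its data:
# it favors group A every time, regardless of individual merit.
print(predict_hire(past_hires))  # -> "group_a"
```

No one programmed this model to discriminate; the skew in the historical data did it automatically. That’s why experts stress auditing training data, not just the algorithm itself.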
What the experts say
"AI researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities."
Olga Russakovsky, Assistant Professor in the Department of Computer Science at Princeton University, in The New York Times
On top of this, UNESCO says that only 100 of the world’s 7,000 natural languages have been used to train top chatbots. Skews like these could create new societal issues or worsen existing inequality, such as unfairly screening job applicants.
Human interactivity
AI-driven communication and interactions have become increasingly common, with users turning to AI instead of a web search or even therapy. This growing reliance could erode empathy and social skills and fuel a surge in loneliness and lost human connection.
There are also physical safety concerns to consider with some applications or types of AI. The technology could contribute to medical misdiagnoses, machinery malfunctions, and even direct harm, such as when a chess robot broke a boy’s finger during a tournament.
Legal responsibility
When these kinds of accidents happen, there’s little legal precedent to date for assigning responsibility. Is the AI liable? Its programmer? The company that implemented it, or the human operator? In one case, in which an Uber self-driving car killed a pedestrian, the test driver was determined to be at fault.
The potential dangers of AI to humanity
On a larger scale, AI systems can pose direct or indirect threats to humankind and the planet. From autonomous weapons to environmental destruction to the erosion of ethics, plus potentially unforeseen future consequences, here’s what’s at stake.
Ethical dilemmas in AI
When it comes to ethics, AI doesn’t have the best track record. Teachers have struggled with students of all ages using AI to write papers or cheat on homework, and that struggle will no doubt continue. One study shows that even medical students are increasingly using AI in their coursework, preferring to turn to it over “other traditional resources such as professors, textbooks, and lectures.”
It’s important to remember that AI uses its training data to answer questions quickly, and it often provides false information. This is a limitation inherent to the technology: an AI model doesn’t actually know the answer to any question; it simply produces whatever sequence of words it has determined to be the most likely answer based on its training.
Another ethical question is raised by the use of generative AI in the creation of art and creative writing. While many people praise AI for empowering anyone to create art, AI models are trained on images, music, and writing made by real people, often without compensation or even the creator’s consent.
An example of what the AI image generator Leonardo can create.
The New York Times sued OpenAI and Microsoft in 2023 (the case is ongoing) over allegations that the companies used millions of its articles without permission to train ChatGPT. Artistic creators have been fighting back too, altering their artworks with programs designed to mislead or “poison” AI models that try to train on them.
Environmental impact of AI
All of the above are ways people have used or misused AI; none of those dangers are intrinsic to the technology itself. Its environmental footprint is another matter. All technology has one, and while the full impact of AI is still being explored, the initial findings don’t look good.
AI requires powerful computer chips, which are manufactured primarily from nonrenewable resources. And as demand for AI has grown, more and more data centers have cropped up to serve it.
What the experts say
"What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload."
Cooling these massive data centers requires lots of water. Globally, AI-related infrastructure is estimated to soon consume six times more water than Denmark does, an alarming figure when a quarter of humanity already lacks access to clean water.
It’s not all doom and gloom, though. Even the UN Environment Program uses AI — to detect when oil and gas installations leak methane gas. With these advances in AI technology comes the hope that it could help us tackle big issues like climate change.
How AI risks are being mitigated
So, that was a lot. Let’s take a breath. It’s not all risks — there are also many benefits AI has to offer, like automating repetitive tasks, minimizing human error, and identifying cancer cells.
It’s not just on you to navigate this strange new world by yourself. Here’s a snapshot of how AI risks are being mitigated at different levels of society.
- What governments are doing: The EU's Artificial Intelligence Act came into force in August 2024 and places controls on high-risk systems used in areas such as healthcare, law enforcement, education, or elections. Additionally, over 190 countries have adopted UNESCO’s non-binding recommendations on the ethical use of AI.
- What companies are doing: 78% of companies use AI. They’re already redesigning workflows to implement AI and appointing senior leaders in critical governance roles. To effectively mitigate threats and track AI performance, companies should conduct regular tests and monitoring, establish robust governance frameworks, and prioritize privacy, security, accountability, and transparency.
- What you can do: Decide for yourself if you want to use AI, and learn how to use it responsibly. Stay up to date on the latest developments, not only in the technology itself but also in the ways it can harm you, including scams, privacy violations, and other cybersecurity issues.
Secure your digital life with Avast
As AI continues to shape our lives, it’s okay to be nervous about what’s out there. But, whatever you do, stay in control of your digital privacy and security with Avast Free Antivirus. With real-time threat detection, including an AI-powered scam guardian that makes AI work for you instead of against you, Avast Free Antivirus has your back. Help safeguard your device and protect yourself against hackers, scams, and malware for free.