Picture this: You receive an audio message from your sister. She says she’s lost her wallet and asks if you can send some cash so she can pay a bill on time.
You’re scrolling through social media. A video appears from a celebrity you follow. In it, they ask for contributions toward their latest project.
You receive a video of yourself, showing you in a physically intimate situation.
Just a few years ago, these situations would likely have been genuine. But now, thanks to artificial intelligence, a scammer could be behind any of them, and if you can't tell real from fake, you may easily fall for a plea for cash or a blackmail threat.
For 2025, experts are sounding the alarm about AI and its effect on online security. The technology is supercharging the speed and sophistication of attacks. In particular, it's making it far, far easier to scam people using the likenesses of both celebrities and everyday citizens. Worse, security groups say this trend will continue to accelerate.
Further reading: Top 9 phishing scams to watch out for in 2024
Here’s what to watch out for, why the landscape is changing, and how to protect yourself until more help arrives.
The ways AI can pretend to be us
Digital impersonation used to be hard to do. Scammers needed skill or large computational resources to pull such a feat off even moderately well, so your likelihood of encountering fraud in this vein was small.
AI has changed the game. Models have been specifically developed to replicate how a person writes, speaks, or looks, and they can then be used to mimic you and others through email, audio messages, or rendered physical appearance. The results are more complex and polished than most people expect, especially if you still think of online scams as foreign princes asking for money or links to reroute a misdirected package.
- Messages: If fed samples, AI can create emails, texts, and other messages that imitate how you communicate in written form.
- Audio: With as little as three seconds of exposure to your voice, AI can create whole speeches and conversations that sound like you.
- Video: AI can create realistic images and videos that portray you in a variety of scenarios, including pornographic ones.
These imitations are known as deepfakes, though the term is most commonly applied to the video and image variations. You may have already heard about well-known or popular individuals being victims of these attacks, but the scope has now spread.
AI-generated video of “Elon Musk” used in a crypto scam
Anyone can be a target. Deepfakes make scam and fraud attempts much more convincing—whether you’re the one encountering them or unknowingly providing your likeness to a scam. AI-generated messages, audio clips, images, and videos can show up in these common schemes:
- Social engineering
- Phishing
- Romance or charity scams
- Authority scams (e.g., supposed issues with the IRS, law enforcement, lawsuits)
- Fake promotional materials and websites
- Misinformation campaigns (especially with regard to politics)
- Financial fraud
- Extortion
The result is an online world where you can't trust what you see as easily, and where staying safe means not acting on information as immediately as you once might have.
How to protect yourself against AI impersonations
For the coming months, cybersecurity experts echo one another in their advice: Trust more slowly.
Or, put another way, you should question the intent of the messages and media you encounter, says Siggi Stefnisson, Cyber Safety CTO of Gen Digital (the parent company of Norton, Avast, and other antivirus brands).
Think of it as an evolution of wondering whether an image has been photoshopped: now you should ask yourself whether it's fully generated, and for what purpose. The same goes for text-based communications. In Stefnisson's words: “Is it benign or trying to manipulate?”
But beyond becoming generally wary of online content, you can still rely on the long-standing indicators of a scam to evaluate a situation.
- Time pressure: Do you have a limited amount of time to respond to the situation?
- Urgency: Do you feel you must act now, even if there's no explicit time pressure? (e.g., an emergency involving a loved one)
- Too good to be true: Does the offer seem unbelievable?
- Requests for personal details: Are you being asked for information like a credit card number, your banking info, your Social Security number, or the like?
- Topic focus: Does the other person keep trying to steer you toward sharing your financial, personal, or otherwise sensitive information (either through a direct or indirect request)?
- Unusual method of contact: Is this the usual way you’d be contacted by the other person or entity?
- Unknown contact: Do you know the individual or organization contacting you?
- Emotional manipulation: Is the message or media trying to use strong feelings like shame, excitement, or fear to compel you to respond?
- Unusual payment methods: Are you being asked to send money through an unusual channel, such as gift cards, wire transfers, or cryptocurrency?
- Secrecy: Are you being asked not to tell anyone about what you’ve been contacted about?
Ultimately, verify before you trust. You may not be able to control whether a scammer uses your likeness (much less someone else's), but a cool head will let you deftly navigate even high-pressure situations.
What does that look like in practice? Being ready and committed to continuing interactions on your own terms, rather than the other party's. For example:
- Reaching out to a trusted person for an outside viewpoint, to get perspective when your stress and emotions run high.
- Being ready to hang up on a call and directly dial a number you already have (or look up yourself) for the person or organization to continue the conversation. If you don't know the individual, you can take down the caller's details and search for them online separately.
- Searching online to see if you can find corroborating reports that the audio clip, image, or video you've seen is legitimate.
- Switching from text communication (which can be spoofed, just as with phone calls) to a phone call that you initiate, using contact information you already have or obtain on your own.
- Refusing to pay a blackmail demand over pornographic material bearing your likeness.
You can do this gracefully, and you can even go one step further by taking proactive measures. Doing so can save you a world of panic and headache, as this family discovered when targeted by an AI phone scam. This is particularly true if you actively post video, photos, or even audio clips of yourself, or, on the flip side, if someone you interact with often does:
- Creating a special passphrase or code word to prove identity (see the sketch after this list)
- Securing accounts with passkeys
- Using strong, unique passwords combined with two-factor authentication, if passkeys are unavailable
- Pairing unique passwords or passkeys with unique user IDs or email masks
- Running security software on your PC
- Giving out only the information necessary to conduct business
- Avoiding sharing personal info with AI chatbots
- Knowing which topics, causes, and/or people can cause your emotions to override your reason
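A quick note on that first item: a good code word is one a scammer can't guess or scrape from your social media, so randomness helps. Below is a minimal sketch in Python of one way to generate a random code phrase; the short wordlist and three-word length are illustrative assumptions, and a larger list (such as EFF's diceware wordlist) would allow far more combinations.

```python
import secrets

# Tiny illustrative wordlist; a real list (e.g., EFF's diceware words)
# has thousands of entries and yields far stronger phrases.
WORDLIST = [
    "maple", "orbit", "lantern", "quartz", "river", "falcon",
    "ember", "cobalt", "meadow", "tundra", "violet", "summit",
]

def make_code_phrase(num_words: int = 3) -> str:
    """Pick words with a cryptographically secure RNG (secrets, not random)."""
    return "-".join(secrets.choice(WORDLIST) for _ in range(num_words))

print(make_code_phrase())  # e.g., "falcon-ember-summit"
```

Share the result only in person or over a channel you trust, and treat it like a password: if it ever leaks, pick a new one.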
These suggested steps help defend against impersonation fraud by requiring authentication (a code word), putting up stronger barriers to easy account access (passkeys, passwords plus 2FA, email masks), and getting outside help to evaluate communications and media (antivirus apps).
You'll also be better prepared against “hyperpersonal” scams, as Gen Digital's Stefnisson calls them, in which an attacker takes the growing amount of leaked or stolen data available and feeds it to AI to craft highly personalized scam campaigns. Minimizing the data others have about you reduces their ability to target you with such precision. And if that fails, you can guard your blind spots more easily if you know what they are.
The coming help you can expect… from AI
AI impersonation isn't the only online threat to expect in the coming year and beyond. AI is also fueling increased attacks on organizations, and we as individuals can't control the data leaks and breaches that result when those groups are unprepared.
Likewise, we can’t influence whether bad actors manipulate AI to behave in malicious ways (e.g., autonomous spread of malware), or feed bad data to AI to weaken security defenses.
However, AI cuts both ways; like any tool, it can be helpful or harmful. Security software developers are also putting it to use, specifically to combat the rising attacks. One example: leaning on the neural processing units (NPUs) in newer PCs to help consumers detect audio deepfakes.
Meanwhile, on an enterprise level, AI is being used to help corporate IT and security teams identify threats, like a holiday uptick in scam emails sent to Gmail addresses. Consumers may see the results as, say, better phishing detection in their antivirus software.
Outside of these more direct responses, industry executives have visions for 2025 (and beyond) that tackle problems like the millions of Social Security numbers on the dark web. In Experian's 2025 data breach forecast, the company suggests dynamic personally identifiable information to replace fixed identifiers like driver's licenses and Social Security cards. (The forecast comes from the Experian Data Breach group, which offers mitigation services to companies experiencing a breach.) AI would likely be part of such a solution.
But as you can see, this kind of help is still in its early stages; audio deepfake detection only gets partway there, for example. And the software won't ever be able to do all the work. Vigilance will always be needed, especially as we wait for the good guys to better duke it out with the bad guys.
Author: Alaina Yee, Senior Editor, PCWorld
A 14-year veteran of technology and video games journalism, Alaina Yee covers a variety of topics for PCWorld. Since joining the team in 2016, she’s written about CPUs, Windows, PC building, Chrome, Raspberry Pi, and much more—while also serving as PCWorld’s resident bargain hunter (#slickdeals). Currently her focus is on security, helping people understand how best to protect themselves online. Her work has previously appeared in PC Gamer, IGN, Maximum PC, and Official Xbox Magazine.