Deepfakes—AI‑generated video or audio—are used by attackers to impersonate executives, family members and government officials, convincing victims to send money or share sensitive information. We all face serious risks and need to be prepared.
Think this is all just FUD? Consider these stories:
- An employee joined a video call with what appeared to be the CFO and colleagues, only to learn later it was all a deepfake used to authorize over $25 million in transfers.
- A bank manager wired $35 million after receiving a call from a director at the bank, whose voice—an AI clone backed by forged emails—he thought he recognized.
- A mother received a ransom call using her daughter’s voice; just three seconds of audio had been enough to create the fake. She eventually realized the call was an AI-generated hoax, but a report found that most people can’t distinguish a real voice from an AI-generated one.
- Criminals have used deepfakes to bypass government facial recognition, file fake tax invoices and open accounts with synthetic identities. Researchers logged a 704% increase in AI face-swap attacks against identity verification systems, highlighting the growing sophistication of these types of attacks.
These examples underscore today’s threats and hint at what’s coming. Attackers can easily generate convincing digital impostors that fool both people and biometric systems. The common thread in each case is social engineering, and the answer is layered verification.
What Multifactor Authentication Is
Relying on a single authentication factor (like a password or caller ID) isn’t enough. Multifactor authentication (MFA) requires people to present evidence from at least two of four categories to verify their identity.
- Something You Know: This includes passwords, PINs and answers to security questions. These are the weakest factor because they can be guessed or stolen.
- Something You Have: This is a physical item (like an ID badge), a hardware token or a one-time password (OTP) generated by an authenticator app. Authenticator apps produce six-digit codes that expire every 30 to 60 seconds, so even a stolen code becomes useless within moments (a short sketch of how these codes are generated follows this list). SMS (texting) codes are less secure because attackers can intercept them through SIM-swapping.
- Something You Are: These are your biometrics: fingerprints, facial recognition or iris scans. Everyone’s biometric data is unique; however, if your biometric data is compromised, it cannot be reset like a password.
- Somewhere You Are: A system may verify your physical location via GPS.
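To make the “something you have” factor concrete, here is a minimal, illustrative sketch of how an authenticator app turns a shared secret plus the current time into a six-digit code (the standard TOTP scheme defined in RFC 6238). The secret below is a placeholder for illustration, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # which 30-second window we are in
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)


# Example only: real secrets are exchanged once at enrollment, usually via a QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the counter changes every 30 seconds, the same secret yields a new code each window, which is why a phished or shoulder-surfed code loses its value almost immediately.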
Steps For Organizations
Assume every channel is spoofed.
Never rely solely on caller ID, email headers or video presence. Require employees to verify any sensitive request through at least two independent channels, for example a callback to a known number plus a secure messaging platform, and protect those channels with MFA. Be sure to involve a verified second person for high-value transactions.
Deploy MFA everywhere.
Enforce MFA for everything. Use strong factors such as authenticator apps, hardware keys and biometrics. Incorporate adaptive authentication to prompt additional factors when behavior is unusual.
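To show what “adaptive” can look like in practice, here is a hypothetical, greatly simplified risk check. The signals, scores and threshold are invented for illustration; commercial adaptive authentication weighs far more signals (device fingerprints, impossible travel, behavioral patterns) than this sketch does.

```python
from dataclasses import dataclass


@dataclass
class LoginAttempt:
    user_id: str
    country: str        # where the request appears to come from
    device_known: bool  # has this device been seen for this user before?
    hour_utc: int       # time of the attempt


def requires_step_up(attempt: LoginAttempt, usual_country: str) -> bool:
    """Return True when a login looks unusual enough to demand an extra factor."""
    risk = 0
    if attempt.country != usual_country:
        risk += 2       # unfamiliar location
    if not attempt.device_known:
        risk += 2       # unrecognized device
    if attempt.hour_utc < 5:
        risk += 1       # odd hour for this user
    return risk >= 2    # illustrative threshold for prompting a second factor


# A login from a new device in an unexpected country triggers an extra MFA prompt.
print(requires_step_up(LoginAttempt("jdoe", "BR", device_known=False, hour_utc=3),
                       usual_country="US"))
```

The pattern is the point: routine logins stay frictionless, while anything out of the ordinary earns an additional challenge.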
Educate staff about deepfake tactics.
Explain how realistic deepfake voices and videos can be created from just a few seconds of publicly available recordings. Encourage employees to verify people through official channels when asked to transfer funds or share information. Share real‑world cases like those detailed above.
Maintain a trusted contact directory.
Store colleagues’ phone numbers and secure email addresses in a password manager for easy access. When in doubt, call or message through a verified channel instead of responding directly to the initial request. Use code words or verification phrases known only to the parties involved.
Regularly review incident response plans.
Establish procedures for reporting suspected attacks and test them through tabletop exercises. Include communication plans for notifying leadership, law enforcement and potentially affected partners.
Tips For Families And Individuals
AI-driven scams don’t just target corporations; they prey on emotions. Scammers need only a short audio clip to clone a voice and then claim a family member is in distress. Protecting your loved ones is critical, and here’s how to do it:
- Create a family code word. Establish a unique word or phrase that only family members know and use it to verify urgent calls.
- Slow down and verify. Scammers create a sense of urgency. Take a beat to think critically and call back using a known number.
- Limit what you share. Be cautious about posting personal details online.
- Use MFA on all personal accounts. This includes email, banking, social media and more. A good password manager with a built‑in authenticator and passkeys (often already included on your devices for free) makes this easy.
- Trust your instincts. If you get an “off” feeling, then verify. If something feels wrong, it probably is.
A Final Thought
Deepfakes and social engineering are not science fiction. They are real, they’re happening today and they are an immediate threat to organizations and families.
Criminals exploit publicly available photos and recordings to create realistic digital fakes. Our defense needs to shift from trusting what we see and hear to verifying identities. MFA reduces the risk of compromise by 99.22% and provides a practical, scalable approach to restoring trust. By layering the four types of authentication factors and fostering a culture of verification and skepticism, you can keep your company and loved ones safe from the next AI-driven scam.
By Brian Greenberg, CIO at RHR International. This article first appeared on Forbes on 08/25/2025.