
(5 min read)

Why familiar faces fool smart brains - and what you can actually do about it.

You’re three calls into a back-to-back Teams gauntlet. You haven’t eaten lunch. Outlook is having a tantrum. Then: a message from your CFO. "Can you jump on a quick video call?"  

There they are: familiar voice, familiar face, same bad jokes. They need your help. Urgently. Something about a fund transfer.  

So you do it.  

Only... it wasn’t them. It wasn’t even a human.  


In early 2024, an employee at global engineering firm Arup was persuaded to transfer over £20 million to scammers. What made this heist so heartbreakingly effective? A video call.  

Not just any video call — a deepfake, with simulated versions of the company’s UK-based CFO and several other colleagues. It was a corporate session conducted entirely by code.  

Familiar faces? Faked.  

The consequences? Devastatingly real. 

The real threat isn’t deepfakes - it’s you

Let’s be clear. This isn’t a story about weak firewalls or unpatched systems. It’s about a vulnerability that no software update can fix: your brain. 

Deepfake scams work because they don’t need to hack your devices. They just need to borrow your heuristics - the elegant shortcuts your brain uses to get through the day without collapsing from decision fatigue.

Let’s meet the four horsemen of the cognitive apocalypse:  

  • Familiarity heuristic. We trust what we’ve seen before. If it walks like Steve, talks like Steve, and sighs like Steve, it must be Steve. 
  • Truth bias. We default to believing what we hear and see, because constant scepticism is exhausting and makes you bad company. 
  • Authority bias. When someone senior gives an instruction, we’re wired to comply. Especially if they have a fancy title and a Teams background that looks like it cost money. 
  • Social proof bias. If other people seem to believe or share something, we’re more likely to accept it ourselves. Herd behaviour feels safe — even if the herd is running off a cliff.

Together, these become the psychological version of a trapdoor: invisible until you’ve already fallen through. 


Not just private sector fodder - even democracies are getting duped 

In July 2025, the UK’s Information Commissioner’s Office warned that public trust in digital communication is in a tailspin: our deep-rooted belief in “seeing is believing” leaves us behaviourally blind to the dangers of synthetic media. And the regulator isn’t alone - according to a UK Parliament report, 89% of Britons fear that deepfakes and online misinformation could distort democratic processes. Not without reason.

Consider the following rogues’ gallery:

  • Fake videos of MPs and political leaders circulated during the last general election, some with eerily plausible campaign messages.  
  • In Italy, a forged video and audio call from a fake version of Defence Minister Guido Crosetto convinced executives to wire over €1 million for a phoney hostage rescue.  
  • In the United States, the FBI issued alerts about scammers using synthetic voices to impersonate government officials — to steal credentials, spread malware, or simply stir chaos. 


What unites these examples? Not just malicious intent, but our misplaced instinct to believe the familiar, the urgent, and the official.

This is not just a tech problem. It’s a behaviour problem.  

We have a dangerous habit in cybersecurity: when something goes wrong, we immediately ask, “What new software do we need?”  

But deepfakes don’t exploit systems. They exploit psycho-logic — the irrational but oddly effective mental operating system we all run.  

As Rory Sutherland argues in his book Alchemy, “The human mind doesn’t run on logic any more than a horse runs on petrol.” If something feels right, we don’t pause to verify it. Deepfakes are dangerous precisely because they look and sound “right enough”.

It’s not stupidity. It’s efficiency. And that, paradoxically, is the real danger.  

What actually works: Behavioural buffers, not just firewalls 

So, what should we do? Well, since we can’t patch the human mind (not yet, anyway), we must design our environments to anticipate its blind spots. 


Here are 7 behaviourally smart things you can implement right now - no budget approvals or TED Talks required:  

  1. Two channels are better than one. Never act on a financial request delivered via video call alone. Confirm through a second, unrelated channel - like Slack, SMS, or in person. Redundancy isn’t waste. It’s wisdom.
  2. Create a codeword for high-stakes requests. Have a standing keyword or passphrase for sensitive transactions. Even “potato waffle” will do. Why? Because deepfakes don’t improvise well.
  3. Pause is a power move. Urgency, secrecy and emotion are classic manipulation cues. If something feels uncomfortably urgent, assume someone is trying to bypass your scrutiny.
  4. Ask rude questions (even if it looks like the boss). Normalise asking verification questions. “What’s the name of your dog?” works better than “Are you really Steve?”
  5. Simulate the scam. Run team drills. Teach people the habit of slowing down when things feel too familiar or too polished. Build the reflex of doubt.
  6. Scepticism isn’t rudeness. Reframe suspicion as a form of care. If someone challenges you on a call, thank them. If no one’s ever challenged you, worry.
  7. Spot the (creepy) clones. AI-generated voices often sound slightly too clean. Watch for stilted grammar, uncanny timing, or an inability to respond naturally.

Final thought: The danger isn’t the deepfake. It’s the snap judgement. 

The true threat isn’t that deepfakes exist. The real problem is that we’re evolutionarily optimised to believe things that seem familiar - especially when we’re under pressure, alone, or tired.

As with phishing emails, the goal isn’t to detect the lie every time. It’s to build in behaviours that catch you when your brain can’t.  

Cybersecurity, like marketing, is ultimately about decision making. And decision making is driven by context, emotion, and expectation - not just logic.


If you want safer organisations, don’t just upgrade your tech stack. Upgrade your decision environments. 

At Psybersafe, we don’t just raise awareness - we rebuild habits. Our behavioural science-led cybersecurity training turns abstract risk into real-world readiness. Through short interactive episodes, relatable stories and habit-forming design, we help organisations move from “I know that” to “I do that”.

Because cybersecurity doesn’t fail for lack of information. It fails when behaviour doesn’t match belief.  

We don’t train people to fear deepfakes. We train them to pause, question and double-check - even when the call looks just like the boss.  

The result? Fewer knee-jerk clicks. More resilient decisions.  

Security by design — in the mind, not just the machine. 

Find out more: www.psybersafe.com

We love behavioural science. We’ve studied it and we know it works. If you want to know more about the science of persuasion and influence, and behavioural science in general, have a look at our sister site https://influenceinaction.co.uk/

Sign up to get our monthly newsletter, packed with hints and tips on how to stay cyber safe.


Mark Brown is a behavioural science expert with significant experience in inspiring organisational and cultural change that lasts. If you’d like to chat about using Psybersafe in your business to help you stay cyber secure, contact Mark today.