How to spot a deepfake



Deepfakes, which use AI to create convincing mock-ups of real people, have gone mainstream. Cybersecurity experts have warned about such counterfeit videos for years, but only recently have they become good enough to be practically indistinguishable from the real thing.

Fraudsters once used so-called spear phishing to target the email inboxes of business leaders. Now they’re digitally cloning those leaders – as Mark Read, CEO of advertising giant WPP, warned in an internal memo after deepfakers mounted an unsuccessful attack using his likeness.

Deepfake apps are selling well on the dark web, largely because they automate much of the process for their criminal clientele. And they’re being put to devastating use. In January, for instance, an employee at a multinational firm in Hong Kong sent fraudsters £20m after being instructed to do so in a phony video call featuring likenesses of the firm’s CFO and several other colleagues.

Frontier deepfakes

Deepfake tech is advancing all the time, but it has become hugely more sophisticated this year, reports Dr Andrew Newell, chief scientific officer at authentication firm iProov. 

“In the early days, deepfakes weren’t good at all,” he says. “Over the past four months, they have become very, very good. We think spotting these things is almost impossible.”

Any perceptible flaws in the latest attacks probably arise because the perpetrators are still using relatively old methods. The most advanced tech on the market can handle light and shade well. This means that one of the dead giveaways of a deepfake – the misplaced or absent shadow – is becoming rarer.

A criminal will typically use a so-called injection attack, where a deepfake is covertly inserted into a video stream so that it looks like it’s coming from a real camera. They map the face of their target – the “source face” – onto their own face, gaining control over the source’s facial movements and the lighting. Combined with speech-generation tech, this effectively gives the attacker a clone, which is run through an emulator or virtual webcam.

Newell notes that several deepfake kits have started offering a comprehensive package featuring face-swapping software, a virtual camera emulator and insertion tools. 

“In the past, you’d have needed a relatively high level of expertise to make the deepfake and inject it,” he says. “Now, you can download these kits and, with the same tech, make a face swap and inject it in one go.”
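For the technically minded, the reason an injected feed can pass for a real camera is that operating systems treat virtual camera devices identically to physical ones. The minimal sketch below assumes the open-source pyvirtualcam library and an installed virtual-camera backend such as OBS Virtual Camera; it sends nothing but a harmless moving colour pattern, yet video-call software would list it as just another webcam:

```python
# Minimal sketch: software frames presented as a "webcam".
# Assumes the open-source pyvirtualcam library and a virtual-camera
# backend (e.g. OBS Virtual Camera) are installed. The output is a
# harmless colour pattern, not a face swap - the point is that the
# receiving app cannot distinguish these frames from sensor frames.
import numpy as np
import pyvirtualcam

with pyvirtualcam.Camera(width=640, height=480, fps=20) as cam:
    print(f"Virtual camera device: {cam.device}")
    t = 0
    while True:
        # A solid colour that cycles over time, standing in for
        # whatever frames an attacker's pipeline would emit.
        frame = np.full((480, 640, 3),
                        (t % 256, 128, 255 - t % 256), dtype=np.uint8)
        cam.send(frame)               # hand the frame to the OS device
        cam.sleep_until_next_frame()  # pace output to the declared fps
        t += 1
```

Because the feed arrives through the same operating-system interface as a physical webcam, nothing on the receiving side flags it – which is why defences concentrate on the content of the stream and on liveness checks rather than on the camera itself.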

Can you spot a deepfake?

Despite the rapid advance of deepfake tech, some cybersecurity experts note that there are still some telltale signs to look out for – although these might not exist for much longer.

One such expert is Simon Newman, CEO of the Cyber Resilience Centre for London, a government-funded not-for-profit body helping businesses and charities to improve their defences. He says: “It’s becoming much harder to spot deepfakes, but there are a few things you can do.”

Look for facial details that don’t appear natural – unusual lip colours, odd expressions or strange shadows, perhaps.

The inside of the mouth is sometimes a dead giveaway, says Newman, who advises: “Look for blurring, as criminals often neglect this area.”
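A crude way to quantify Newman’s mouth-blur tip is the variance of the Laplacian, a standard quick sharpness score in image processing: crisp regions score high, smeared ones low. The sketch below is a toy heuristic rather than a detector, and the crop coordinates are placeholder assumptions – in practice you’d locate the face and mouth with a landmark detector:

```python
# Toy heuristic, not a detector: compare the sharpness of the mouth
# region with the face as a whole. Variance of the Laplacian is a
# standard quick blur score; a markedly blurrier mouth *may* hint at
# the smearing Newman describes. Crop coordinates are illustrative.
import cv2

def sharpness(gray_region):
    """Higher = sharper (variance of the Laplacian response)."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

frame = cv2.imread("video_frame.png")            # assumed input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

face = gray[100:400, 150:450]    # placeholder face bounding box
mouth = gray[300:380, 230:370]   # placeholder mouth bounding box

ratio = sharpness(mouth) / max(sharpness(face), 1e-6)
print(f"mouth/face sharpness ratio: {ratio:.2f}")
if ratio < 0.5:                  # arbitrary threshold for illustration
    print("Mouth region is markedly blurrier than the face.")
```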


He suggests comparing the head with the neck and other parts of the body. Try to spot strange movements or see if they’re out of sync. Do the lips appear to be moving as they would with the matching words? Do the surrounding facial expressions look natural?

It’s also important to be able to distinguish different types of deepfakes. Face swaps are a little easier to spot in their current form, says Dr Martin Kraemer, security awareness advocate at KnowBe4. But he adds that fully synthetically generated video sequences can be harder to decipher and have improved “considerably” this year.

Kraemer advises looking at the edges of the speaker’s face to detect signs of a swap. The age of the person’s face might seem to differ from that of the rest of their head. Shadows around the eyebrows might look unnatural too. 

Such signs are much less reliable where fully synthesised videos are concerned, Kraemer says, but you can still look for the “right” kind of body language. Someone’s eye movements generally support the statements they are making, whereas eye gestures in deepfakes are often repetitive rather than supportive of the spoken word. Lip movements may also fall out of sync with what’s being said.
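Kraemer’s lip-sync observation can be framed as a simple signal comparison: in genuine footage, how wide the mouth opens from frame to frame should rise and fall with the loudness of the speech. The sketch below assumes both series have already been measured elsewhere (say, mouth height from a landmark tracker and per-frame audio loudness) and merely correlates them:

```python
# Toy lip-sync check: mouth opening and speech loudness should track
# each other in genuine footage. Both input series are assumed to
# have been measured elsewhere; this snippet only compares them.
import numpy as np

def lip_sync_score(mouth_opening, audio_loudness):
    """Pearson correlation between the two per-frame series.
    Near 1.0 suggests consistent sync; near 0 suggests the lips
    and the voice are moving independently."""
    return float(np.corrcoef(mouth_opening, audio_loudness)[0, 1])

# Illustrative numbers only.
mouth = np.array([0.1, 0.4, 0.8, 0.6, 0.2, 0.1, 0.5, 0.9])
audio = np.array([0.0, 0.3, 0.9, 0.5, 0.1, 0.0, 0.4, 0.8])
print(f"sync score: {lip_sync_score(mouth, audio):.2f}")
```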

[Video: Dr Andrew Newell, chief scientific officer at authentication firm iProov, demonstrates the increasing sophistication of deepfake technology]

Many deepfakes use exceedingly precise enunciation, he adds. “No one speaks consistently like a news anchor, in an overly polished and meticulous way. Watching out for irregularities – or too much regularity – seems to work for now. But I wouldn’t rely on that in the near future.”

Newell stresses that no one should be too confident that they would always spot a deepfake attack. His firm is trying to combat the threat by creating an ID system that resembles public-key encryption. When participants are on a call, iProov’s verification tech lights up the speaker’s face via the device screen, projecting a pattern of colours unseen by the human eye onto it. Only the authenticator knows what that pattern, which is different on every call, should look like. If there’s a match, this is evidence of “liveness” and the participants don’t have to do anything to prove it.
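iProov hasn’t published its implementation, but the underlying idea is a classic challenge-response exchange: the verifier issues a one-time, unpredictable challenge, and a pre-recorded or injected feed cannot contain the right answer because the challenge didn’t exist when the footage was made. The sketch below is an invented illustration of that concept – the function names and the HMAC stand-in for analysing reflected light are assumptions, not iProov’s protocol:

```python
# Conceptual challenge-response "liveness" sketch - an invented
# illustration of the idea, not iProov's actual protocol. The
# verifier issues a fresh random colour sequence for each call; a
# pre-rendered deepfake cannot have baked in the right reflections.
import hashlib
import hmac
import secrets

PALETTE = ["red", "green", "blue", "cyan", "magenta", "yellow"]

def issue_challenge(n=8):
    """Fresh, unpredictable colour sequence for this call only."""
    return [secrets.choice(PALETTE) for _ in range(n)]

def observed_response(challenge, session_key):
    """Stand-in for the real measurement, where the verifier analyses
    how each flashed colour reflects off the live face. Here we simply
    bind the challenge to a per-session key with an HMAC."""
    msg = ",".join(challenge).encode()
    return hmac.new(session_key, msg, hashlib.sha256).hexdigest()

def verify(challenge, response, session_key):
    expected = observed_response(challenge, session_key)
    return hmac.compare_digest(expected, response)  # constant-time compare

key = secrets.token_bytes(32)
challenge = issue_challenge()
response = observed_response(challenge, key)  # what a live session yields
print("liveness check passed:", verify(challenge, response, key))
```

The security comes from freshness: because the pattern changes on every call and is known only to the verifier, a deepfake rendered in advance has no way to show the right response.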

The importance of critical thinking 

There’s little doubt that detecting the technical flaws in deepfakes will become ever more difficult as the tools continue to be refined. Maintaining a high level of contextual alertness may therefore prove the most powerful way for employees to counter the risk. 

Kraemer says that it “remains most important to develop critical thinking and emotional awareness for combating manipulation attempts from social engineers”. 

Instead of focusing on people’s eyebrows in a Crime Scene Investigation-style attempt at digital forensics, try to think about the stated purpose of the call and how the participants are interacting. In short, does anything seem out of context?

It may already be too hard for the average person to spot the physical signs of a deepfake, argues Lucy Finlay, client delivery director at ThinkCyber, a provider of risk awareness training. She says that attackers will often try to instil a sense of urgency, putting their targets under time pressure to do their bidding. 

The reason for this is that people are more susceptible to being duped if they’re put under stress. They will tend to display what Finlay calls “system-one thinking”, where they behave intuitively and automatically, as opposed to “system-two thinking”, which is more considered and logical.

“My go-to advice is: check the context,” she says. “Does what you’re seeing and hearing make sense? What is the person trying to get you to do and how does that make you feel? Is it inciting a strong emotion?”

Ultimately, then, avoiding a deepfake scam may no longer be a matter of trusting your eyes to detect deception. It could soon become more of a case of trusting your gut.


