The emerging world of Deepfake
'Deepfake' is a portmanteau word - 'deep' from 'deep learning' and 'fake', obviously, from 'fake'. Deep learning is an advanced Artificial Intelligence (AI) method which uses multiple layers of machine learning algorithms to extract progressively higher-level features from raw input. It's capable of learning from unstructured data - such as images of the human face. For instance, an AI can gather data on your physical movements.
That data can then be processed to create a Deepfake video through a GAN (Generative Adversarial Network), another kind of specialized machine learning system. Two neural networks compete with each other: one learns the characteristics of a training set (for instance, photographs of faces) and generates new data with the same characteristics (new 'photographs'), while the other tries to tell the generated data apart from the real thing.
Because such a network keeps testing the images it creates against the training set, the fake images become increasingly convincing. This makes Deepfake an ever more potent threat. Plus, GANs can fake other data besides photos and video. In fact, the same Deepfake machine learning and synthesizing techniques can be used to fake voices.
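To make the idea of two competing networks concrete, here is a minimal sketch of a single GAN training step, assuming PyTorch. The tiny fully connected networks, flattened images and hyperparameters are illustrative only - real Deepfake models are far larger and more specialized.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce fake
# images from random noise, while a discriminator learns to tell them apart
# from real training images; each network improves by competing with the other.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # e.g. flattened 28x28 grayscale faces

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial round on a batch of real images shaped (batch, image_dim)."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator: real images should score 1, generated ones 0.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator: its fakes should now fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Run over many thousands of batches, this feedback loop is why, as noted above, the generated fakes become increasingly convincing.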
Deepfake examples
High-profile Deepfake examples are not hard to find. One is the video released by actor and director Jordan Peele, who merged real footage of Barack Obama with his own impression of Obama to issue a warning against Deepfake videos. He then showed how the two halves of the merged video looked when separated. His advice? We need to question what we see.
Then there's the video, posted on Instagram, of Facebook CEO Mark Zuckerberg appearing to talk about how Facebook 'controls the future' via stolen user data. The original footage comes from a speech he gave on Russian election interference - just 21 seconds of that speech were enough to synthesize the new video. However, the voice impersonation wasn't as good as Jordan Peele's Obama, and it gave the game away.
But even less well-made fakes can have a remarkable impact. A video of Nancy Pelosi appearing 'drunk' scored millions of views on YouTube - but it was simply a fake made by slowing down the real video to give the effect of her slurring her words. And many celebrity women have found themselves 'starring' in revenge porn made by morphing their faces into porn films and images.
Deepfake threats - fraud and blackmail
Deepfake videos have been used for political purposes, as well as for personal revenge. But increasingly they're being used in major attempts at blackmail and fraud.
The CEO of a British energy firm was tricked out of $243,000 by a voice Deepfake of the head of his parent company requesting an emergency transfer of funds. The fake was so convincing that he didn't think to check; the funds were wired not to the head office, but to a third party's bank account. The CEO only became suspicious when his 'boss' requested another transfer. This time, alarm bells rang - but it was too late to get back the funds he'd already transferred.
In France, a recent fraud didn't use Deepfake tech, but used impersonation, together with meticulous copying of Foreign Minister Jean-Yves le Drian's office and its furnishings, to defraud senior executives of millions of euros. Fraudster Gilbert Chikli is alleged to have disguised himself as the minister to ask wealthy individuals and company executives for ransom money to liberate French hostages in Syria; he's currently standing trial.
It's also possible that Deepfake authors could blackmail company presidents by threatening to publish a damaging Deepfake video unless they're paid off. Or intruders could get into your network simply by synthesizing a video call from your Chief Information Officer, tricking employees into giving up passwords and privileges, which then lets hackers run riot all over your sensitive databases.
Deepfake porn videos have already been used to blackmail female reporters and journalists, such as Rana Ayyub in India, who exposes abuses of power. As the technology becomes cheaper, expect to see more uses of Deepfake to blackmail and defraud.
How can we protect ourselves against Deepfake?
Legislation is already beginning to address the threats of Deepfake videos. For instance, in the state of California, two bills passed last year made aspects of Deepfake illegal - AB-602 banned the use of human image synthesis to make pornography without the consent of the people depicted, and AB-730 banned manipulation of images of political candidates within 60 days of an election.
But is this going far enough? Fortunately, cyber-security companies are coming up with more and better detection algorithms all the time. These analyze the video image and spot the tiny distortions created in the 'faking' process. For instance, current Deepfake synthesizers model a 2D face and then distort it to fit the 3D perspective of the video; looking at which way the nose is pointing relative to the rest of the head is a key giveaway.
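As a rough illustration of that nose-direction check, the sketch below estimates the head's 3D orientation from a few facial landmarks (assumed to come from an off-the-shelf landmark detector) and measures how far two such estimates disagree. The generic 3D model points, the crude camera approximation and the threshold are all illustrative assumptions, not a production detector.

```python
import numpy as np
import cv2

# Approximate 3D positions of six facial landmarks (nose tip, chin, eye
# corners, mouth corners) on a generic head model - a common approximation
# in head-pose tutorials; units are arbitrary.
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],           # nose tip
    [0.0, -330.0, -65.0],      # chin
    [-225.0, 170.0, -135.0],   # left eye, outer corner
    [225.0, 170.0, -135.0],    # right eye, outer corner
    [-150.0, -150.0, -125.0],  # left mouth corner
    [150.0, -150.0, -125.0],   # right mouth corner
])

def estimate_rotation(model_points, image_points, frame_w, frame_h):
    """Estimate head orientation (a Rodrigues rotation vector) from the
    2D pixel positions of the corresponding landmarks in one video frame."""
    focal = frame_w  # crude focal-length guess in pixels
    camera = np.array([[focal, 0, frame_w / 2],
                       [0, focal, frame_h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(model_points, image_points, camera, None)
    return rvec if ok else None

def rotation_gap_deg(rvec_a, rvec_b):
    """Angle in degrees between two estimated head orientations."""
    ra, _ = cv2.Rodrigues(rvec_a)
    rb, _ = cv2.Rodrigues(rvec_b)
    cos = np.clip((np.trace(ra.T @ rb) - 1) / 2, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)))

# The check: estimate the pose once from the central landmarks a Deepfake
# synthesizes (nose, eyes, mouth) and once from the full set including the
# chin/outline, then flag frames where the two estimates disagree by more
# than a chosen threshold (say, 10 degrees).
```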
Deepfake videos are still at a stage where you can spot the signs yourself. Look for the following characteristics of a Deepfake video:
- jerky movement
- shifts in lighting from one frame to the next
- shifts in skin tone
- strange blinking or no blinking at all (one sign that can actually be measured - see the sketch after this list)
- lips poorly synched with speech
- digital artifacts in the image
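The blinking sign, in particular, is something detection tools can measure rather than just eyeball. Below is a minimal sketch of one common approach, the eye aspect ratio (EAR): the ratio drops sharply whenever the eye closes, so counting those dips across a clip gives a rough blink rate. The six eye landmarks per frame are assumed to come from a separate facial landmark detector, and the threshold is only a rough guess.

```python
# Blink-rate sketch using the eye aspect ratio (EAR). A talking head that
# almost never blinks over a long clip is a signal worth a closer look,
# though not proof of a fake on its own.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye in the usual order
    (corner, two upper-lid points, other corner, two lower-lid points)."""
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal, corner-to-corner distance
    return (a + b) / (2.0 * c)

def count_blinks(ear_per_frame, closed_threshold=0.21):
    """Count how often the eye closes, i.e. how often EAR dips below the threshold."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_threshold:
            closed = False
    return blinks
```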
But as Deepfakes get better, you'll get less help from your own eyes and more from a good cyber-security program.
State-of-the-art anti-fake technology
Some emerging technologies are now helping video makers authenticate their videos. A cryptographic algorithm can be used to insert hashes at set intervals during the video; if the video is altered, the hashes will change. AI and blockchain can register a tamper-proof digital fingerprint for videos. It's similar to watermarking documents; the difficulty with video is that the hashes need to survive if the video is compressed for use with different codecs.
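As a simplified illustration of the hashing idea, the sketch below splits a video file into fixed-size chunks and records a SHA-256 hash for each; any later edit shows up as a mismatch. Note that this naive version also breaks as soon as the file is re-encoded with a different codec - exactly the difficulty mentioned above, which is why real systems need fingerprints that survive compression. The file path and chunk size are illustrative.

```python
# Chunk-level fingerprinting sketch: hash every chunk of a video file so that
# any later alteration shows up as a mismatch against the stored hash list.
import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 MB per chunk (illustrative)

def fingerprint(path, chunk_size=CHUNK_SIZE):
    """Return a list of SHA-256 hex digests, one per chunk of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def verify(path, stored_hashes, chunk_size=CHUNK_SIZE):
    """Check a video against a previously recorded fingerprint (which could
    itself be stored on a blockchain or in a signed manifest)."""
    return fingerprint(path, chunk_size) == stored_hashes
```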
Another way to repel Deepfake attempts is to use a program that inserts specially designed digital 'artifacts' into videos to conceal the patterns of pixels that face detection software relies on. These slow down Deepfake algorithms and lead to poor-quality results, making successful Deepfaking less likely.
Good security procedures are the best protection
But technology isn't the only way to protect against Deepfake videos. Good basic security procedures are remarkably effective at countering Deepfake.
For instance, having automatic checks built into any process for disbursing funds would have stopped many Deepfake and similar frauds. You can also:
- Ensure employees and family know about how Deepfaking works and the challenges it can pose.
- Educate yourself and others on how to spot a Deepfake.
- Make sure you are media literate and use good quality news sources.
- Have good basic protocols - "trust but verify". A skeptical attitude to voicemail and videos won't guarantee you'll never be deceived, but it can help you avoid many traps.
Remember that if Deepfake starts to be deployed by hackers in their attempts to break into home and business networks, then basic cyber-security best practice will play a vital role when it comes to minimizing the risk:
- Regular backups protect your data against ransomware and give you the ability to restore damaged data.
- Using different, strong passwords for different accounts means that even if one network or service is broken into, your other accounts aren't compromised with it. If someone gets into your Facebook account, you don't want them to be able to get into your other accounts as well.
- Use a good security package such as Kaspersky Premium to protect your home network, laptop and smartphone against cyber threats. The package provides anti-virus software, a VPN to stop your Wi-Fi connections being hacked, and webcam protection.
What’s the future of Deepfake?
Deepfake keeps evolving. Two years ago, it was easy to spot Deepfake videos by the clunky quality of the movement and the fact that the faked person never seemed to blink. But the latest generation of fake videos has evolved and adapted.
There are estimated to be over 15,000 Deepfake videos out there right now. Some are just for fun, while others are trying to manipulate your opinions. But now that it only takes a day or two to make a new Deepfake, that number could rise very rapidly.
Related links
Major Celebrity Hacks and How They Can Affect You
Webcam Hacking: Can Your Webcam Spy on You?