Explained | What are deepfakes and how to spot them?
Story highlights
Deepfakes are AI-generated digital content, such as videos or audio recordings, that convincingly manipulate the appearance, speech, or actions of individuals, often making them appear to say or do things they never did.
In the age of rapid technological advancement, a concerning technology known as "deepfake" is raising serious questions about the authenticity of digital content and the potential for misinformation. Deepfakes, a portmanteau of "deep learning" and "fake", are a product of artificial intelligence (AI) and machine learning technologies that can manipulate video and audio content to convincingly alter the appearance, speech, and actions of individuals.
What are deepfakes?
Deepfakes are like digital magic tricks. They use computers to make fake videos or audio that look and sound real. It's like making a movie, but with real people doing things they never really did.
Deepfake technology operates through a complex interplay of two key algorithms: a generator and a discriminator. These algorithms work together within a framework called a generative adversarial network (GAN), which employs deep learning principles to create and refine fake content.
Generator Algorithm: The generator's primary role is to produce the initial fake digital content, whether it's audio, a photograph, or a video. The generator's objective is to mimic the target individual's appearance, voice, or behaviour as closely as possible.
Discriminator Algorithm: The discriminator then analyses the content created by the generator to determine how authentic or fake it appears.
This feedback loop between the generator and discriminator is repeated multiple times, creating a continuous cycle of improvement.
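The generator-discriminator loop described above can be sketched in miniature. The following is a deliberately toy illustration, not a real GAN: both "networks" are reduced to one-line functions, the data is a single number rather than an image, and all names (`REAL_MEAN`, `train`, the learning rate) are invented for this example. It exists only to show the feedback cycle: the generator produces a fake, the discriminator scores it, and the generator adjusts.

```python
import random

random.seed(0)  # deterministic toy run

REAL_MEAN = 5.0  # statistic of the "real" data the generator must imitate


def generator(param):
    """Produce a fake sample; `param` is the generator's single knob."""
    return param + random.gauss(0, 0.1)


def discriminator(sample):
    """Score how fake a sample looks: 0 = convincing, larger = more fake."""
    return abs(sample - REAL_MEAN)


def train(steps=2000, lr=0.05):
    """Run the generator/discriminator feedback loop many times."""
    param = 0.0
    for _ in range(steps):
        fake = generator(param)
        fakeness = discriminator(fake)  # feedback from the discriminator
        # The generator nudges its parameter in the direction that
        # reduces the fakeness score, then the loop repeats.
        param += lr * (REAL_MEAN - fake) * min(fakeness, 1.0)
    return param


print(round(train(), 1))  # should land close to REAL_MEAN
```

In a real deepfake system, both sides are deep neural networks with millions of parameters, and the discriminator is itself trained on genuine footage, which is what drives the fakes to become progressively harder to distinguish from real content.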
Why Do Deepfakes Raise Concerns?
Misinformation and Disinformation: Deepfakes can be used to create convincing videos or audio recordings of individuals saying or doing things they never did. This poses a severe risk of spreading false information, damaging reputations and influencing public opinion.
Privacy Invasion: Deepfake technology can manipulate innocent people's images or voices for malicious purposes, violating their privacy and potentially leading to harassment, blackmail, or even exploitation.
Election Interference: There is a growing concern that deepfakes could be used to manipulate political events, such as creating fake speeches or interviews that could sway public perception during elections.
Crime and Fraud: Criminals could exploit deepfake technology to impersonate others in fraudulent activities, making it difficult for authorities to identify and prosecute the culprits.
Cybersecurity: As deepfake technology advances, it can become more challenging to detect and prevent cyberattacks that rely on manipulated videos or audio recordings.
Preventing and Detecting Deepfakes
As deepfake technology becomes more sophisticated, researchers and tech companies are working on methods to detect and mitigate the impact of deepfakes. This includes developing algorithms and software that can identify inconsistencies in manipulated content and verify content authenticity.
Here are seven tips for identifying deepfake audio and video:
Inconsistencies in Facial Expressions and Movements: Pay close attention to the subject's facial expressions, blinking patterns, or unusual movements that may appear unnatural or out of sync with the audio.
Lip Sync Errors: Look for discrepancies between the spoken words and lip movements. Deepfake technology may not always perfectly synchronise audio with the video.
Unusual Lighting and Shadows: Analyse the lighting and shadows in the video. Deepfake content may have inconsistencies in lighting that can reveal manipulation.
Blurry or Misaligned Edges: Check for distorted or blurred edges around the subject's face, especially near the hairline or the chin, which can indicate digital manipulation.
Unusual Backgrounds: Deepfakes may introduce inconsistencies in the background or surroundings. Look for strange patterns, reflections, or anomalies.
Audio Anomalies: Look out for audio glitches, background noise, or changes in voice tone that may signal audio manipulation.
Use Deepfake Detection Tools: Several online tools and software applications are designed to identify deepfake content. You can use these tools to analyse media for potential manipulation.
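One of the cues above, unnatural facial movement, has been turned into simple automated heuristics: early deepfake videos were known to blink far less often than real people. The sketch below illustrates that idea only; it is not a production detector. It assumes a hypothetical upstream face tracker has already produced a per-frame "eye openness" score between 0 and 1, and the function names and thresholds are invented for the example.

```python
def count_blinks(openness, threshold=0.3):
    """Count closed-eye episodes: runs of consecutive frames below threshold."""
    blinks, closed = 0, False
    for score in openness:
        if score < threshold and not closed:
            blinks += 1       # a new blink starts
            closed = True
        elif score >= threshold:
            closed = False    # eyes reopened
    return blinks


def looks_suspicious(openness, fps=30, min_blinks_per_min=5):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) / max(minutes, 1e-9) < min_blinks_per_min


# A 10-second clip in which the eyes never close would be flagged:
no_blinks = [1.0] * 300
print(looks_suspicious(no_blinks))  # True
```

Real detection tools combine many such signals (lighting, lip sync, compression artifacts) and feed them into trained classifiers, since any single heuristic, including blink rate, can be defeated by newer generation methods.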
There have been several high-profile cases involving deepfake videos of celebrities, politicians and other prominent figures. Deepfake videos of politicians like Barack Obama, Donald Trump, and Vladimir Putin are currently doing the rounds on social media. Bollywood actress Rashmika Mandanna is the latest celebrity victim of a deepfake video. A morphed clip of a woman entering a lift wearing a form-fitting black dress with a plunging neckline has gone viral on social media. While many believed it was Mandanna because of the morphed face, a few questioned the authenticity of the video due to its glitches and anomalies, and it was later revealed to have been generated using deepfake technology. Even Bollywood legend Amitabh Bachchan called for legal action against the dissemination of the morphed video. The actress herself described the situation as "extremely scary."
Katrina Kaif's morphed image has also been doing the rounds on social media. A still from her latest film Tiger 3 has raised further questions about the risks of deepfake technology. The original image depicts the Bollywood star engaged in a fight scene with a Hollywood stuntwoman, both clad in towels. However, the altered version, which has gone viral, shows Kaif wearing a low-cut white bralette and matching bottoms instead of the towel. This image manipulation was executed using AI tools, which can modify and superimpose individuals' faces onto videos and images, resulting in misleading or fabricated content.