Deepfake Resistant

  • Dec 01, 2023
  • 3 Min Read

A political leader playing Garba, Morgan Freeman saying “I’m not Morgan Freeman”, Tom Cruise on TikTok: all of this felt strange, because humans had created footage of events that never happened!


Thanks to artificial intelligence (AI), these fake videos seem remarkably real, in both audio and video. There are now platforms that can create such deepfakes from just a few keywords as input. But what exactly are deepfakes?

What are deepfakes?

Deepfakes are fake audio or video content generated through AI. The applications that generate deepfakes use machine learning techniques to create such content artificially.

It involves an encoder-decoder system that processes the image or video. On how deepfakes are created, The Indian Express explains:

'The algorithm traces similarities between the faces and later brings them down to their commonly shared features. Different sets of decoders are trained on both faces. In order to create the deepfake, the creator simply needs to swap the faces from their respective decoders. To make it simple, a compressed image of person X is fed into the decoder which is trained on person Y. The decoder later restructures the face of X based on the expressions of Y.'
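
The quoted pipeline can be sketched in a few lines. This is a toy illustration, not a working deepfake model: the ‘networks’ below are random matrices standing in for a trained shared encoder and two per-person decoders, and the 64-value vectors stand in for full face images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder: compresses any face (a 64-value vector here) into a
# 16-value latent code of commonly shared features (expression, pose).
# Random weights stand in for what training would actually learn.
W_enc = rng.standard_normal((16, 64))

# One decoder per person, each trained to rebuild that person's face
# from the shared latent code.
W_dec_x = rng.standard_normal((64, 16))
W_dec_y = rng.standard_normal((64, 16))

def encode(face):
    return W_enc @ face

def decode(latent, W_dec):
    return W_dec @ latent

face_x = rng.standard_normal(64)   # a frame of person X

# The swap described above: X's compressed frame is fed into Y's
# decoder, which renders Y's face wearing X's expression.
latent = encode(face_x)
fake_y = decode(latent, W_dec_y)
```

In a real system the encoder and decoders are deep networks trained jointly on thousands of frames of each person; only the final swap step is as simple as shown here.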


Origins of deepfakes

The term combines two words: deep learning and fake. It started in 2017, when a Reddit user posting under the username ‘deepfakes’ shared explicit content featuring celebrities.

Deep learning is a subset of Machine learning methods that perform complex tasks accurately without human intervention. It involves multiple layers, hence the term ‘deep’.

While this technology is used to create visual effects in movies, and in augmented reality, entertainment, education, and more, all is not well with deepfakes.

Why are deepfakes a problem?

Rather than being used for positive use cases, deepfakes have been a cause of concern for a multitude of reasons:

  • The ease with which anyone can create deepfakes is a concern: with elections approaching in India and other parts of the world, deepfakes can be used to manipulate political speeches and actions and thus change election dynamics.
  • Time and again, online platforms have posed a challenge to women’s safety. Deepfake technology is yet another way they can be harassed.
  • With two ongoing wars (Russia-Ukraine and Israel-Hamas), deepfakes can be used to polarize the environment for or against a community.
  • ‘Seeing is believing’ has long been our rule of thumb, but with deepfakes even that seems to be fading. It becomes genuinely difficult to tell what can be trusted and what cannot.

Would we be able to differentiate between a real and a deepfake? Can we do that? Are there any tech resources that can be helpful? Let’s find out.

How to identify deepfakes?

With a sharp eye and some focus, there definitely are ways through which one can differentiate between a deepfake and a real video.

  1. Observe what you see- Since deepfakes are created by a machine, there can be unnatural eye movements: they might not match the speech or tone, might seem robotic, or might not be smooth.

  2. Audio- Similarly, the audio might not be very clear, the words might not match the mouth movements, or might not coordinate with eye movements and facial expressions.

  3. Awkward body movement- Limbs might be too long or short, and body movement might appear unnatural and out of sync.

  4. Facial expressions- Look for those unnatural movements in the lips, the neck, or the face altogether. Since part of the audio or video is overlapped with an artificial audio/video, there might be discrepancies.

  5. Lighting and skin tone mismatch- Although deepfake creators try to replicate exact colors, there might be a mismatch in the skin tone or the overall lighting used to cover up irregularities. The Indian Express, in fact, found that ‘especially when wearing jewelry, a person in the deepfake might look unusual due to lighting effects’.

  6. Stay up-to-date- Keep yourself updated with the latest happenings in and around the country so that you don’t fall into the deepfake trap.

  7. Care before you share- Take some care to verify the source of any audio/video, and only then share it.

  8. Use AI against AI- There are AI voice detectors that, although not always free, can definitely help against deepfakes.

  9. Other technological solutions- Apart from the above observations, you can also take a screenshot of the video and run a reverse image search to check the source and the original video: open Google’s reverse image search, click the camera icon that says ‘Search by image’, and upload the screenshot. Google will then show you if the visuals are taken from previous videos.
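
Reverse image search works because services compare compact fingerprints of images rather than raw pixels, so a re-encoded screenshot still matches its source. As a rough illustration of one such fingerprint, here is a pure-Python sketch of the ‘difference hash’ (dHash); the 9x8 grids below are synthetic stand-ins for an image resized to 9x8 grayscale.

```python
# dHash: hash each pixel against its right-hand neighbour, giving a
# 64-bit fingerprint that survives small brightness/compression changes.

def dhash(pixels):
    """pixels: 8 rows of 9 grayscale values (0-255). Returns a 64-bit int."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means a likely match."""
    return bin(a ^ b).count("1")

# Synthetic "frame" and a slightly brightened re-encoding of it.
original = [[(r * 37 + c * 53) % 256 for c in range(9)] for r in range(8)]
screenshot = [[min(255, v + 2) for v in row] for row in original]

assert hamming(dhash(original), dhash(screenshot)) <= 5  # near-duplicate
```

Because only neighbour comparisons are stored, uniform lighting shifts barely change the hash, which is why a screenshot can still be traced back to the original clip.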

However, we need more than individual action against such misinformation, which has the potential to polarize societies, create a trust deficit, and spread like wildfire.

What have countries done against deepfakes?

Global measures: We all heard about the recent AI Safety Summit at Bletchley Park, the first of its kind. Major players such as the US, the UK, China, Japan, India, and the EU were among the 28 participants that signed a declaration agreeing on the need for global action against AI risks.

Various news articles stated that ‘The declaration acknowledges the substantial risks from potential intentional misuse or unintended issues of control of frontier AI—especially cybersecurity, biotechnology, and disinformation risks.’

This Summit symbolizes both acknowledgment of the problem and the collective effort that the nations of the world are willing to put forward. In addition, individual countries have taken steps in this direction.

The US: US President Joe Biden issued an executive order addressing the challenges posed by AI. It mandates that companies share the test results of their new AI products with the Federal Government before opening them up to users at large.

Along the lines of the EU AI Act, the US has also proposed the DEEPFAKES Accountability Act, 2023, which is currently pending in Congress.

The EU: The European Union already has an act safeguarding against AI risks, the EU AI Act. Article 52(3) of the act ‘ensures advertisers either do not employ deepfakes or mandatorily disclose that advertisements have used deepfake content. It shall also regulate online platforms to prepare mandatory mechanisms to tackle deepfake advertisements.’

India: The Indian representative at Bletchley Park rightly said: “deepfakes are the latest and even more dangerous and damaging form of misinformation and need to be dealt with by (online) platforms”. Just before the G20 Summit, Prime Minister Narendra Modi had called for a global framework for the expansion of “ethical” AI tools.

India is looking to invoke a provision of the IT Rules, 2021 that would require WhatsApp to share the details or identity of the first originator of a message. Although we need harmonization of these laws into a single regulation (like the EU’s AI Act), for now the Digital Personal Data Protection Act, 2023 (S. 6), the Information Technology Act, 2000 (S. 66C, 66D, 66E and 79), the IT Rules, 2021 (S. 4(2)), and the Consumer Protection Act, 2019 might be invoked against AI-led concerns.

While countries are formulating measures, tech giants are not far behind in tackling deepfakes.

How have tech giants responded?

Google announced tools ‘which rely on watermarking and metadata to identify synthetically generated content’.

“Metadata allows content creators to associate additional context with original files, giving you more information whenever you encounter an image. We’ll ensure every one of our AI-generated images has that metadata,” Google CEO Sundar Pichai wrote in a blog post. Watermarking is typically used to identify ownership of the copyright of a signal (audio, video, or image).
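
To make the watermarking idea concrete, here is a toy least-significant-bit (LSB) sketch: it hides a short provenance tag in the lowest bit of a list of pixel values and reads it back. This is not Google’s actual method (its production tools are designed to survive cropping and re-encoding); the example only shows the general embed-and-extract idea, and the tag name is illustrative.

```python
# Toy LSB watermark: each character of the tag is written, bit by bit,
# into the least significant bit of successive pixel values.

TAG = "AI"  # hypothetical provenance tag marking AI-generated content

def embed(pixels, tag):
    bits = [b >> i & 1 for b in tag.encode() for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the lowest bit only
    return out

def extract(pixels, n_chars):
    chars = []
    for c in range(n_chars):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[c * 8 + i] & 1)
        chars.append(chr(byte))
    return "".join(chars)

image = [128] * 64                 # stand-in for real pixel data
marked = embed(image, TAG)
assert extract(marked, len(TAG)) == "AI"
```

Changing only the lowest bit leaves the image visually identical, which is the appeal of watermarking: the provenance signal rides along invisibly with the content.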

In its efforts against deepfakes, Facebook/Meta announced a Deepfake Detection Challenge to spread awareness and attract talent in this direction.

Operation Minerva says on its website: ‘Our technology uses digital fingerprinting to identify deepfake videos and/or revenge porn videos on many of the most popular adult video sharing sites.’

YouTube’s recent guidelines on a crackdown on AI-generated deepfakes are a welcome step! They said that “YouTube content creators who won’t disclose the fact that their videos are AI-generated will be suspended from the platform’s Partner Program while other penalties will also be levied.” They further stated that they will discuss these guidelines with creators before launching them so that “they understand these new requirements”.

Prime Responsibility

While countries take a leap towards legislating laws to tackle deepfake, it becomes our prime responsibility to keep ourselves aware and alert. Ensure from your side, that you verify the source of any content before sharing or believing it. Be a human firewall. AI might be intelligent but it’s not always right.

Frequently Asked Questions

Is deepfake technology a threat?
Reaching ever higher accuracy levels, deepfake technology can be a threat to sensitive information like your personal photos, legal documents, and so on.

Is it hard to create a deepfake?
The increasing number of deepfake videos and user-friendly applications implies that making deepfake content is not very difficult now; you don't need to learn programming or machine learning.

Is creating a deepfake a crime?
In general, creating and using deepfake technology is not a crime. However, using this technology for scams, harassment, and other criminal motives can be categorized as a crime. Just as using social media is not a crime, using social media for criminal purposes is.

Are there positive uses of deepfakes?
Deepfakes can be used in advertisements, movies, education, etc.
