Deepfakes: A Feast of Lies Devouring the Future of Truth

Think of a world where you can have a video call with a long-deceased political figure, watch a movie starring your favorite actor who never actually filmed those scenes, or hear a famous singer perform a song they never recorded. Intriguing and unsettling as it may sound, this is no longer science fiction. Welcome to the world of deepfake AI.

Deciphering the global impact of deepfakes

Multimodal AI refers to artificial intelligence systems that can process and generate content across multiple modes of information, such as text, images, videos, and audio. These systems use advanced algorithms and learning techniques to understand and create content in various formats. In short, these applications can combine what they have learned from images to generate dynamic video content.

Deepfake AI leverages advanced machine learning algorithms, particularly deep neural networks, to generate or alter content that appears real but is actually fabricated. The term "deepfake" combines "deep learning" and "fake." The technology has been in use since 2017 to produce realistic images, videos, and audio recordings that mimic human behavior.

“We are witnessing extraordinary challenges with regard to trust in media. As digital platforms on the Internet amplify the reach and influence of certain content via ever more complex and opaque algorithms, mis-attributed and mis-contextualized content spreads quickly. Whether inadvertent misinformation or deliberate deception via disinformation, inauthentic content is on the rise,” explains Subroto Panda, Chief Information Officer of renowned law firm Anand and Anand.

Ahead of India’s 2020 Delhi assembly elections, a deepfake of Manoj Tiwari, a politician from the Bharatiya Janata Party (BJP), went viral on WhatsApp. The video, originally in English, was manipulated to show Tiwari speaking in Haryanvi, a local dialect, to target specific voters. This incident marked one of the earliest known uses of deepfakes in a political campaign in India. “Someone with malicious intent can use deepfakes to tarnish the image of a business or political party by creating fake content that appears to be authentic. For instance, a political party could be targeted by creating deepfakes that show its leaders engaged in questionable behaviour, such as accepting bribes or making scandalous statements,” warns Vivek Surve, Political Strategist and Co-Founder, Oreo One Digital.

However, the misuse of deepfakes extends far beyond elections. In India, a deepfake video featuring a popular actress morphed onto another person's body spread like wildfire on social media, highlighting the potential for privacy violations and reputational damage. “Deepfakes are becoming a big menace for businesses, organisations and personalities, impacting not only reputational brand value but also spreading misinformation, thereby harming business interests,” cautions Subroto Panda.

Globally, deepfakes have also been used for malicious purposes. A deepfake video of former US President Barack Obama delivering a fabricated speech went viral, raising concerns about the spread of misinformation. In another instance, a deepfake of Facebook CEO Mark Zuckerberg making false claims sparked worries about manipulating public trust in institutions. The recent controversy surrounding Hollywood actor Scarlett Johansson accusing OpenAI of using her voice without consent for their AI model "Sky" further emphasizes the potential for deepfakes to exploit identities. Taking note of this, this cover story dives deep into the impact of deepfakes on politics and businesses.

“Recently, a multinational company in Hong Kong lost US$25.6 million after an employee was tricked by a deepfake video of the CFO ordering money transfers. Many businesses rely on biometric-based authentication systems, such as facial or voice recognition, to verify employee identities. However, deepfake technology has advanced to the point where it can deceive these systems,” says Sharda Tickoo, Country Manager for India & SAARC at Trend Micro, underlining the limitations of biometric-based authentication which many organizations rely on.

Trust is worth a dime

In a digitally-connected world, deepfakes are turning out to be a huge business opportunity, and proponents of the technology defend it in the name of creativity and ethical use. However, there is no guarantee that, in an untamed world, its positives will outweigh the negatives. According to the World Economic Forum, disinformation remains a top global risk for 2024, with deepfakes among the most worrying uses of AI. “Today trust is a key fulcrum of business, brands, society and even political systems, which allows these systems to operate effectively. This trust gets built over transparency and reliability of information of participants of these systems – customers, voters, regulators etc. Deepfakes strike at the heart of these systems,” says Sushant Rabra, Partner, Digital Strategy, KPMG in India.

Deepfakes can render trust and privacy meaningless if not actively countered. For many rogue individuals and organizations, deepfakes are a powerful tool to manipulate users and cash in on AI-generated content. Testifying to this is a market study published by Spherical Insights & Consulting that predicts the global deepfake AI market will reach $119.34 billion by 2033, growing at a compound annual growth rate (CAGR) of 33.12%. And it doesn't end there. As per Gartner, deepfakes (AI-generated replicas of a person's likeness) could shatter confidence in face biometric authentication solutions for 30% of companies by 2026. If that happens, trust would truly become a thing of the past.

At greater risk lies your brand equity

Deepfakes pose significant risks to businesses. They can enable financial fraud through convincing videos or audio of executives, leading to substantial losses. They can also be used for stock market manipulation, for instance by fabricating a video of a finance expert recommending specific stocks. Last year, Zerodha reported an incident where a customer narrowly avoided a Rs 1.80 lakh fraud attempt facilitated by deepfakes. “Manipulated videos or audio can spread false information, damaging the reputation of brands, products, leaders, or celebrities. This misinformation can lead to decreased consumer trust and sales. Moreover, deepfakes can be employed to fabricate misleading product demonstrations or reviews, further impacting consumer confidence and damaging businesses' reputations,” says Govind Rammurthy, CEO and Managing Director of cyber security company eScan.

In such an environment, financial fraudsters often exploit deepfakes to mimic the voices of colleagues, using the deception to request money on the pretext of urgent need. “We have seen instances where sophisticated deepfake techniques were used to impersonate a C-suite executive, resulting in financial losses. Another thing that affects the brand or the product is disinformation. Some deepfakes can mislead customers by sharing false information about the product or the brand,” warns Vinay Shetty, Regional Director, Component Business, ASUS (India & South Asia).

The time to bell the cat is NOW!

It is critical to have robust detection methods, stricter regulations, and a heightened awareness of the dangers posed by this technology. The larger challenge, however, lies in convincing governments and regulatory bodies that deepfakes are a potent threat requiring adequate rules and regulations. In the absence of a stringent regulatory framework and technological infrastructure capable of punishing rogue actors, it is highly likely that deepfakes will be harnessed for all the wrong reasons.

According to a statement by Ashwini Vaishnaw, Union Minister of Electronics and Information Technology, the government is considering imposing penalties on the creators of deepfakes as well as the platforms hosting them as part of new regulations to curb the menace. The Ministry of Electronics and Information Technology (MeitY) is currently working on draft regulations to address the problem. MeitY has emphasized the need for a clear and actionable plan that is expected to focus on four key pillars:

  • Detection: Developing tools and techniques to identify deepfakes.

  • Prevention: Measures to discourage the creation and spread of malicious deepfakes.

  • Grievance Redressal: Establishing mechanisms for individuals to report and seek recourse against deepfakes used to harm their reputation.

  • Public Awareness: Educating the public on how to identify and avoid deepfakes.

While specific deepfake laws are awaited in India, there is no choice but to leverage existing legal frameworks like the Information Technology Act (2000) to penalize the creation and distribution of deepfakes for malicious purposes.

Even at a global level, there is no clear benchmark to follow as far as laws against deepfakes are concerned, though some countries have taken initial steps. The US state of California, for example, has enacted a law requiring deepfakes used in political campaigns to be labelled as such. France has a law against the dissemination of false information that could damage someone's reputation, which could be applied to deepfakes.

Self-regulation by organizations and industry, and an understanding of its urgency, will significantly bolster efforts to contain the impact of ill-intended deepfakes. Social media platforms like Facebook and YouTube have set examples by implementing policies against misleading content, though these policies might not be specific enough to address all deepfakes effectively.

“If a proper regulatory mechanism and international cooperation for handling deepfakes are not evolved early, the menace is going to increase exponentially in the future. The present IT Act and its provisions under sections 67(A) and (B) are inept at handling artificially morphed content, as they are reactive in nature and allow only post-harm action,” emphasizes Subroto Panda. Many of the panellists who participated in this feature also agree that mass-scale education and awareness programmes need to be initiated for targeted age groups in regional languages, educating internet users about precautions while using and uploading content on the internet.

“Social media and content-sharing platforms should include IP and other source-of-content metadata in video, audio, and image files to enhance traceability,” adds Govind Rammurthy. He also insists that it is imperative to promote ethical standards in media production and consumption, including transparency in content creation and responsible use of AI.

Active collaboration between technology companies, content distributors, cybersecurity experts, and policymakers to develop comprehensive strategies for combating deepfake threats will have far-reaching results. One such effort is the Coalition for Content Provenance and Authenticity (C2PA), which aims to combat misleading information by establishing technical standards for verifying the origin and history (provenance) of digital media content. Led by Adobe, Microsoft and the BBC, C2PA represents a collaborative push within the tech industry to build technical tools against the spread of misleading information online. OpenAI and Google have also joined and agreed to mark videos generated using their AI platforms.
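The core idea behind provenance standards of this kind can be illustrated with a minimal sketch. The manifest format below is hypothetical, and real C2PA manifests use X.509 certificates and public-key signatures rather than the shared HMAC secret assumed here; but the principle is the same: a publisher binds a creator claim to a hash of the exact media bytes and signs it, and a verifier recomputes the hash and checks the signature before trusting the content.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key -- real provenance systems use
# public-key cryptography so verifiers never hold the signing secret.
SIGNING_KEY = b"publisher-secret-key"

def make_manifest(media_bytes: bytes, creator: str) -> dict:
    """Bind a creator claim to the exact media bytes via a signed hash."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    claim = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Reject the media if either its bytes or the claim were altered."""
    expected = hmac.new(SIGNING_KEY, manifest["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim tampered with, or signed by someone else
    claim = json.loads(manifest["claim"])
    return claim["sha256"] == hashlib.sha256(media_bytes).hexdigest()

video = b"...original video bytes..."
manifest = make_manifest(video, creator="Example Studio")
print(verify_manifest(video, manifest))            # authentic media
print(verify_manifest(video + b"edit", manifest))  # altered media
```

Any single-byte edit to the media changes its SHA-256 digest, so verification fails; this is why provenance metadata makes tampering detectable even when the manipulation itself is visually seamless.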

Limitations of deepfake detection tech

Deepfake detection faces an uphill battle. First, generation technology is evolving so fast that it outpaces detection algorithms in a perpetual game of cat and mouse. Second, the growing ease of use of these tools empowers even non-experts to create believable fakes, further muddying the waters. Finally, the ever-increasing quality of deepfakes blurs the line between real and fabricated, posing a challenge for both automated and human analysis.

“The lack of standardized benchmarks and evaluation metrics makes it challenging to objectively compare different deepfake detection methods, hindering progress and the adoption of best practices,” says Sharda Tickoo. She underscores that as deepfake technology evolves, new forms such as audio, text-based, and multimodal deepfakes continue to emerge, necessitating constant adaptation in detection technology to counter these evolving threats.

Adding to the woes is the limited availability of training data that reduces the effectiveness of detection algorithms, especially for less common subjects.

“The need for large, high-quality datasets to train detection models is a significant obstacle, as collecting and labelling such data is a time-consuming and costly process,” adds Vivek Surve.

Nothing short of an existential threat

Malicious actors know the pulse of today's generation, which spends most of its time watching Instagram Reels and YouTube Shorts and treats them as a first source of information. This mounting trust in short-form video makes deepfakes a huge existential threat.

Deepfakes aren't just a technological marvel; they're a ticking time bomb for online platforms and social media. Every day, we see the devastating consequences:

  • Mass Deception: People are falling victim to elaborate scams, losing hard-earned savings due to cleverly manipulated videos and audios.

  • Preying on the Vulnerable: Deepfakes become weapons, targeting seniors with threats of fake debts or loan sharks tricking users with seemingly legitimate apps.

Increasing dependence on online platforms for communication and interaction takes a scary turn when we see the line between truth and fiction blurring. “If, with just simple but socially-engineered fakes, ordinary people can be scammed, just think of how deepfakes can affect the trust of common citizens!  It is indeed very very scary,” Govind Rammurthy sounds the alarm.

Most experts are of the opinion that deepfakes will usher in an entirely unreal world, where the mere perception of reality will be enough to gain trust. “The consequences are dire, as people may struggle to distinguish between what's real and what's fabricated, leading to a loss of trust in the digital landscape,” agrees Vivek Surve.

What next?

Coordinated efforts coupled with stringent laws, regulations and appropriate frameworks are the only way to harness the positives of deepfakes. Our optimism is grounded in the expectation that powerful detection methods and regulatory frameworks to prevent the misuse of deepfakes will emerge in the near future. “The lines between reality and fiction will continue to blur, forcing us to re-examine our understanding of truth and authenticity in the digital age,” concludes Vivek Surve, hinting at a fierce battle that has just begun.

How to Distinguish between a Deepfake and Real Video

Spotting a deepfake can be tricky, but there are some red flags to watch out for that can help you distinguish between a real video and a manipulated one.

Visual Inconsistencies:

  • Unnatural blinking patterns

  • Blurring or inconsistencies around the face

  • Lighting inconsistencies

  • Unusual facial expressions

Audio Inconsistencies:

  • Lip-syncing mismatches

  • Voice inconsistencies

Content and Context:

  • Does it seem too good to be true?

  • Is the source reliable?

Additional Tips:

  • Do a reverse image search: You can use online tools like Google Images or TinEye to see if the person or scene in the video appears elsewhere online.

  • Check for fact-checking articles: If the video is related to a newsworthy event, there might be fact-checking articles debunking it as a deepfake.

  • Be skeptical and slow to share: Don't rush to share a video that seems suspicious. Take the time to examine it for red flags and consider the source before hitting that share button.

Remember: Deepfakes are constantly evolving, so these are not foolproof methods. However, by being aware of these techniques and remaining vigilant, you can increase your chances of spotting a deepfake and avoiding misinformation.
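The reverse image search tip above works because such tools match images by perceptual similarity rather than exact bytes. A minimal sketch of the idea, in plain Python with toy 2x2 grayscale grids standing in for real decoded frames (real tools decode the image file and downscale, typically to 8x8, before hashing):

```python
# Perceptual "average hash": each bit records whether a pixel is brighter
# than the image's mean, so small edits (recompression, slight colour
# drift) leave the hash nearly unchanged while different images diverge.

def average_hash(pixels: list[list[int]]) -> int:
    """Compute a compact fingerprint from a 2D grid of grayscale values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1: int, h2: int) -> int:
    """Number of differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

original = [[10, 200], [220, 30]]       # toy 2x2 "frame"
recompressed = [[12, 198], [221, 28]]   # same frame, slight value drift
different = [[200, 10], [30, 220]]      # a genuinely different frame

assert hamming(average_hash(original), average_hash(recompressed)) == 0
assert hamming(average_hash(original), average_hash(different)) > 0
```

This is why a reverse image search can surface the undoctored source of a manipulated frame: the fake usually reuses most of the original pixels, so the two hashes land within a small Hamming distance of each other.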

DIGITAL TERMINAL
digitalterminal.in