Generative AI Likely to Play A Significant Role In Creating Misleading or Fake Content: CrowdStrike

The use of generative AI in creating misleading or fake content is a growing concern. As generative AI becomes more sophisticated, there is an increased risk of malicious actors exploiting these capabilities to manipulate information, spread disinformation, or impersonate individuals. The potential for creating deceptive narratives poses a threat to information integrity and public trust. Balancing innovation with responsible AI practices is essential to mitigate the risks associated with the misuse of generative AI in content creation.

Sharing his views on fake content created with AI models, Adam Meyers, Head of Counter Adversary Operations at CrowdStrike, expressed serious concerns. He said, “CrowdStrike tracks the use of fabricated and manipulated content from nation-state, criminal, and hacktivist threat actors used to deceive individuals and groups of people for a variety of purposes. The barrier to entry for deepfake content continues to decline, thanks to the continuous advancement of generative AI. CrowdStrike assesses that threat actors will expand their use of generative AI tools in criminal, disinformation, and influence operations over the coming year.”

He went on to illustrate the dangers of unethical AI use with major incidents around the world. He commented, “One recent example involves China-nexus actors accused of leveraging AI to manipulate video content of Taiwan’s presidential candidates two days before the election. Taiwan media sources reported that Chinese influence actors were potentially creating and distributing a large volume of AI-manipulated content across social media featuring Taiwan’s President Tsai Ing-wen and presidential candidate Lai Ching-te. The use of generative-AI text-to-speech language models and AI-generated avatars has reportedly led unnamed official sources to believe this information operation (IO) was perpetrated by Chinese influence actors. The use of multiple generative-AI language models in pro-China IOs was a consistent trend in 2022 and 2023.”

“In late January 2024, ahead of the New Hampshire primary, an investigation was announced into suspected voter-suppression attempts after potential voters received deepfake robocalls with a voice spoofing United States President Biden, including verbal anecdotes commonly used by the President. And just this week, two days before Slovakia’s elections, an audio recording impersonating the voice of Michal Simecka, who leads the Progressive Slovakia party, was posted to social media, discussing how to rig the election by buying votes. Both ongoing incidents indicate generative AI will likely play a significant role in creating misleading or fake content as part of mis- or disinformation efforts relating to various 2024 global elections,” he concluded.
