Recent advancements in AI-generated deepfakes and voice cloning have raised concerns about their potential impact on elections not only in India but also around the world. These technologies, capable of producing convincing yet entirely fabricated videos and audio recordings, have the potential to mislead voters and manipulate election outcomes.
A deepfake refers to a type of synthetic media created using artificial intelligence techniques, particularly deep learning algorithms, to manipulate or generate visual and audio content that appears convincingly real but is entirely fabricated.
These sophisticated algorithms analyse and synthesise existing images, videos, or audio recordings to seamlessly superimpose or replace the likeness and voice of individuals onto different bodies or contexts. The result is often highly deceptive content that can be used for various purposes, including spreading misinformation, impersonation, and creating falsified evidence.
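The superimposition described above can be illustrated with a deliberately simple sketch: copying a rectangular "face" region from a source frame into a target frame. Real deepfake pipelines use deep neural networks (encoder–decoder architectures trained on thousands of images), not pixel copying; this toy function, with frames represented as 2D lists, only conveys the core idea of replacing one person's likeness with another's.

```python
# Toy illustration of the idea behind face swapping: replace a
# rectangular region of a target frame with the matching region
# of a source frame. Real systems learn this mapping with deep
# neural networks and blend the result seamlessly; this is a sketch.

def swap_region(target, source, top, left, height, width):
    """Return a copy of `target` with a rectangular region replaced
    by the corresponding region of `source` (frames as 2D lists)."""
    result = [row[:] for row in target]          # deep-copy rows
    for r in range(top, top + height):
        for c in range(left, left + width):
            result[r][c] = source[r][c]          # overwrite pixels
    return result
```

In an actual deepfake, the network also matches lighting, pose, and expression, which is what makes the fabricated result so convincing.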
Deepfakes have garnered significant attention due to their potential to deceive viewers and manipulate public opinion, posing serious challenges to the authenticity and integrity of digital media and communication. In India, where more than half of internet users consume news online, the vulnerability to AI-generated misinformation is particularly high.
The 2019 elections were marred by hate speech and disinformation campaigns, but the rapid evolution of deepfake technology has exacerbated these concerns. A Kantar-Google report highlights the susceptibility of Indian internet users to AI-generated misinformation. The emergence of deepfake content targeting politicians and political parties on social media platforms further underscores the gravity of the situation.
India ranks as the sixth most vulnerable country to deepfake threats, with politicians and celebrities being prime targets due to the abundance of publicly available visual and audio data. The elections in India this year have seen AI-generated deepfakes and voice cloning, which can spread rapidly across various languages and potentially influence millions of voters within hours. Leading this technological wave is Divyendra Singh Jadoun, known as the “Indian Deepfaker,” who offers his AI-powered skills to politicians for various purposes, including swaying votes and attacking opponents.
While efforts are being made to combat the dissemination of objectionable content, including assigning the Indian Cyber Crime Coordination Centre (I4C) as the nodal agency and deploying state police cybercrime units to patrol online platforms, experts express scepticism about the effectiveness of such measures. The absence of specific regulations governing deepfakes in India and the accessibility of deepfake technology exacerbate the challenge of containing their spread.
To address these threats effectively, security agencies advocate for the establishment of specialised teams equipped with advanced tools to detect and analyse AI-generated content. Moreover, public awareness campaigns are essential to educate individuals about the risks associated with AI-generated content and empower them to identify and report such materials. Collaboration between law enforcement agencies and social media platforms is also crucial in detecting and removing malicious content.
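One common mechanism behind the platform–agency collaboration described above is hash-sharing: platforms check uploads against a registry of fingerprints of previously reported malicious media. The sketch below, a simplification and not any specific platform's system, uses an exact SHA-256 match; production systems use perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

# Minimal sketch of hash-based matching against a shared registry of
# known malicious media. Assumes exact byte-for-byte matches; real
# deployments use perceptual hashing robust to re-encoding.

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying a media file byte-for-byte."""
    return hashlib.sha256(data).hexdigest()

def is_known_fake(data: bytes, registry: set[str]) -> bool:
    """Check an uploaded file against a shared registry of known fakes."""
    return fingerprint(data) in registry

# Hypothetical usage: an agency shares fingerprints of reported clips,
# and a platform screens new uploads against them.
registry = {fingerprint(b"previously reported deepfake clip")}
```

A shared registry lets a clip taken down on one platform be blocked on others without re-transmitting the content itself.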
AI-generated deepfakes and voice cloning represent significant challenges to the integrity of elections in India and beyond. While regulatory efforts and collaborative initiatives are underway to mitigate these threats, the evolving nature of deepfake technology necessitates continuous technological advancements and vigilance to safeguard the democratic process.
Deepfake Statistics Highlight Growing Concerns and Challenges
1. Global Awareness Gap: A study conducted by iProov reveals a significant lack of awareness regarding deepfakes globally, with 71% of respondents admitting they do not know what a deepfake is, while just under a third claim awareness of the phenomenon.
2. Rapid Increase in Deepfake Creation: The number of deepfake videos on the internet has surged, doubling since 2018 to reach 14,678 videos in 2021. Deepfake technology also continues to advance, as evidenced by an 84% increase in deepfake creation models from 2019 to 2020.
3. Detection Challenge: Detecting deepfakes remains a formidable challenge: one study found that one in four people is unable to distinguish a deepfake from an authentic audio sample, a serious concern, particularly in industries that prioritise data security.
4. Growing Threat of Deepfake-Based Fraud: Deepfakes are increasingly exploited in cybercrime, with reported deepfake-based fraud attempts soaring by 300% in 2020. These malicious tactics extend to social engineering attacks, tricking individuals into divulging sensitive information or transferring funds to fraudulent accounts.
5. Legal and Ethical Implications: The widespread use of deepfakes raises profound legal and ethical dilemmas, including issues surrounding digital content authenticity, privacy rights, consent, and the ethical ramifications of manipulating digital media.
6. Global Increase in Deepfake Attacks: Sumsub’s findings indicate a tenfold surge in detected deepfake incidents worldwide across all industries from 2022 to 2023, underscoring the escalating threat posed by this technology.
7. Emerging AI Regulations: Recognising the risks associated with AI-generated content, regulatory efforts are underway globally, with China leading in deepfake regulation. Key jurisdictions such as the EU, UK, and US are also making strides to address deepfake-related challenges.
8. Industries Impacted by Identity Fraud: Various sectors experience the detrimental effects of identity fraud, with online media, professional services, healthcare, transportation, and video gaming emerging as the top five industries affected. Notably, online media witnessed a substantial 274% surge in identity fraud between 2021 and 2023.
9. Preventative Measures: Combatting identity fraud requires organisations to implement stringent measures, including mandatory identification protocols. Sumsub’s Identity Fraud Report offers insights into AI-powered fraud prevention strategies and an overview of evolving AI regulations.
10. Impact on Security Operations: A survey of over 650 senior security operations professionals in the U.S. reveals that 55% of respondents experience heightened stress due to generative AI. Limited staffing and resources for adopting new technologies, including generative AI, contribute significantly to this stress.
These statistics underscore the pressing need for robust countermeasures and regulatory frameworks to address the escalating threat posed by deepfake technology across various sectors, including the integrity of elections.