TFGBV Taxonomy
Abuse Type: Deceptive synthetic media

Last Updated: 6/12/25
Definition: Fabricating realistic representations of real individuals without their consent.
Super Type: Intimate image abuse (IIA)
Perpetrators: Personal connection, Informal group, Stranger
Perpetrator Intents: Silence, Entertainment, Punitive intent, Aggrandizement
Targets: Society, Private individual, Public figure
Impact Types: Infringement of rights & freedoms, Sexual harm, Social & political harm, Economic harm
Synonyms: Deepfake, Digitally morphed image, Face-swapping, AI-generated synthetic media, Shallowfake
Skill Required: Low

Deceptive synthetic media refers to images, videos, or audio that have been altered or entirely created using artificial intelligence or other digital software to falsely depict real individuals, often with the goal of deceiving, harassing, or exploiting targets. Such media is created without the consent or knowledge of the individuals depicted.

Deceptive synthetic media is frequently weaponized to harm an individual's reputation and violate their autonomy. Women are disproportionately targeted through the creation of sexualized synthetic content, in which their likeness is manipulated or generated to depict them in explicit scenarios they never consented to or engaged in. When such content is distributed without the individual's knowledge or permission, particularly on public platforms, it constitutes a form of intimate image abuse. This abuse not only distorts public perception of the victim but also causes lasting emotional, social, and professional harm.

Deceptive synthetic media is also increasingly used to advance political agendas by discrediting or undermining public figures. Politicians, activists, and journalists have been targeted through fabricated videos, audio clips, or images that falsely depict them engaging in unethical, illegal, or scandalous behavior (a form of online impersonation). Such media can be used to erode public trust, disrupt election campaigns, or stoke political polarization. The speed and realism with which this content can circulate—especially on social media—make it a powerful tool for misinformation and manipulation, with serious implications for democratic institutions and public discourse.

Cultural variation

While the technology is globally accessible, cultural attitudes toward women's rights and digital privacy influence how this abuse is perceived and addressed. Some regions may have stronger legal frameworks or social support systems, while others may normalize this abuse or lack enforcement mechanisms.

Skill level

Low - Previously required high technical skills, but generative AI tools have made creation accessible to general users with minimal technical knowledge.

References

  • Australian eSafety Commissioner. (2024, September). Technology, gendered violence and Safety by Design: An industry guide for addressing technology-facilitated gender-based violence through Safety by Design. Australian ESafety Commissioner. https://www.esafety.gov.au/sites/default/files/2024-09/SafetyByDesign-technology-facilitated-gender-based-violence-industry-guide.pdf
  • Humane Intelligence. (2025). Digital violence, real world harm: Evaluating survivor-centric tools for intimate image abuse in the age of gen AI.
  • NCA National Assessments Centre. (2025). National strategic assessment (NSA) 2025: Overview of serious and organized crime (SOC). National Crime Agency. https://www.nationalcrimeagency.gov.uk/nsa-overview-of-soc-2025
  • Security Hero. (2023). 2023 state of deepfakes: Realities, threats, and impact. Security Hero. https://www.securityhero.io/state-of-deepfakes/#overview-of-current-state
  • Tenbarge, K. (2023, March 27). Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy. NBC News. https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071

AI Risks and Opportunities

Risks

The advancement of AI, especially in deep learning and generative models, has significantly increased the risks associated with deceptive synthetic media. Deepfake technology now enables the rapid and realistic generation of audio, video, and images that are often indistinguishable from authentic content. As a result, malicious actors can fabricate convincing material that depicts individuals—disproportionately women and marginalized groups—in sexual, criminal, or compromising situations they were never involved in.

One security company found that explicit deepfakes increased more than fourfold from 2022 to 2023 (IIA Tools Landscape Analysis, 2025). This not only facilitates intimate image abuse and reputational harm but also amplifies misinformation and disinformation, with broader political, social, and economic consequences.



Prevalence

  • In the US, 16% of congresswomen have had non-consensual AI imagery generated of them (IIA Tools Landscape Analysis, 2025).
  • 11.4% of women in Türkiye have had photos of themselves manipulated and used for electronic defamation (UN Women, 2023a).
  • “Explicit deepfakes increased 464% from 2022 to 2023" (Security Hero, 2023).
  • "The most popular website dedicated to sexualised deepfakes gets about 17 million hits a month" (Tenbarge, 2023).

Mitigation Strategies

  • Update ranking model: Move away from engagement-based content ranking.
  • Quarantine borderline content: Implement quarantine systems for gray-area content.
  • Default to highest privacy settings: Default privacy settings to minimize user vulnerability.
  • Transparent feedback and reporting: Enhance feedback mechanisms for reporting and transparency.
  • Prioritized reporting: Prioritize reports of TFGBV over reports of less time-sensitive harms.
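To illustrate the prioritized-reporting strategy above, here is a minimal sketch of a triage queue that surfaces TFGBV reports ahead of less time-sensitive ones. The category names, severity ranking, and report fields are illustrative assumptions, not a reference to any platform's actual system.

```python
import heapq
import itertools

# Hypothetical severity ranking: lower number = reviewed sooner.
SEVERITY = {"tfgbv": 0, "impersonation": 1, "spam": 2}

class ReportQueue:
    """Min-heap triage queue; ties broken by arrival order."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # preserves FIFO within a severity level

    def submit(self, report_id, category):
        # Unknown categories fall to the back of the queue.
        severity = SEVERITY.get(category, len(SEVERITY))
        heapq.heappush(self._heap, (severity, next(self._counter), report_id))

    def next_report(self):
        # Pops the highest-priority (lowest severity number) report.
        _, _, report_id = heapq.heappop(self._heap)
        return report_id

q = ReportQueue()
q.submit("r1", "spam")
q.submit("r2", "tfgbv")
q.submit("r3", "impersonation")
print(q.next_report())  # "r2" — the TFGBV report is reviewed first
```

A real moderation pipeline would also weigh signals such as virality and repeat offenders, but the core design choice is the same: review order is driven by harm severity and time sensitivity rather than arrival order.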