TFGBV Taxonomy
Abuse Type:
Inappropriate content

Last Updated 8/4/25
Definition: Deliberately exposing targets to disturbing, graphic, or psychologically harmful content as a method of control, manipulation, or intimidation.
Super Type:
Online harassment
Perpetrators:
Informal group, Stranger
Perpetrator Intents:
Punitive intent, Entertainment
Targets:
Organization, group, or community; Society; Private individual; Public figure
Impact Types:
Abuse normalization, Psychological & emotional harm
Synonyms:
Extreme content, Borderline content

This abuse manifests when perpetrators intentionally share graphic violence, pornography, self-harm content, or other disturbing material with targets to cause psychological distress, desensitize them to abuse, or create trauma responses. Common tactics include sending unsolicited violent imagery, sharing content depicting sexual violence to normalize abuse, or exposing vulnerable individuals to content that triggers existing trauma. The World Economic Forum (2023) identifies this as harm occurring through content consumption, where individuals are "negatively affected as a result of viewing illegal, age-inappropriate, potentially dangerous or misleading content."

The harm occurs when individuals encounter this content without adequate preparation or when it's used to isolate, manipulate, or groom vulnerable individuals. Perpetrators may share such content directly with targets or create environments where targets are likely to encounter it.

Skill level

Low - Creating or sharing inappropriate content requires minimal technical skills, as perpetrators can easily find, save, and redistribute existing harmful content from various online sources.

Cultural variation

Regional legal frameworks influence how inappropriate content is defined and regulated, leading to inconsistencies in protections across jurisdictions.

References

  • Australian eSafety Commissioner. (2022, February 14). Parental awareness of children’s exposure to risks online. eSafety Commissioner. https://www.esafety.gov.au/research/mind-gap-parental-awareness-childrens-exposure-risks-online/parental-awareness
  • Centre for International Governance Innovation (CIGI), Canada. (2023, March 1). Final Technical Report: Supporting a Safer Internet: Global Survey of Gender Based Violence Online. IDRC Digital Library. https://idl-bnc-idrc.dspacedirect.org/server/api/core/bitstreams/4b2265e1-f259-49b5-8301-b35a5e02aa69/content
  • Lee, E., Schulz, P., & Lee, H. E. (2023). The Impact of COVID-19 and Exposure to Violent Media Content on Cyber Violence Victimization among Adolescents in South Korea: National Population-Based Study. Journal of Medical Internet Research, 26, e45563. https://doi.org/10.2196/45563
  • Tech Coalition. (2025). Assessing Online Child Sexual Exploitation and Abuse (OCSEA) Harms in Product Development. Tech Coalition. https://www.technologycoalition.org/knowledge-hub/assessing-ocsea-harms-in-product-development
  • World Economic Forum. (2023, August 4). Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms. World Economic Forum. https://www.weforum.org/publications/toolkit-for-digital-safety-design-interventions-and-innovations-typology-of-online-harms/

AI Risks and Opportunities

Risks

Generative AI systems can produce realistic harmful content with minimal technical skill, lowering barriers to creating potentially traumatizing material. AI-powered recommendation systems may also inadvertently promote inappropriate content, particularly when algorithms prioritize engagement over safety (World Economic Forum, 2023).
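
To make the engagement-versus-safety tension concrete, the following minimal sketch contrasts a pure engagement score with one tempered by a harm-probability penalty. All field names, weights, and scores are hypothetical assumptions for illustration, not a description of any production ranking system.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    engagement_score: float   # predicted engagement, normalized to 0..1
    harm_probability: float   # upstream classifier output, 0..1

def safety_adjusted_score(item: Item, harm_penalty: float = 2.0) -> float:
    """Down-weight items in proportion to predicted harm.

    With harm_penalty=0 this collapses to pure engagement ranking,
    which is the failure mode described above."""
    return item.engagement_score * max(0.0, 1.0 - harm_penalty * item.harm_probability)

items = [
    Item("a", engagement_score=0.9, harm_probability=0.4),  # engaging but risky
    Item("b", engagement_score=0.6, harm_probability=0.0),  # benign
]
ranked = sorted(items, key=safety_adjusted_score, reverse=True)
print([i.item_id for i in ranked])  # ['b', 'a']: the risky item is no longer amplified
```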

Opportunities

AI detection tools can help identify and moderate inappropriate content before users encounter it. Machine learning classifiers can be developed to recognize potentially harmful material, assisting platforms in proactive content moderation (Tech Coalition, 2025).
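
As a concrete illustration, below is a minimal sketch of a text classifier that routes likely-harmful items to a human review queue rather than auto-removing them. The training examples, labels, and threshold are toy placeholders; real moderation systems rely on large labeled corpora and purpose-built models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labels: 1 = potentially harmful, 0 = benign (illustrative only).
texts = [
    "graphic depiction of violence",
    "explicit threatening imagery",
    "photos from our hiking trip",
    "recipe for vegetable soup",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

def review_queue(candidates, threshold=0.5):
    """Route items whose predicted harm probability meets the threshold
    to human moderators instead of auto-removing them."""
    probs = clf.predict_proba(candidates)[:, 1]
    return [text for text, p in zip(candidates, probs) if p >= threshold]

flagged = review_queue(["graphic violence in the replies", "vegetable soup recipe"])
print(flagged)
```

Routing to human review, rather than automated removal, keeps borderline calls with moderators while still surfacing likely-harmful material proactively.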

Prevalence

  • 62% of teens aged 14-17 in Australia reported exposure to inappropriate or harmful content, including gory or violent material, unhealthy eating, hate messages, drug taking, self-harm, ways to take their own life, and violent sexual images or videos (eSafety Commissioner, 2022).
  • In a South Korean study, 67% of children aged 10-18 reported that they had been exposed to violent media content at least once or twice per year (Lee et al., 2023).
  • In a survey across 18 countries, primarily in Asia and South America, 28.1% of respondents reported receiving unwanted sexual images (CIGI, 2023).

Mitigation Strategies

Real-time prompts for reconsideration
Nudge users to reconsider potentially harmful behavior before they act.
User-controlled content filters
Give users filters to manage their own exposure to sensitive content.
Update ranking model
Move away from engagement-based content ranking.
Quarantine borderline content
Implement quarantine systems for gray-area content (see the first sketch below).
Rate limits on low-trust accounts
Apply rate limits to interactions from new or unverified accounts (see the second sketch below).
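
To illustrate the quarantine strategy, here is a minimal sketch of threshold-band routing. It assumes a harm-probability score from an upstream classifier; the cutoff values are illustrative assumptions, not recommendations.

```python
def route_content(harm_probability: float) -> str:
    """Three-way routing: clearly benign content is published, clearly
    violating content is removed, and the ambiguous middle band is
    quarantined (e.g., withheld from recommendations pending review)."""
    REMOVE_THRESHOLD = 0.9      # high confidence of violation (illustrative)
    QUARANTINE_THRESHOLD = 0.5  # start of the gray area (illustrative)
    if harm_probability >= REMOVE_THRESHOLD:
        return "remove"
    if harm_probability >= QUARANTINE_THRESHOLD:
        return "quarantine"
    return "publish"

for score in (0.2, 0.6, 0.95):
    print(f"{score} -> {route_content(score)}")  # publish, quarantine, remove
```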
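And to illustrate trust-tiered rate limiting, a minimal token-bucket sketch follows. The trust tiers, burst sizes, and refill rates are hypothetical assumptions chosen only to show the mechanism.

```python
import time

class TokenBucket:
    """Classic token bucket: capacity bounds burst size, refill rate
    bounds sustained throughput."""
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity            # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical tiers: unverified accounts get small bursts and slow refill.
TIER_LIMITS = {
    "verified": (30, 1.0),    # 30-message burst, ~1 msg/sec sustained
    "low_trust": (3, 0.05),   # 3-message burst, ~1 msg per 20 sec
}
buckets: dict[str, TokenBucket] = {}

def may_send(account_id: str, trust_tier: str) -> bool:
    """Allow a message if the sender's per-account bucket has a token."""
    if account_id not in buckets:
        capacity, rate = TIER_LIMITS[trust_tier]
        buckets[account_id] = TokenBucket(capacity, rate)
    return buckets[account_id].allow()
```

Keying buckets per account, with tighter budgets for unverified tiers, limits how quickly a new account can mass-send disturbing material without throttling established users.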