Online harassment is an umbrella term encompassing a wide range of technology-mediated behaviors intended to silence or intimidate a target, many of which are covered elsewhere in this taxonomy.
It often includes repeated hostile messages, direct threats of violence, and offensive or abusive comments (including expressions of sexist, racist, xenophobic, homophobic, transphobic, or ableist prejudice). It may also include coordinated pile-on attacks or sustained harassment campaigns by formal or informal groups intended to overwhelm the target.
Online harassment often encompasses many other forms of TFGBV, including cyberstalking, online impersonation, intimate image abuse, inappropriate content, and deceptive synthetic media.
The harassment often escalates across platforms and can include doxxing, the publication of personal information to facilitate offline harm.
Some forms that do not have their own entries in this taxonomy include spamming, sending viruses, and hacking.
While anyone can be targeted by online harassment, women and gender minorities are disproportionately targeted.
Harassment patterns vary significantly across regions, and intersectional discrimination compounds the harm for people with multiple marginalized identities.
In some contexts, sharing a picture of a woman without a hijab, or of a woman with a man who is not her husband, can itself be considered online harassment.
Cultural contexts shape which topics or identities become targets, with women journalists in many regions facing higher rates of harassment for covering certain subjects.
Research by UNFPA shows that Black, Asian, and minority ethnic (BAME) LGBTQIA+ people in the UK experience TFGBV at more than twice the rate of white LGBTQIA+ people (20% versus 9%).
Low: requires minimal technical knowledge, primarily involving standard platform features such as messaging, commenting, and account creation.
AI systems can amplify harassment through biased recommendation algorithms that reward inflammatory content. UNESCO research demonstrates that large language models perpetuate gender bias, with some generating misogynistic content in 20% of instances.
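As a rough illustration of this amplification mechanism, the sketch below shows how a feed that ranks purely by engagement surfaces inflammatory posts first. The posts, reaction counts, and ranking rule are invented assumptions for illustration, not any platform's actual algorithm.

```python
# Minimal sketch of why engagement-optimized ranking can amplify
# harassment: if inflammatory posts draw more reactions, a ranker
# that sorts purely by engagement pushes them to the top.
# All posts and numbers below are invented for illustration.
posts = [
    {"text": "thoughtful thread on local policy", "reactions": 40},
    {"text": "inflammatory pile-on against a journalist", "reactions": 900},
    {"text": "photos from the weekend hike", "reactions": 25},
]

# Rank purely by engagement, as a naive feed might.
feed = sorted(posts, key=lambda p: p["reactions"], reverse=True)
print(feed[0]["text"])  # the inflammatory post leads the feed
```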
Advances in AI make it easier than ever to create deceptive synthetic media, a particularly harmful form of online harassment, and to create bots, another common vector of abuse.
AI also offers detection opportunities: natural language processing can identify harassment patterns, and automated systems can assist content moderation.
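As a minimal sketch of what NLP-based detection can look like, the example below trains a toy TF-IDF classifier to flag potentially harassing messages. The training examples, threshold, and routing decision are illustrative assumptions; production systems rely on far larger labeled corpora and keep humans in the loop.

```python
# Minimal sketch: flagging potentially harassing messages with a
# TF-IDF + logistic regression classifier. The tiny training set
# here is illustrative only; a real moderation system needs a large,
# carefully labeled corpus and human review of every automated flag.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = harassing, 0 = benign).
messages = [
    "you are worthless and should disappear",
    "nobody wants you here, leave or else",
    "we know where you live",
    "great article, thanks for sharing",
    "congratulations on the new job",
    "could you share the source for this claim?",
]
labels = [1, 1, 1, 0, 0, 0]

# Character n-grams are more robust to the misspellings and symbol
# substitutions harassers often use to evade word-level filters.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(messages, labels)

# Score an incoming message; anything above the threshold is routed
# to a human moderator rather than actioned automatically.
incoming = "u r worthless, just leave"
score = model.predict_proba([incoming])[0][1]
if score > 0.5:
    print(f"flag for review (score={score:.2f})")
```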
AI could also support survivors of online harassment in requesting takedown of content when the harassment involves posted images.
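One plausible building block for such takedown support is perceptual hashing, the hash-matching approach used by services such as StopNCII. The sketch below, which assumes the Pillow and ImageHash packages and hypothetical file paths, flags near-duplicate re-uploads of a reported image without the image itself ever needing to be stored or shared.

```python
# Minimal sketch: matching re-uploads of a reported image with a
# perceptual hash, in the spirit of hash-matching takedown systems
# such as StopNCII. File paths and the distance threshold are
# illustrative assumptions; requires the Pillow and ImageHash packages.
from PIL import Image
import imagehash

# Hash the image a survivor has reported (only the hash needs to be
# retained, never the image itself).
reported_hash = imagehash.phash(Image.open("reported_image.jpg"))

# Compare a newly uploaded image against the reported hash.
candidate_hash = imagehash.phash(Image.open("new_upload.jpg"))

# The difference is a Hamming distance; small values survive
# re-encoding, resizing, and minor edits. The cutoff of 8 is a
# common heuristic, not a universal constant.
if reported_hash - candidate_hash <= 8:
    print("likely match: queue for takedown review")
```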