Understanding and Countering Hate through Computational Analysis
This project investigates approaches to identifying, studying, and intervening in hateful activities online, relying on signals of hatred and verbal aggression identified in text. Existing tools, however, are limited and often fail to keep pace with evolving language. For example, effectively enforcing content moderation policies on social media platforms is difficult because extremist hate groups use constantly evolving coded language, because a message's hatefulness can depend on context and on the background of the speaker, and because online communities change rapidly. As a result, content that a platform's policy requires to be removed may be missed entirely or removed only after a significant delay. Beyond hate speech detection, there is also a need for early indicators of hate community formation and for techniques to prevent the activities that lead to it. This working group is developing collaborative research projects that combine computational analysis of large-scale data sets with expertise from linguists, sociologists, psychologists, political scientists, anthropologists, and others to help understand and counter the formation of hate communities and to moderate hateful content.
Working group co-chairs:
Kathleen Carley, Professor, Institute for Software Research, Carnegie Mellon University
Yu-Ru Lin, Associate Professor, School of Computing and Information, University of Pittsburgh
Group Members:
Kathleen Carley, Yu-Ru Lin