Assistant Professor Ishtiaque Ahmed is turning to the power of AI to help communities tackle online hate.
Assistant Professor Ishtiaque Ahmed is a 2023-2024 recipient of a Connaught Community Partnership Research Program Award. (Photo: Ryan Perez)
His research project, "Making the Internet Safer through Community-Powered Artificial Intelligence," is the recipient of a 2023–2024 award from the Connaught Community Partnership Research Program.
The program encourages collaborative research partnerships that foster access to each partner's unique knowledge, expertise and capabilities on issues of shared interest.
In the proposed project, Ahmed will co-design, develop, deploy and evaluate a community-powered AI system to improve upon existing content moderation processes on social media and online discussion forums.
Ahmed, an assistant professor in U of T's Department of Computer Science, will use the award to support partnerships with two non-profit organizations, one of which is the FFPF, to better address online hate speech aimed at Chinese and Muslim communities in Canada.
He will be working alongside two U of T faculty members: Shohini Bhattasali, an assistant professor in the Department of Language Studies at U of T Scarborough, and Shion Guha, an assistant professor in the Faculty of Information who is cross-appointed to the Department of Computer Science.
The researchers will collect and analyze online posts that target Muslim and Chinese communities on Facebook, Twitter and Reddit, and interview community members to understand their experiences and perspectives on online hatred. They will use the gathered input to label posts as harmful or not harmful and create two open datasets for research and awareness.
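For illustration only, one entry in such an open dataset might combine a collected post with the community-assigned label and the reasoning behind it. The sketch below is a hypothetical Python schema; none of the field names come from the project itself.

```python
# A hypothetical sketch of one record in a community-labeled dataset of the kind
# described above; field names are illustrative assumptions, not the project's schema.
from dataclasses import dataclass, field


@dataclass
class LabeledPost:
    post_id: str                # anonymized identifier for the collected post
    platform: str               # "facebook", "twitter" or "reddit"
    text: str                   # the post content being labeled
    target_community: str       # e.g. "Muslim" or "Chinese"
    harmful: bool               # community-informed label: harmful or not harmful
    explanation: str = ""       # why community members consider the text harmful
    interview_notes: list[str] = field(default_factory=list)  # supporting context


example = LabeledPost(
    post_id="r-0001",
    platform="reddit",
    text="(collected post text)",
    target_community="Muslim",
    harmful=True,
    explanation="Repeats a stereotype that interviewed community members identified as hateful.",
)
```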
Leveraging a deep-learning model, their proposed tool will take the form of keyboard software offering users an interface similar to the AI-based writing assistant Grammarly. As a user writes in their web browser, it will highlight sentences that may contain misinformation or hate speech and provide users with context that explains why the text as written could be considered harmful or offensive.
In the case of debated issues, instead of taking a specific side, the software tool will show all possible explanations. Additionally, users will be able to contest a flag and send feedback to a team of human moderators recruited from within the respective communities. The moderators will review these contestations on a separate web-portal interface, where they can give a final verdict based on their community's values. If a user's contestation is deemed valid, the tool will present it as a new interpretation of the text to the next user who writes the same or similar content.
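The loop described above (flag, explain, contest, moderator verdict, remembered interpretation) can be sketched in a few lines of Python. The sketch below is only an illustration under stated assumptions: the scoring function stands in for the team's deep-learning model, and every name and data structure is hypothetical rather than drawn from the project's code.

```python
# A minimal sketch of the moderation loop described above, assuming flagged text,
# community explanations and moderator verdicts can be keyed by a normalized form
# of the sentence. All names here are hypothetical illustrations.

# Community-sourced explanations for why a given sentence may be harmful.
explanations: dict[str, list[str]] = {}

# Moderator-approved alternative readings for contested sentences.
accepted_interpretations: dict[str, str] = {}

FLAGGED_PHRASES = {"example slur", "example stereotype"}  # placeholder lexicon


def model_score(sentence: str) -> float:
    """Placeholder for the deep-learning classifier; returns a harm probability."""
    lowered = sentence.lower()
    return 0.9 if any(p in lowered for p in FLAGGED_PHRASES) else 0.1


def check_sentence(sentence: str) -> dict:
    """Flag a sentence and return every known explanation, without picking a side."""
    key = sentence.strip().lower()
    return {
        "sentence": sentence,
        "flagged": model_score(sentence) > 0.5,
        "explanations": explanations.get(key, []),
        "alternative_reading": accepted_interpretations.get(key),
    }


def contest_flag(sentence: str, user_reasoning: str, moderator_agrees: bool) -> None:
    """Record a community moderator's verdict on a user's contestation."""
    key = sentence.strip().lower()
    if moderator_agrees:
        # The accepted reading is shown to the next user who writes similar text.
        accepted_interpretations[key] = user_reasoning
```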
This project will be carried out over a two-year timeline, with software deployment scheduled for next summer. From there, over the following academic year, the team will collect data and evaluate both the tool's technical functionality and how well it helps users within and outside of the partner communities.
We spoke with Assistant Professor Ishtiaque Ahmed for details on the project and his hopes for how this approach can make the internet safer for marginalized communities.
What sparked your interest in researching how AI could help tackle harmful social media content?
The reason why we need AI is mostly because of the scale of online posts. Right now, there are human moderators who check if a post is hateful or not, if it's reported, and then they make the decisions, but this process doesn't work that well for the thousands and thousands of posts that are being reported every single day. Especially when there is a conflict somewhere, you will find a lot of hate speech coming up. Also, the knowledge of a human moderator is often bounded by their background, and they cannot make good judgments on posts coming from a different context. An intelligent system can help the human moderators do this more accurately and quickly. In fact, in many cases, human moderators are already being helped by various AI tools. However, those AI tools are not trained with data from historically underrepresented groups. Hence, the results are often biased against those groups. This is why I felt the need to develop a responsible, fair and accountable AI system to support the social media moderation system.
How did you stumble upon this specific area of research?
The main focus of my research is to support marginalized communities with the help of computing. One problem I was facing while doing this research is that AI algorithms, and computing in general, often operate on the logic of "modern" scientific data, and not on cultural knowledge from these communities. A lot of my work is in global south countries where they have different cultural norms, and there is a clear conflict between what they consider believable and what AI systems qualify as valid data. Their judgment and rationality are not being implemented in AI technologies, including the automated moderation systems. As a result, the existing tools are flagging the posts of those communities that are based on their traditional faith, religion, myth and folklore. So, I felt we needed to train an AI algorithm with something that is aligned with the community's cultural values. In coming up with this model, we thought about a community-powered AI system, where community values are taught to the AI system so that it can train itself to help moderators work better.
What is the goal of this project?
We are trying to build a software keyboard, similar to Grammarly. But instead of highlighting sentences that are grammatically wrong, we are highlighting sentences that may contain misinformation or hate speech. Now, we don't stop at the highlighting; you also need to tell people why it is problematic. These explanations are not always readily available on the internet. So oftentimes, we need to work with these communities to find out what they find offensive, so that we can tell people, "If you write this, it may hurt these communities in this way, maybe reframe it this way" or "you may want to read this and give it a second thought before posting."
Right now, we are focusing on Islamophobia and Sinophobia, but if it works well, we're happy to work with other communities and expand this tool for them. It's a free tool that we're building called 'Compassionately,' and it will work as an add-on to a browser or a mobile phone application that helps you refine your text before you post it. Grammarly fixes your grammar; Compassionately tells you how you can be more compassionate.
Why is a partnership approach best suited for this work?
I have been working with these two non-profit organizations for more than three years now. They have a long history of handling hateful activities in real life. With social media and the internet, hate can scale and spread more easily. So, we understand their need for this automated tool. We also need them to tell us what is right or wrong in order for us to build this kind of community-powered AI. So, in that way, we believe it is helping both of us.
Our project is not purely technical. A lot of our work is very 'social sciencey': qualitative research where we collaborate closely with the community, hear from them, and take their history and their memories and bring that in as knowledge that we can feed to our AI system.
Why is it important in this project to involve historically marginalized groups in content moderation on social media?
Historically marginalized communities are very poorly represented in the computing world. Many of them do not use computers or the internet, so their data is not available, and AI algorithms can only learn from information that is available digitally. So, if the algorithms don't know what could offend a particular community at the language level, they cannot take action based on that. If there is no one to report a post, and the AI algorithm doesn't understand whether it's offensive or not, the internet becomes a toxic and unsafe space for these communities. So, our broader goal is to make the internet safer for everyone. It's also aligned with a lot of the ongoing work to make AI technologies more ethical. I think our work on this project will contribute to that bigger initiative of ethical AI by making it more participatory, bringing in the voices of marginalized people who weren't represented there before.
How do you feel about winning a Connaught Community Partnership Research Program award?
I'm super happy and excited, mostly because we have been working on this particular problem for a while now. This is the problem we found while working with the communities in the field. It came from the community, and then we found that there is a fundamental problem in the way we do machine learning that should be addressed, and we cannot address this problem without the help of this community. It's definitely exciting news for me and my team and the community partners that we're going to work with.
This project wouldn't be possible without the help of our different community partners, my graduate students and the researchers in my lab. They are awesome. I would also like to mention the contributions of my U of T colleagues, Profs. Shion Guha and Shohini Bhattasali, and my external collaborators, Prof. Rumee Ahmed (UBC), Sarah Masud Preum (Dartmouth) and Daphne Ippolito (UPenn).
What does receiving this award mean to you?
Receiving this award means a lot. I see the impact of this award in two ways. First, it will help me enormously to advance my research program and to strengthen my relationships with the community partners here in Canada. But I'm also excited about the real-life impact of this work, because we are going to help these communities handle online hate speech. To be able to make such an impact is most rewarding for me.
This interview has been edited for clarity and length.