Found 370 repositories (showing 30)
HimariO
No description available
drivendataorg
No description available
rizavelioglu
[NeurIPS'20-Competition] Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge. https://arxiv.org/abs/2012.12975
gokulkarthik
Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Workshop
DISHASONI99
This project addresses the growing challenge of detecting hateful memes across multiple languages on social media platforms. Hateful memes combine text and images to convey offensive messages that target individuals or groups based on characteristics such as race, gender, ethnicity, and religion.
JingbiaoMei
📄 ACL 2024: RGCL, Retrieval-Guided Contrastive Learning for Hateful Meme Detection 📄 EMNLP 2025 (Oral): RA-HMD, Robust Adaptation of Large Multimodal Models for Retrieval-Augmented Hateful Meme Detection Official implementation with pretrained models and reproduction scripts.
miccunifi
[ICCVW 2023] - Mapping Memes to Words for Multimodal Hateful Meme Classification
facebookresearch
Fine-grained annotations extending the Hateful Memes dataset with additional labels identifying protected categories and attack types.
Nithin-Holla
Repository containing code from team Kingsterdam for the Hateful Memes Challenge
Social-AI-Studio
Dataset and code implementation for the paper "Decoding the Underlying Meaning of Multimodal Hateful Memes" (IJCAI'23).
inFaaa
[COLING 2025🔥] Evolver: Chain-of-Evolution Prompting to Boost Large Multimodal Models for Hateful Meme Detection
faizanahemad
Facebook hateful memes challenge using multi-modal learning. More info about it here: https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set
apsdehal
The Hateful Memes Challenge example code using MMF
Social-AI-Studio
Official repository for ACM Multimedia'23 paper "Pro-Cap: Leveraging a Frozen Vision-Language Model for Hateful Meme Detection"
Ikea-179
A multimodal deep-learning project aimed at classifying whether a given meme is hateful or not
eftekhar-hossain
[ACL, EACL'24] Dataset and PyTorch Code for Multimodal Hate Speech Detection in Bengali
Social-AI-Studio
Official repository for WWW'24 paper "Modularized Networks for Few-shot Hateful Meme Detection"
Abhishek0697
11-777: Multimodal Machine Learning Course Project Repository
harjeet-blue
Multimodal (vision & language) hateful meme detection on the Hateful Memes Challenge dataset
priya-dwivedi
No description available
aryamansriram
No description available
gautham-balraj
No description available
vasilikikou
MemeGraphs: Linking memes to knowledge graphs for hateful memes classification
VAIBHAV-2303
Social Computing Project
DeepNeuralAI
Hateful Meme Detection By Leveraging SOTA Visuo-Linguistic Models
czh4
No description available
hbujakow
Joint work combining NLP and CV methods to implement multimodal approaches for combating hate speech.
blackhat-coder21
This project addresses the growing challenge of detecting hateful memes across multiple languages on social media platforms. Hateful memes combine text and images to convey offensive messages that target individuals or groups based on characteristics such as race, gender, ethnicity, and religion.
yangland
Enhance Multimodal Model Performance with Data Augmentation: Facebook Hateful Meme Challenge Solution
namitashukla
Hateful meme detection is a well-known research area that requires both visual and linguistic understanding. It matters because, in today's world, information and opinions stem from multimedia, and people smartly disguise hateful intent behind apparently harmless images or text that, when combined within a cultural and societal context, can hurt the sentiments of various minority groups. There is therefore a dire need to detect such hateful multimedia in a multimodal setting. For this purpose, we use Facebook's hateful meme detection dataset, which is specially annotated so that unimodal priors are bound to fail; that is, the images and text individually don't hold much signal. We use ResNeXt and RoBERTa unimodal models as the baselines. To explore the multimodality of the dataset, we take an early-fusion approach: we concatenate the ResNeXt embeddings of the images (2047-dimensional) with the RoBERTa embeddings of the text (768-dimensional) and then perform classification using various fine-tuned models such as a Shallow Feed-Forward Network, a Deep Feed-Forward Network, CatBoost, LightGBM, XGBoost, and Logistic Regression.
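The early-fusion step described above can be sketched as follows. This is a hypothetical illustration, not code from the repository: random vectors stand in for the real ResNeXt and RoBERTa encoder outputs (using the 2047- and 768-dimensional sizes stated in the description), and a toy logistic-regression head replaces the fine-tuned classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

def early_fusion(img_emb, txt_emb):
    """Concatenate unimodal embeddings into a single fused feature vector."""
    return np.concatenate([img_emb, txt_emb], axis=-1)

# Toy stand-ins for the real encoder outputs (dimensions from the description).
img_emb = rng.standard_normal(2047)   # ResNeXt image embedding
txt_emb = rng.standard_normal(768)    # RoBERTa text embedding
fused = early_fusion(img_emb, txt_emb)

# Minimal logistic-regression head over the fused features
# (untrained random weights, purely to show the shapes involved).
w = rng.standard_normal(fused.shape[0]) * 0.01
b = 0.0
p_hateful = 1.0 / (1.0 + np.exp(-(fused @ w + b)))

print(f"fused dim = {fused.shape[0]}, p(hateful) = {p_hateful:.3f}")
```

After fusion the classifier sees one 2815-dimensional vector per meme, which is what allows gradient-boosted trees (CatBoost, LightGBM, XGBoost) and feed-forward networks to be swapped in interchangeably as the final head.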