namitashukla
Hateful meme detection is a well-known research area that requires both visual and linguistic understanding. It matters because, in today's world, information and opinions increasingly spread through multimedia, and people smartly disguise hateful intent behind apparently harmless images or text that, when combined within a cultural and societal context, can hurt the sentiments of various minority groups. There is therefore a dire need to detect such hateful multimedia in a multimodal setting. For this purpose, we have used Facebook's hateful memes dataset, specially annotated so that unimodal priors are bound to fail; that is, the images and text individually carry little signal. We used ResNext and RoBERTa unimodal models as the baselines. To exploit the multimodality of the dataset, we applied an early-fusion approach: concatenating the ResNext embeddings of the raw images (2047-dimensional) with the RoBERTa embeddings of the text (768-dimensional), then performing classification with various fine-tuned models such as a Shallow Feed-Forward Network, a Deep Feed-Forward Network, CatBoost, LGBM, XGBoost, and Logistic Regression.
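The early-fusion step described above can be sketched minimally as follows. This is an illustrative sketch only: the embedding extractors (ResNext, RoBERTa) are stood in for by random vectors, and only the concatenation step itself is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for precomputed unimodal embeddings: in the described
# pipeline these would come from ResNext (images) and RoBERTa (text).
image_emb = rng.normal(size=(32, 2047))  # ResNext image embeddings (2047-d)
text_emb = rng.normal(size=(32, 768))    # RoBERTa text embeddings (768-d)

# Early fusion: concatenate each sample's unimodal embeddings into a single
# feature vector, which then feeds a downstream classifier
# (e.g. logistic regression or a feed-forward network).
fused = np.concatenate([image_emb, text_emb], axis=1)
print(fused.shape)  # (32, 2815)
```

The fused 2815-dimensional vectors are what the classifiers listed above (feed-forward networks, gradient-boosting models, logistic regression) are trained on.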
The multimodal hate speech detection system predicts if a comment or meme contains hate speech. It classifies text as targeted or untargeted and directed at individuals or groups. For images, it detects hate and categorizes them into blood and gore, NSFW, or smoking. This approach handles both text and images for accurate hate speech detection.
sml-schl
CRAVE-Bench is a synthetic multimodal benchmark dataset designed to evaluate cross-cultural bias in vision-language models (VLMs) for hateful meme detection. The benchmark addresses a critical gap in existing hate speech datasets: the lack of systematic representation of cases where cultural context fundamentally alters interpretation.
Manikantacb
Hate Detection in Memes using Multimodal Sentiment Analysis
AwesomeDeepAI
Multimodal Hate Speech Detection from Bengali Memes and Texts
susannapaoli
Testing different language models for multimodal memes hate speech detection
sebasyuste
Multimodal Hate Speech Detection for Spanish Memes using Qwen2-VL, BETO, and EfficientNetB0
We experimented with a few techniques, including VisualBERT, RoBERTa, and ViLBERT, for multimodal hate meme detection.
pramanik-souvik
Multimodal meme hate detection using VLMs and Fusion Approaches, with training evaluation and carbon emission tracking via CodeCarbon.
sagahansson
Final project for the course LT2318 Artificial Intelligence: Cognitive Systems. The project concerns multimodal hate speech detection in memes.
Hapax-Legonemon
This repository contains the experiment logs from the paper "Parameter-Efficient Fine-Tuning for Multimodal Hate Speech Detection in Memes".
RashfiTabassum
This repo contains code for Subtask A of a shared task on multimodal hate speech detection. It focuses on identifying hate speech in text-embedded images (e.g., memes) using binary classification (Hate/No Hate), addressing challenges in online content moderation through multimodal learning.
chichi-1
This project implements and evaluates a multimodal hate meme detection system capable of classifying memes in English and indigenous Nigerian languages as hate/non-hate. It introduces a comparative framework between a baseline model (DistilBERT + ResNet) and a large-scale multimodal LLM (LLaVA), optimized using Particle Swarm Optimization (PSO).
MARafey
This repository contains a multimodal approach to detect hate speech in memes for the Hateful Memes Challenge created by Facebook AI. The implementation focuses on early and late fusion techniques to effectively combine text and image modalities for improving hate speech detection performance.
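As a contrast to early fusion, the late-fusion technique mentioned above combines the outputs of separate per-modality classifiers rather than their input features. A minimal sketch, assuming each modality produces a hateful-probability score (the scores below are hypothetical stand-ins):

```python
import numpy as np

# Hypothetical P(hateful) scores from a text-only and an image-only classifier.
text_score = np.array([0.9, 0.2, 0.6])
image_score = np.array([0.7, 0.1, 0.4])

# Late fusion: combine the per-modality decisions, here by simple averaging
# (weighted averaging or a learned meta-classifier are common alternatives).
fused_score = (text_score + image_score) / 2
prediction = (fused_score >= 0.5).astype(int)
print(prediction)  # [1 0 1]
```

The design trade-off: early fusion lets the classifier learn cross-modal interactions, while late fusion keeps the unimodal models independent and easier to train or swap.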
hridya0902
A text classification model for detecting abusive or offensive language in meme captions. This project focuses on analyzing textual content using NLP techniques to identify hate speech, harassment, and harmful expressions, forming the text-only component of multimodal meme detection systems.
juhibandekar12
A Multimodal Hate Speech Detection system that combines OCR-based text extraction and deep learning (BiLSTM + CNN) to classify memes as Offensive or Non-Offensive. The system automatically extracts text from images using EasyOCR and performs multimodal inference through a trained TensorFlow model.