Found 25 repositories (showing 25)
Yasen03
A curated list of Affective Computing & Emotion AI: Papers, datasets, and toolkits for Multimodal Emotion Recognition, Emotional Reasoning, Multimodal Sentiment Analysis, and Empathetic LLMs/MLLMs.
Mai is an emotionally intelligent, voice-enabled AI assistant built with FastAPI, Together.ai LLMs, memory persistence via ChromaDB, and real-time sentiment analysis. Designed to feel alive, empathetic, and human-like, Mai blends the charm of a flirty cyberpunk companion with the power of modern multimodal AI.
dogaece-koca
A state-aware multimodal assistant integrating LLMs (Gemini-Flash) with custom Scikit-learn models for sentiment analysis and logistics forecasting.
A multimodal pipeline using Wav2Vec2 and LLaMA-3 to analyze emotion and sentiment in healthcare audio. Scaled with PySpark on Databricks for high-throughput clinical insight extraction.
BaoSir529
Code and dataset for the paper "LLM-based Knowledge Enhancement and Sentiment Filter Network for Multimodal Aspect-Based Sentiment Analysis".
A multimodal AI-driven sentiment analysis platform integrating text and visual data using deep learning and LLMs. Designed for real-time market insights and business intelligence.
aakashbaheti
A multimodal Artificial Intelligence and Machine Learning RAG system supporting PDF, image, audio, and video ingestion with scene detection, sentiment analysis, LLM-based extraction, and hybrid search.
Krisanthi
EmotiChat is an emotion-aware AI chatbot built with Python and Streamlit. It performs real-time multimodal sentiment analysis by fusing facial expressions (AWS Rekognition/DeepFace), vocal tonality (Librosa), and text sentiment (HuggingFace) to dynamically adjust the personality and response tone of a Llama 3.3 LLM.
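As a hedged illustration of the fusion step this entry describes (a minimal sketch, not EmotiChat's actual code), late fusion of per-modality sentiment scores can be reduced to a weighted average that selects a response tone; the weights and thresholds below are illustrative assumptions:

```python
# Minimal late-fusion sketch: combine per-modality sentiment scores
# (each in [-1, 1]) into one weighted score and map it to a response tone.
# Weights and tone thresholds are illustrative assumptions, not EmotiChat's.

def fuse_sentiment(face: float, voice: float, text: float,
                   weights=(0.3, 0.3, 0.4)) -> float:
    """Weighted average of facial, vocal, and textual sentiment scores."""
    scores = (face, voice, text)
    return sum(w * s for w, s in zip(weights, scores))

def pick_tone(score: float) -> str:
    """Map a fused score to a coarse tone label for the LLM system prompt."""
    if score > 0.3:
        return "upbeat"
    if score < -0.3:
        return "gentle"
    return "neutral"

fused = fuse_sentiment(face=0.8, voice=0.2, text=0.5)  # 0.24 + 0.06 + 0.20 = 0.50
print(pick_tone(fused))  # prints "upbeat"
```

A system like this would feed the chosen tone label into the LLM prompt so replies shift style without retraining any model.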
Paramsoni19
No description available
OPpandu
Multimodal Sentiment Analysis using LLMs
wesha-904
Multilingual Sentiment Analysis And Hate Speech Detection Using a Multimodal LLM
DeBroglie99
Code for the paper "FinMllm: a Retrieval-Enhanced Multimodal LLM Framework for Financial Sentiment Analysis".
Kashika1610
Built a multimodal agent integrating LLMs, emotion detection, and market analysis for personalized budgeting, sentiment-aware advisory, and autonomous portfolio optimization. Technologies Used: Agentic AI, Transformers, Reinforcement Learning, LLMs, Financial APIs
Ryanic-Chang
An end-to-end multimodal public opinion monitoring system built on Ascend AIpro, integrating Qwen LLMs for text and vision-based sentiment, intent, and topic analysis.
ZhangZKon
A multimodal AI agent for New Energy Vehicle (NEV) brands. It detects EV leaders' faces, extracts text/logos, and performs sentiment analysis using BLIP-2 and LLMs.
jaswanthv99
Implemented advanced LLM applications using Ollama, leveraging CLI commands, REST API, Python integration, and Misty API to perform summarization, sentiment analysis, and image captioning with multimodal models.
SakethramSathish
This project is a Streamlit-based RAG system that evaluates startup ideas using multimodal retrieval, temporal evidence weighting, FAISS embeddings, sentiment analysis, and Gemini LLM reasoning with counterfactual checks. It delivers grounded, transparent, data-driven startup evaluations.
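The "temporal evidence weighting" this entry mentions is commonly implemented as recency decay on retrieved documents. A minimal sketch under that assumption (not this project's actual code; the 180-day half-life is illustrative):

```python
# Hedged sketch of temporal evidence weighting: older retrieved evidence
# contributes exponentially less to a combined relevance score.
# The half-life of 180 days is an illustrative assumption.

HALF_LIFE_DAYS = 180.0

def temporal_weight(age_days: float) -> float:
    """Exponential decay: the weight halves every HALF_LIFE_DAYS."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def weighted_score(evidence):
    """evidence: list of (similarity, age_days) pairs from retrieval."""
    total = sum(sim * temporal_weight(age) for sim, age in evidence)
    norm = sum(temporal_weight(age) for _, age in evidence)
    return total / norm if norm else 0.0

print(temporal_weight(180))  # prints 0.5: half-year-old evidence counts half
```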
BootcampToProd
A Spring AI "cookbook" project demonstrating how to build an intelligent audio analysis API. This service uses a multimodal LLM to process audio files from various sources and extract rich insights like transcription, sentiment, and speaker details.
4nxh
Vibe-MAPS | Real-time Hyper-Local Sentiment & Safety Analysis. Utilizing Multimodal LLMs and Transformer-based NLP to map the "vibe" of city streets through social signals and live data feeds. Built with Next.js 14, FastAPI, and Mapbox.
This project focuses on developing an AI-driven multimodal emotion analysis system, designed to extract sentiments from text-based feedback and facial expressions using Machine Learning (ML), Deep Learning (CNN-based image classification), and AI reasoning (LLMs via LangChain).
A real-time multimodal AI companion that uses facial emotion detection, speech sentiment analysis, and an LLM dialogue system to provide empathetic, adaptive, and personalized interactions. The system tracks mood trends, adjusts its communication style dynamically, and presents its insights through an interactive dashboard.
anshulghogre4
AI-powered insurance operations hub with multi-agent sentiment analysis, claims triage, and fraud detection. Built with .NET 10, Angular 21, and Semantic Kernel, featuring a 5-provider LLM fallback chain (Groq/Mistral/Gemini/OpenRouter/Ollama), 5 multimodal services, and a PII redaction pipeline. All on free-tier AI providers.
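The fallback-chain pattern this entry describes can be sketched generically: try each provider in order and return the first successful response. This is a hedged illustration in Python with dummy provider callables, not the repo's Semantic Kernel implementation:

```python
# Hedged sketch of a multi-provider LLM fallback chain: try providers in
# order, return the first success, and surface all failures if none works.
# Provider callables here are illustrative stand-ins, not real API clients.

class ProviderError(Exception):
    pass

def call_with_fallback(providers, prompt):
    """providers: ordered list of (name, callable) pairs; each callable
    takes a prompt string and returns text or raises ProviderError."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            errors.append(f"{name}: {exc}")  # record failure, fall through
    raise ProviderError("all providers failed: " + "; ".join(errors))

# Usage with dummy providers: the first fails, the second succeeds.
def groq(prompt):
    raise ProviderError("rate limited")

def mistral(prompt):
    return f"echo: {prompt}"

name, text = call_with_fallback([("groq", groq), ("mistral", mistral)], "hi")
print(name, text)  # prints "mistral echo: hi"
```

Real chains typically also add per-provider timeouts and retry budgets so one slow provider does not stall the whole request.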