Found 29 repositories (showing 29)
zgsdlwzzh
Text classification using machine learning models (KNN, logistic regression, decision tree, SVM) and deep learning models (TextCNN, BiLSTM, FastText, RCNN, Transformer, ATT_BiLSTM, ATT_CNN_BiLSTM)
atikul-islam-sajib
This is for learning purposes—showcasing how a multi-modal classification model works with both images and text. It demonstrates combining visual features from CNNs or Vision Transformers with textual embeddings from language models. The implementation highlights feature fusion techniques and classification over the joint representation space.
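The feature-fusion idea this repository describes can be sketched minimally: concatenate an image feature vector and a text embedding, then classify over the joint space. This is an illustrative sketch, not the repository's code; all class and parameter names here are assumptions, and random tensors stand in for real CNN/language-model outputs.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate image and text feature vectors, then classify jointly."""

    def __init__(self, img_dim, txt_dim, num_classes, hidden=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),  # project the joint representation
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, img_feat, txt_feat):
        joint = torch.cat([img_feat, txt_feat], dim=-1)  # feature fusion by concatenation
        return self.fuse(joint)

# toy usage: random features standing in for CNN (512-d) and LM (768-d) outputs
model = LateFusionClassifier(img_dim=512, txt_dim=768, num_classes=5)
logits = model(torch.randn(4, 512), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 5])
```

Concatenation is the simplest fusion strategy; the repository may also use gated or attention-based fusion, which replace the `torch.cat` step.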
ShreyasS8
Implementations of CNN-LSTM and Transformer models in PyTorch for multi-category text classification.
kanisk29
End-to-end multi-label toxicity detection system leveraging RNN, CNN, and Transformer architectures for robust text classification. Includes data preprocessing, training, evaluation, and scalable deployment for real-time inference.
This project investigates how the Attention mechanism improves Arabic text classification. It compares three deep learning models: LSTM and CNN, which lack an Attention mechanism, and AraBert, a Transformer model that utilizes Attention.
AbdullahUsman0
Crisis text classification (Transformer, SVM, NB, LSTM, CNN), multilingual NER, BART summarization, misinformation detection, and RAG-based Q&A over official docs. Whisper voice input supported, with configurable FFmpeg/models. Dark UI shows probabilities and source attributions; models load locally.
azhermurad
Deep Learning with TensorFlow: implemented deep learning models using TensorFlow and Keras for real-world tasks such as image classification, text processing, and time-series prediction. Built custom neural networks including CNNs, RNNs, and Transformers. Worked on model training, evaluation, tuning, and deployment with TensorFlow tools.
harinduashan
Video Summarization (VS) has been recognized as one of the most actively researched fields since the late 2000s. The end goal of VS is the generation of correct and adequate summaries for a given video. Several subfields have evolved since then, such as Video Synopsis, Video Storytelling, and Text-based Video Summaries (TVS). Advances in vision driven by the Convolutional Neural Network (CNN) approach have further accelerated the field across all the ML categories: Supervised Learning (SL), Unsupervised Learning (UL), and Reinforcement Learning (RL). Current state-of-the-art (SOTA) methods show that Natural Language Processing (NLP) and Transformer-based solutions could make VS viable. However, the feasibility and real-world applicability of TVS remain to be investigated. To fill this gap in the TVS area, we introduce 3ML-TVS (Three different Machine Learning models for Text-based Video Summarization), a feasible solution built from existing ML methods for action classification, object classification, and NLP. By fine-tuning each model individually, results can be generated with promising accuracy. The proposed system demonstrates the capacity to be applied in real-world applications. This solution also shows that existing ML models can tackle much harder problems through simple, systematic approaches rather than a single gigantic ML network.
No description available
No description available
Hiteshankodia
Working on CNNs, Transformers, image-to-text, image classification, etc.
kwsthsve
Text classification and POS tagging with MLPs, RNNs, CNNs and Transformers.
jashshah-dev
Using state-of-the-art Transformers for text classification and deep CNNs for image classification.
twamaa
LSTM, CNN and Transformer Deep Learning Models for Text Classification Tasks in Swahili
karthigeyan95
Implementing different architectures of deep learning (CNN, RNNs, Transformers) for the text classification problem
Developed a multimodal AI system using transformer and CNN architectures to analyze text and audio sentiment, implementing text analysis with Hugging Face Transformers for real-time classification
rejae
Basically, using existing methods (for example CNN, RNN, LSTM, and Transformer) to train text classification models.
moeez1234
CNN Digit Classification • Voice Sentiment Analysis • English-to-Urdu Translation. A full-stack AI system that combines: ✍️ handwritten digit classification (CNN model for digits 0-9), voice-based sentiment analysis (Transformer + LSTM/GRU, with speech-to-text), and 🌍 language translation (English → Urdu using pretrained Transformers).
Sporcl
The Vision Transformer (ViT) is a deep learning model designed for image classification, adapting concepts from natural language processing. Unlike traditional convolutional neural networks (CNNs), ViT employs a transformer architecture originally developed for text tasks.
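The core ViT idea described above — split an image into fixed-size patches, embed each patch linearly, and feed the resulting sequence to a Transformer encoder as if the patches were text tokens — can be sketched in a few lines of PyTorch. Dimensions here are toy values, and this is a bare illustration of the mechanism, not a full ViT (no class token, position embeddings, or classification head).

```python
import torch
import torch.nn as nn

patch, dim = 16, 64
img = torch.randn(1, 3, 224, 224)                      # one RGB image

# cut into non-overlapping 16x16 patches, flatten each to a 768-d vector
patches = img.unfold(2, patch, patch).unfold(3, patch, patch)
patches = patches.contiguous().view(1, 3, -1, patch, patch)
patches = patches.permute(0, 2, 1, 3, 4).reshape(1, -1, 3 * patch * patch)

embed = nn.Linear(3 * patch * patch, dim)              # linear patch embedding
tokens = embed(patches)                                # (1, 196, dim) "token" sequence

# the same encoder architecture originally used for text
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
out = encoder(tokens)
print(out.shape)  # torch.Size([1, 196, 64])
```

A 224x224 image yields 14x14 = 196 patches, so the encoder sees a sequence of 196 tokens, directly analogous to a 196-word sentence.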
saeidaryadoust
Neural Networks & Deep Learning projects: SOM clustering/classification on digits; SLFN from scratch on Titanic; NAR/NARX forecasting with Delhi Climate; CNN autoencoder for image denoising; CNN for news text classification; Transformer built from scratch for automated essay feedback. Clean code, clear reports, reproducible experiments.
sdach13-cmd
This project addresses topic classification using NLP methods. We aim to compare three prominent deep learning architectures for text classification: Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Bidirectional Encoder Representations from Transformers (BERT).
radwanzoubi1
Sequential text classification to predict positive or negative Amazon reviews using CNNs and Transformers. Includes preprocessing, embedding layers, hyper-parameter tuning, and evaluation with metrics like F1-score. Features model comparison, visualizations, and advanced analyses.
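An embedding layer feeding 1D convolutions is the standard CNN recipe for the kind of review classifier described above. The following is an illustrative sketch under that assumption (class and parameter names are mine, not the repository's): token IDs are embedded, convolved with several filter widths, max-pooled, and classified.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Embedding layer + 1D convolutions + max-pooling over time,
    a typical CNN text classifier for binary (positive/negative) reviews."""

    def __init__(self, vocab_size, emb_dim=100, n_filters=64,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(n_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        # each conv detects n-gram patterns; max-pool keeps the strongest match
        pooled = [c(x).relu().max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # (batch, num_classes)

model = TextCNN(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (8, 50)))  # 8 reviews, 50 tokens each
print(logits.shape)  # torch.Size([8, 2])
```

Hyper-parameter tuning of the kind the repository mentions would sweep `emb_dim`, `n_filters`, and `kernel_sizes`, scoring each configuration by validation F1.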
Devendra2106
Deep learning, machine learning, neural networks, CNN, RNN, LSTM, transformers, NLP, Hugging Face, FLAN-T5, PyTorch, TensorFlow, BERT, GPT, attention models, sequence modeling, text classification, image recognition, AI models, model training, transfer learning, applied AI.
0xDamian
A TensorFlow-based multimodal steganography detection system using a hybrid CNN-Vision Transformer-BERT model to identify hidden messages in images (Toy-Bossbase dataset) and synthetic text, with metrics like PSNR, SSIM, BERTScore, and classification accuracy.
This repository contains a text classification pipeline designed to categorize physician-patient dialogue into 15 clinically relevant categories based on Social Support Theory. It evaluates traditional ML models (Logistic Regression, SVM), deep learning (CNN), and transformers (ClinicalBERT, DistilBERT).
ahmdmohamedd
A Convolutional Neural Network-based sentiment analysis model for Twitter data. This project utilizes TensorFlow to classify tweets as positive or negative using the "training.1600000.processed.noemoticon.csv" dataset. It demonstrates text classification with CNN without relying on Transformers.
Sentiment analysis of 1.6 million tweets using deep learning techniques, including RNN, CNN-LSTM, BiLSTM, Stacked GRU, BiGRU, and Transformer models. The project demonstrates effective data preprocessing, model optimization, and the implementation of advanced NLP methods for text classification.
YoloPopo
This repository compares text classification models for detecting disaster-related tweets, including traditional ML (TF-IDF, BoW), neural networks (RNN, LSTM, CNN), and fine-tuned transformers (DistilBERT, ELECTRA). It features a robust preprocessing pipeline, hyperparameter tuning, and Kaggle submissions for performance evaluation.
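The traditional-ML baseline named above (TF-IDF features into a linear classifier) fits in a few lines of scikit-learn. The texts and labels below are made-up stand-ins for the disaster-tweet data, and logistic regression is one plausible choice of classifier, not necessarily the one the repository uses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy stand-ins for the real tweet corpus; 1 = disaster-related, 0 = not
texts = ["forest fire near the highway", "lovely sunny day at the beach",
         "earthquake shakes the city center", "new coffee shop just opened"]
labels = [1, 0, 1, 0]

# TF-IDF over unigrams and bigrams feeding a logistic-regression classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
preds = clf.predict(["massive fire reported downtown"])
```

In the repository's setup, this pipeline would serve as the baseline that the RNN/LSTM/CNN models and the fine-tuned DistilBERT/ELECTRA models are measured against.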
tammannashaik
Multimodal Sentiment Analysis | PyTorch, BERT, ResNet, Torchaudio Built a deep learning model that predicts sentiment (Positive, Neutral, Negative) from text, audio, and facial image inputs using BERT, MFCC + CNN, and ResNet18; implemented multimodal feature fusion and classification with PyTorch and Hugging Face Transformers.