Found 179 repositories (showing 30)
ajheshbasnet
LSTM-based next word prediction — a simple PyTorch implementation of an LSTM model for predicting the next word in a sequence. Trained on custom text data with tunable hyperparameters for experimentation.
anmoldhiman17
Deep Learning based Next Word Prediction system using LSTM built with TensorFlow and deployed using Streamlit.
Vamsi404
This project is an emoji prediction model using an LSTM-based RNN and GloVe word embeddings. Given short text inputs, the model predicts the most relevant emoji from a set of predefined classes. The code includes data preprocessing, training with two LSTM model variations, and predictions with visual output using the `emoji` library.
• Trained and deployed an emotion detector with word embeddings, LSTM, and BERT using TensorFlow and Transformers.
• Fine-tuned the BERT-base-cased encoder Transformer with a TensorFlow classification head; it predicts one of 6 emotions for a given input tweet with 95% accuracy.
leenasuva
The problem highlights the use of machine learning algorithms to categorize comments scraped from an online platform and make relevant predictions about the topics associated with those comments. There are 40 topics in total. Although this looks like a simple classification problem, digging into the data reveals that the real task is to make sense of the comments and then assign categories. Because the number of topics/classes is much larger than in a typical classification problem, the expected accuracy won't be very high.

Topic modeling and classification have become tremendously popular for analyzing products and services for various brands, measuring popularity during elections, discovering public sentiment around various issues, and so on. Deriving meaningful topics from these comments is especially challenging because of variations in language, inserted emojis, and partial or profane comments. It is essential to choose a scheme that translates the comments into word embeddings so that similarity between comments can be computed and relevant topics assigned; it is equally important to capture the context and meaning of the comments and cluster them into relevant topics. There are multiple approaches to topic modeling, such as Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic Analysis (pLSA), and these benchmark techniques provide viable results.

The initial approach was to vectorize the comments with TF-IDF and Word2Vec and then use state-of-the-art classification techniques to assign topics to the resulting vectors. However, bag-of-words with TF-IDF and word embeddings with Word2Vec share a significant hidden problem: they treat identical words with different meanings the same, without adding any context.
For example, the word "bank" in "Peter is fishing near the bank." and "Two people robbed the state bank on Monday." would receive the same vector in these representations. This can produce misleading results, so to improve the performance of our prediction mechanisms, it is essential to switch to a method that captures the context of words. Transformers, a relatively new modeling technique introduced by Google researchers in the seminal paper "Attention Is All You Need," tackle exactly this problem. Google's BERT (Bidirectional Encoder Representations from Transformers) combines the contextual-embedding idea of ELMo with a stack of Transformer layers, and it is bidirectional, which was a major novelty for Transformers. The vector BERT assigns to a word is a function of the entire sentence, so the same word can have different vectors depending on context. ELMo is a word embedding technique that uses LSTMs to read each sentence and then assign context-dependent embeddings.
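The "bank" ambiguity described above can be made concrete with a minimal hand-rolled bag-of-words count (a sketch for illustration only, no library assumed): the feature assigned to "bank" is identical in both sentences, so the representation carries no contextual information.

```python
def bow_vector(sentence, vocab):
    """Count occurrences of each vocabulary word in the sentence."""
    words = sentence.lower().replace(".", "").split()
    return [words.count(w) for w in vocab]

s1 = "Peter is fishing near the bank."
s2 = "Two people robbed the state bank on Monday."
# Shared vocabulary built from both sentences
vocab = sorted(set((s1 + " " + s2).lower().replace(".", "").split()))

v1 = bow_vector(s1, vocab)
v2 = bow_vector(s2, vocab)

i = vocab.index("bank")
# The feature for "bank" is identical in both vectors, even though
# the word means different things in the two sentences.
print(v1[i], v2[i])  # 1 1
```

A contextual model like BERT, by contrast, would produce different vectors for "bank" in the two sentences because its representation is conditioned on the whole sentence.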
dharmpatel28
Created an LSTM language model to predict the next word in a sentence from raw course content. Implemented a full preprocessing pipeline including tokenization, numerical encoding, and padding. Designed an Embedding → LSTM → Linear architecture and a recursive generation loop to produce coherent multi-word sequences from seed text.
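The Embedding → LSTM → Linear architecture described in entries like the one above can be sketched in PyTorch. This is a minimal illustration with assumed hyperparameters (vocabulary size, embedding and hidden dimensions are made up), not any repository's actual code:

```python
import torch
import torch.nn as nn

class NextWordLSTM(nn.Module):
    """Embedding -> LSTM -> Linear head over the vocabulary."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        e = self.embed(x)              # (batch, seq_len, embed_dim)
        out, _ = self.lstm(e)          # (batch, seq_len, hidden_dim)
        return self.fc(out[:, -1, :])  # logits for the next word

model = NextWordLSTM(vocab_size=1000)
x = torch.randint(0, 1000, (2, 5))  # batch of 2 sequences, length 5
logits = model(x)
print(logits.shape)  # torch.Size([2, 1000])
```

Training would pair these logits with integer next-word targets under `nn.CrossEntropyLoss`, which is the standard objective these projects describe.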
Sambit2304
No description available
Elleradja
No description available
YashPratapRai
Deep Learning-based Next Word Prediction system built with LSTM and deployed using Streamlit.
Adi3042
LSTM-based next word prediction project demonstrating sequence modeling and real-time inference using Streamlit.
dinraj910
Next-word prediction language model built using LSTM (RNN) with TensorFlow/Keras, implementing word-level tokenization, sliding window sequence generation, and softmax-based vocabulary prediction.
Shashank7275
LSTM-based deep learning project for next word prediction using TensorFlow and Streamlit, designed to mimic real-time autocomplete systems.
"GitHub repository for LSTM-based next word prediction. Includes code for training a model on text data, enabling tasks like autocomplete and text generation
Dipanjan777777
This project implements a next-word prediction system using an LSTM-based language model trained on Shakespeare's Macbeth from the NLTK Gutenberg corpus.
ranashardul
Neural language model built with LSTM and GRU to predict the next word in Shakespeare’s Hamlet. Implements tokenization, n-gram sequence generation, word embeddings, and softmax-based prediction with a Streamlit deployment.
LinAnnJose
This project builds a word prediction model using an LSTM (Long Short-Term Memory) network in TensorFlow/Keras, with a focus on predicting the next word in a sequence based on input text.
vanshkamra12
LSTM-based deep learning model trained on Shakespeare’s Hamlet to predict the next word in a text sequence. Includes data preprocessing, model training, and deployment via a Streamlit app for real-time word prediction.
Sentiment analysis of student reviews using ML and LSTM models, combined with an LSTM-based next-word prediction trained on a Wikipedia sample. Includes preprocessing, model training, and evaluation notebooks for practical NLP experimentation and insights.
Bidirectional LSTM-based Recurrent Neural Language Model (BLSTMRNN) designed for next-word prediction. The model incorporates character-aware embeddings, caching mechanisms, and optional sampling-based approximations to improve efficiency and reduce perplexity (PPL).
Train a TensorFlow LSTM model for next word prediction in text sequences. Generate accurate predictions for text generation and autocomplete. Easy installation, step-by-step usage guide, and reliable results. Enhance your language-based applications effortlessly.
rohan-06-eng
Deployed on Streamlit Cloud, this project implements a Next Word Prediction model using Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) to predict the next word in a given sequence of text. It leverages deep learning techniques to analyze and understand text patterns, enabling accurate word predictions based on contextual input.
Chiragjjjks
This project implements a Next Word Prediction model using an LSTM-based Recurrent Neural Network (RNN). The dataset used is shakespeare-hamlet from the NLTK Gutenberg corpus. The model is trained to predict the next word in a given sentence based on Shakespearean text.
NAVANEETHELITE
Most smartphone keyboards offer next word prediction, and Google uses next word prediction based on our browsing history, so preloaded data is stored in the keyboard function of our smartphones to predict the next word correctly. This LSTM model was trained for 40 epochs and achieved an accuracy of 69%.
Momindiyar
Build a Next Word Prediction model using LSTM. Train on text data, tokenize words, and create input-output sequences. Use an embedding layer, LSTM layers, and a dense softmax layer for prediction. Train with cross-entropy loss and evaluate with accuracy and BLEU score. Generate words based on input text.
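The "generate words based on input text" step above is usually a recursive loop: feed the growing sequence back into the model and append one predicted word per step. A minimal sketch, with a hypothetical stand-in `fake_predict` in place of a trained model's argmax prediction:

```python
def generate(seed, predict_next, n_words=5):
    """Recursive generation: repeatedly predict the next word
    from the current context and append it to the sequence."""
    words = seed.split()
    for _ in range(n_words):
        words.append(predict_next(words))
    return " ".join(words)

# Hypothetical stand-in for a trained model: cycles through a
# fixed phrase purely for demonstration.
phrase = ["or", "not", "to", "be"]
def fake_predict(context):
    return phrase[(len(context) - 2) % len(phrase)]

print(generate("to be", fake_predict, n_words=4))  # to be or not to be
```

With a real model, `predict_next` would tokenize and pad the context, run a forward pass, and take the argmax (or a sample) over the softmax output.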
Ehsangh1381
A deep learning-based text prediction model using LSTM/GRU to generate the next word in a sequence. Trained on large text datasets, it can be used for language modeling, autocomplete, or chatbot development.
Chandra731
This project aims to build a next-word prediction system using an LSTM model. The model is trained on Shakespeare's Hamlet, providing predictions based on the language style of the text. A simple Streamlit app allows users to input a sentence, and the app predicts the next word in the sequence using the trained model.
kamuju-vinay
This project implements a next word prediction system using Long Short-Term Memory (LSTM) recurrent neural network (RNN) models. The aim is to leverage deep learning techniques to predict the next word in a sequence based on the preceding context, enhancing applications like text completion and chatbots.
Gautam825406
A deep learning-based next-word prediction web app built with PyTorch, trained on Sherlock Holmes text data, and deployed through Streamlit. The model uses an LSTM neural network to suggest the top probable next words based on user input, similar to a smartphone predictive keyboard.
Hasib-Bhuiyan7
This project contains a neural network-based SMS spam detector. It uses TensorFlow and Keras to build and train a deep learning model that classifies messages as either "spam" or "ham." The model uses text vectorization, word embeddings, and LSTM layers for effective classification. Includes data preprocessing, evaluation, and prediction.