Found 416 repositories (showing 30)
hwchong
A sample project demonstrating the use of Keras (TensorFlow) to train an MNIST handwriting-recognition model, with Core ML on iOS 11 for inference.
wagenaartje
:pencil2: The Google Quick Draw dataset ported to an MNIST-like 28x28 greyscale array dataset, usable by JavaScript machine-learning libraries
rhammell
Draw and classify digits (0-9) in a browser using machine learning
NhanPhamThanh-IT
✏️ An AI-driven web app for handwritten digit recognition using the MNIST dataset. It leverages TensorFlow for deep learning model training and Gradio to create an intuitive, interactive UI. Users can draw digits and receive instant predictions, showcasing practical AI deployment and real-time inference capabilities.
mirzayasirabdullahbaig07
Handwritten Digit Classifier using a trained MNIST model. Draw or upload a digit (0-9) and get its predicted value instantly.
A Python GUI in which you can draw a digit and an ML algorithm recognizes which digit it is. Uses the MNIST dataset.
leonhardt-jon
Draw MNIST digits and classify in real time!
rahulsrma26
A drawable MNIST demo using streamlit.
salvacorts
:bar_chart: An interactive GUI to draw numbers and recognize them using a CNN written with Keras and trained with MNIST
foersterrobert
Streamlit web application that uses trained models (e.g. CNNs) to classify digits drawn by users or generated by a Conditional-WGAN-GP. One can choose between models from PyTorch, Keras, and scikit-learn.
AshwinPrksh00
A streamlit web app made with streamlit-drawable-canvas and a pre-trained MNIST model
aobeirne20
Python GUI and PyTorch-based backend to train a DNN or CNN on the MNIST handwriting dataset, classify digits the user draws in the GUI, and save hyperparameter info for a MATLAB graph.
olafbielasik
This project is a Handwritten Digit Recognizer built with PyTorch and Flask. It trains a neural network on the MNIST dataset and deploys it in a web application where users can draw digits on a canvas and receive real-time predictions, showcasing an effective integration of deep learning and user-friendly web development.
mohamedkhayat
This project is a digit-recognition system using a neural network trained on the MNIST dataset. Users can draw digits in a PyGame window, and the model recognizes the drawn digit. The project was created as a learning exercise to apply concepts from an intro deep-learning course and to add an interactive element to the models.
MarioProjects
Interactive digit recognition - with MNIST
getnamo
Server counterpart for drawing on web clients and sending input via socket.io to a UE4 MNIST classifier client
layumi
Draw mnist
stnk20
No description available
YeongHyeon
No description available
scrambledpie
A few simple notebooks for playing with neural networks in Keras: one for drawing MNIST numbers, one for parsing pictures into CIFAR-10 style.
data-man-34
Small projects in TensorFlow: CNN for Google Quick Draw Game and MNIST; LSTM-RNN, SVM, and the Min-Max-Module for SJTU Emotion EEG classification, etc.
stu00608
A web canvas that you can draw and see the MNIST classification result distribution.
javadAlikhani-ML
In this project, we built a CNN model on MNIST data that predicts the digits the user draws.
sadopc
Neural network from scratch in Swift with Metal GPU acceleration. Train on MNIST, draw digits, and visualize activations in real-time.
grassEqualsBugs
A classic MNIST network written using only NumPy (no PyTorch or TensorFlow), along with a React prediction frontend where users can draw digits.
In the search for more effective deep-learning methods that overcome the need for large annotated datasets, semi-supervised learning has attracted considerable research interest in recent years. Semi-supervised learning sits halfway between supervised and unsupervised learning: in addition to the unlabelled data, the algorithm is given a supervised pretext task. In this context, our work classifies MNIST images (loaded via TensorFlow) using the method of article [1], where the pretext task is having the network recognize the geometric transformations applied to the data. Following this method, we first train, on the unlabelled data, a network that predicts the geometric transformations, which we call the base network; the aim is to extract knowledge from it. We then use this knowledge as the starting point for our supervised model. Finally, to measure the contribution of unsupervised learning to network performance, we build a network called the "baseline" with the same architecture as the base network. This architecture follows NIN (Network-In-Network): three blocks, each containing three convolution layers, with the convolution layers having different numbers of filters. Each Conv2D layer is followed by batch normalization and a ReLU activation. A MaxPooling layer is applied between the first and second blocks, and an AveragePooling2D layer between the second and third. At the end, a GlobalAveragePooling layer is applied, followed by a dense layer and a softmax activation. Once the base network has been trained, the first two blocks are frozen and only the weights of the last block remain trainable. We take the output of the fourth layer from the end, i.e. the last layer before the classification layers; the outputs of this layer are the features of the image.
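The NIN-style base network described above can be sketched in Keras as follows. This is a minimal illustration under stated assumptions: the filter counts (96/192), kernel size, and four rotation classes (0°, 90°, 180°, 270°) are illustrative choices not given in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Three convolution layers, each followed by batch norm and ReLU,
    # as in the NIN-style blocks described above.
    for f in filters:
        x = layers.Conv2D(f, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return x

def build_base_network(num_rotations=4):
    inputs = layers.Input(shape=(28, 28, 1))
    x = conv_block(inputs, [96, 96, 96])        # block 1
    x = layers.MaxPooling2D()(x)                # MaxPooling between blocks 1 and 2
    x = conv_block(x, [192, 192, 192])          # block 2
    x = layers.AveragePooling2D()(x)            # AveragePooling2D between blocks 2 and 3
    x = conv_block(x, [192, 192, 192])          # block 3
    features = layers.GlobalAveragePooling2D()(x)   # image features
    # Pretext head: predict which rotation was applied to the image.
    outputs = layers.Dense(num_rotations, activation="softmax")(features)
    return models.Model(inputs, outputs)

model = build_base_network()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In this setup the rotation labels come for free: each unlabelled image is rotated by one of the four angles and the angle index serves as its pretext label.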
We then add a Dense(10, activation="softmax") layer to predict our 10 classes (0, 1, ..., 9), which completes the semi-supervised network. The data at our disposal are the 70,000 MNIST images, of which 60,000 are for training and 10,000 for testing. For this work, only 100 labelled images are used; the remaining 59,900 images are treated as unlabelled. For the base network (the one that predicts the rotations), we used 80% of the 59,900 images for training and 20% for validation. For the "baseline" network, we built a supervised network with the same architecture as the semi-supervised network and trained it to recognize semantic features and predict the image classes, the digits 0 through 9. To train this network, we used the 100 labelled images initially taken from the 60,000 training images and set the other 59,900 images aside, applying no geometric transformation (rotation) to the data. We then evaluated the network on the test set and compared its performance with that of the semi-supervised network. With the networks built as described, we obtained an overall accuracy of 99.53% for the base network, 86.48% for the semi-supervised network, and 11% for the "baseline" network. In view of these results, the combined use of supervised and unsupervised learning techniques clearly improves network performance.
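The transfer step, freezing the first two blocks of the pretrained base network and attaching a Dense(10, softmax) head on the pre-classification features, can be sketched as below. The one-conv-per-block base model here is a simplified stand-in (in practice it is the full rotation-pretext network), and the layer names are assumptions introduced for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in base network: three "blocks" reduced to one conv layer each.
inputs = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, padding="same", activation="relu", name="block1")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu", name="block2")(x)
x = layers.AveragePooling2D()(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu", name="block3")(x)
features = layers.GlobalAveragePooling2D(name="features")(x)
rot_head = layers.Dense(4, activation="softmax")(features)  # pretext head
base = models.Model(inputs, rot_head)
# (Here the base would be trained on the rotation-prediction task.)

# Freeze everything up to and including block 2; later layers stay trainable.
for layer in base.layers:
    layer.trainable = False
    if layer.name == "block2":
        break

# Take the features (last layer before the classification layers) and
# attach the 10-way digit classifier, trained on the 100 labelled images.
clf_out = layers.Dense(10, activation="softmax")(base.get_layer("features").output)
classifier = models.Model(base.input, clf_out)
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
```

The baseline network would use the same architecture but train all weights from scratch on the 100 labelled images, with no pretext pretraining.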
goshkaaa
Train a small MLP on MNIST and draw digits for live predictions and quick fine-tuning.
senathenu
MNIST-trained model to predict the digit you draw!
Draw and recognize digits with a GUI, a TensorFlow convolutional neural network, and MNIST
animeshdutta888
Draw a sketch on browser and classify it as one of the fashion MNIST classes using Tensorflow.js