Found 131 repositories (showing 30)
cxy1997
Baseline classifiers on the polluted MNIST dataset, SJTU CS420 course project
PyTorch adversarial attack baselines for ImageNet, CIFAR10, and MNIST (state-of-the-art attacks comparison)
Simple pytorch classification baselines for MNIST, CIFAR and ImageNet
junayed-hasan
This repository implements knowledge distillation from classical to quantum neural networks for image classification. It includes experiments on MNIST and FashionMNIST datasets, demonstrating improved accuracy in quantum models. Code for classical teachers and quantum students, with baseline and distilled versions, is provided.
SatvikPraveen
A comprehensive analysis of the Fashion MNIST dataset using PyTorch. Covers data preparation, EDA, baseline modeling, and fine-tuning CNNs like ResNet. Includes modular folders for data, notebooks, and results. Features CSV exports, visualizations, metrics comparison, and a requirements.txt for easy setup. Ideal for ML workflow exploration.
VasundharaSK
How to Develop a Deep CNN for Fashion-MNIST Clothing Classification, by Jason Brownlee (May 10, 2019; last updated October 3, 2019 for Keras 2.3 and TensorFlow 2.0). The Fashion-MNIST clothing classification problem is a new standard dataset used in computer vision and deep learning. Although the dataset is relatively simple, it can be used as the basis for learning and practicing how to develop, evaluate, and use deep convolutional neural networks for image classification from scratch. This includes how to develop a robust test harness for estimating the performance of the model, how to explore improvements to the model, and how to save the model and later load it to make predictions on new data. After completing this tutorial, you will know: how to develop a test harness for a robust evaluation of a model and to establish a baseline of performance for a classification task; how to explore extensions to a baseline model to improve learning and model capacity; and how to develop a finalized model, evaluate its performance, and use it to make predictions on new images. (Update Jun/2019: fixed a minor bug where the model was defined outside of the CV loop, with updated results.)
Tutorial overview: the tutorial is divided into five parts: Fashion MNIST Clothing Classification; Model Evaluation Methodology; How to Develop a Baseline Model; How to Develop an Improved Model; and How to Finalize the Model and Make Predictions. The Fashion-MNIST dataset is proposed as a more challenging replacement for the MNIST dataset. It comprises 60,000 small square 28×28-pixel grayscale images of 10 types of clothing, such as shoes, t-shirts, and dresses. The mapping of the 0-9 integer labels to class names is: 0: T-shirt/top, 1: Trouser, 2: Pullover, 3: Dress, 4: Coat, 5: Sandal, 6: Shirt, 7: Sneaker, 8: Bag, 9: Ankle boot. It is a more challenging classification problem than MNIST, and top results are achieved by deep convolutional neural networks, with a classification accuracy of about 90% to 95% on the held-out test dataset.
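The 0-9 label mapping listed in the description can be captured directly in code; a minimal sketch (the list contents come from the description above, while the constant and helper names are illustrative, not from the tutorial's source):

```python
# Fashion-MNIST integer label -> class name, as listed in the tutorial.
FASHION_MNIST_LABELS = [
    "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
    "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot",
]

def label_name(label: int) -> str:
    """Map a 0-9 integer class label to its clothing class name."""
    return FASHION_MNIST_LABELS[label]
```

For example, `label_name(9)` returns `"Ankle boot"`, matching the last entry of the mapping.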
javier-hernandezo
Baseline and improved versions (using CoordConv) of a STN for MNIST classification
In the search for more effective deep learning methods that overcome the need for large annotated datasets, semi-supervised learning has attracted a great deal of research interest in recent years. Semi-supervised learning sits halfway between supervised and unsupervised learning: in addition to the unlabelled data, the algorithm is given a supervised task. In this context, our work consists in classifying MNIST images from the TensorFlow library using the method of [1], in which the pretext task is for the network to recognize the geometric transformations applied to the data. Following this method, we first train a network, which we call the base network, on the unlabelled data to predict the geometric transformations, the aim being to extract knowledge from it. We then use this knowledge as the starting point for our supervised model. Finally, to measure the contribution of unsupervised learning to the network's performance, we build a network called the "Baseline" with the same architecture as the base network. The network follows the NIN (Network-In-Network) architecture, with three blocks of three convolution layers each; the convolution layers have different filters. Each Conv2D layer is followed by batch normalization and a ReLU activation. Between the first and second blocks a MaxPooling is applied; between the second and third blocks, an AveragePooling2D. At the end, a GlobalAveragePooling is applied, then a dense layer and a softmax activation layer are added. Once the base network has been trained, the first two blocks are frozen and only the weights of the last block may be modified. We take the output of the fourth-from-last layer, which is the last layer before the classification layers; its outputs are the features of the image.
We then add a Dense(10, activation="softmax") layer to predict our 10 classes (0, 1, ..., 9), which completes the semi-supervised network. The data at our disposal are the 70,000 MNIST images, of which 60,000 are for training and 10,000 for testing. For this work, only 100 labelled images are used; the remaining 59,900 are treated as unlabelled. For the base network (the one that predicts the rotations), we took 80% of the 59,900 images for training and 20% for validation. For the "Baseline" network, we built a supervised network with the same architecture as the semi-supervised network and trained it to recognize semantic features and predict the image classes, the classes being 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. To train this network, we used the 100 labelled images initially taken from the 60,000 training images and set the other 59,900 images aside, applying no geometric transformation (rotation) to the data. We then tested the network on the test set and compared its performance with that of the semi-supervised network. Once the networks were built according to the indicated method, we obtained an overall accuracy of 99.53% for the base network, 86.48% for the semi-supervised network, and 11% for the "Baseline" network. In view of these results, we can say that the combined use of supervised and unsupervised learning techniques improves the performance of the network.
hwang1996
Baseline of Fashion MNIST by dilated CNN
Researched federated learning robustness under temporal data drift using Flower and PyTorch. Simulated seasonal shifts in Fashion-MNIST to test model adaptation as client data evolved. Implemented FedAvg with configurable client selection and local epochs, comparing centralized and FL baselines.
FamilDardashti
A three-phase project for MNIST handwritten digit classification using the KNN algorithm. Includes EDA, scikit-learn baseline, NumPy implementation, and an interactive web app for real-time predictions.
greydanus
Simple MNIST baselines for 1) numpy backprop 2) dense nns 3) cnns 4) seq2seq
Alisadaq
Mini research project: evaluating baseline architectures and TinyVGG on FashionMNIST dataset.
maddiepr
PyTorch baselines: MNIST CNN + ResNet transfer learning (reproducible).
heloa-net
Fashion MNIST baseline and classifier
mayukhchatterjee7029
This repository contains a baseline implementation for classifying the **Fashion MNIST dataset** using TensorFlow and Keras.
Santiago-HR
MNIST digit classification with TensorFlow/Keras: preprocesses MNIST, trains/evaluates MLP baselines (8 vs 128 hidden units), then builds a CNN to reach ~99%+ performance. Includes accuracy/loss curves, classification report, confusion matrix, and prediction visualizations.
Reproduce and improve the baseline accuracy from a paper for the classification task of fashion article images from the Fashion-MNIST dataset
DataShoaib
MNIST handwritten digit classifier built using an Artificial Neural Network (ANN). Processes 28×28 images and predicts digits (0–9) as a deep learning baseline model.
shashankfml
MS thesis exploring multimodal embeddings in federated learning under non-IID data. Using BLIP image–caption features with FedAvg, the work shows accuracy gains over CNN/ViT baselines on CIFAR, MNIST, and medical data, with analyses of cluster alignment, client drift, and aggregation stability.
mccainalena1
Deep learning project using convolutional neural networks (CNNs) with Keras/TensorFlow on the Fashion-MNIST dataset. Includes baseline and optimized architectures, training notebooks, and a final project report with evaluation results.
kondster
Implementing and comparing various semi-supervised learning techniques on the MNIST dataset to enhance model performance with limited labeled data. Features experiments with Baseline Model, Entropy Minimization, Pseudo Labeling, Virtual Adversarial Training, and K-means Pseudo Labeling.
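Pseudo labeling, one of the techniques this repository compares, keeps only the unlabelled examples the current model classifies confidently and reuses its predictions as hard labels. A minimal NumPy sketch of that selection step (the function name and the 0.95 threshold are illustrative assumptions, not taken from the repository):

```python
import numpy as np

def pseudo_label(probs: np.ndarray, threshold: float = 0.95):
    """Given predicted class probabilities of shape (N, C) for
    unlabelled examples, keep those whose top probability exceeds
    `threshold`; return their indices and hard pseudo-labels."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return np.nonzero(keep)[0], probs[keep].argmax(axis=1)
```

The selected examples are then added to the labelled pool for the next round of supervised training; raising the threshold trades label quantity for label quality.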
KetanGhungralekar
A logistic regression model for classifying handwritten digits using the MNIST dataset. This project includes training, evaluation, and performance metrics such as accuracy, confusion matrix, precision, recall, and F1-score. A simple baseline for image classification tasks using PyTorch.
swesan123
A lightweight convolutional neural network implemented in PyTorch for image classification on Fashion-MNIST and CIFAR-100. Includes dataset preprocessing, model training, validation curves, and benchmark evaluation. Designed to demonstrate practical deep-learning workflow and baseline performance on common vision datasets.
yashdhuppe04
Handwritten Character Recognition using CNN (MNIST) This repository contains my CodeAlpha Internship Task-3 implementation using Convolutional Neural Networks for handwritten digit recognition. It includes baseline and advanced CNN models, performance evaluation, ROC-AUC analysis, Grad-CAM visualization, and efficiency comparison.
DamanRiat
This project presents my work on developing a neural network for multiclass classification using the MNIST database. It involves data preparation, baseline model creation, hyperparameter tuning, and result visualization. This comprehensive approach demonstrates my proficiency in deep learning and my ability to create effective image classification models.
This repository includes two image classification experiments built with CNNs: - Cats vs Dogs (binary classification) - MNIST Digits (10-class classification: 0 to 9) The project provides a graphical user interface (`interface.py`) to run preprocessing, training (Baseline/Alternative and MinPooling variants), model loading, and image prediction.
kumar0232
This project builds a Handwritten Digit Recognition system using CNN and SVM on the MNIST dataset. CNN extracts image features for high-accuracy digit prediction, while SVM acts as a baseline model. A web app lets users draw digits, process them, and get instant predictions in real time.
saisuryasashanklenka
The MNIST handwritten digit classification problem is a standard dataset used in computer vision and deep learning. Although the dataset is effectively solved, it can be used as the basis for learning and practicing how to develop, evaluate, and use convolutional deep learning neural networks for image classification from scratch. This includes how to develop a robust test harness for estimating the performance of the model, how to explore improvements to the model, and how to save the model and later load it to make predictions on new data. In this tutorial, you will discover how to develop a convolutional neural network for handwritten digit classification from scratch. After completing this tutorial, you will know: how to develop a test harness for a robust evaluation of a model and to establish a baseline of performance for a classification task; how to explore extensions to a baseline model to improve learning and model capacity; and how to develop a finalized model, evaluate its performance, and use it to make predictions on new images.
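The "robust test harness" the tutorial describes is typically built on repeated train/validation splits. A minimal NumPy sketch of generating k-fold split indices for such a harness (assuming k-fold cross-validation as the evaluation scheme; the function name is illustrative, not from the tutorial's code):

```python
import numpy as np

def kfold_indices(n: int, k: int = 5, seed: int = 1):
    """Yield (train_idx, val_idx) index pairs for a simple k-fold
    cross-validation test harness over n examples."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)          # shuffle once, then slice into folds
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Each fold's validation accuracy is recorded, and the mean and spread across folds give the baseline estimate that model improvements are then compared against.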
Linsho
Basic baseline training script of MNIST