Investigates the robustness of several supervised learning models against adversarial attacks on the MNIST dataset. Implements the Fast Gradient Sign Method (FGSM) to generate adversarial examples and evaluates model performance under varying perturbation levels. Includes a Jupyter notebook with code, visualizations, and analysis.
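The description says the notebook implements FGSM. As a minimal sketch of the idea (not the repository's actual code), here is FGSM against a linear softmax classifier, where the input-gradient of the cross-entropy loss can be written in closed form; the weight shapes and the `fgsm_attack` name are assumptions for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fgsm_attack(W, b, X, y, epsilon):
    """FGSM on a linear softmax classifier (hypothetical sketch).

    Each input is perturbed by epsilon * sign(d loss / d x), the direction
    that locally increases the cross-entropy loss the fastest.
    W: (784, 10) weights, b: (10,) bias, X: (n, 784) inputs in [0, 1],
    y: (n,) integer labels, epsilon: perturbation budget.
    """
    p = softmax(X @ W + b)              # predicted class probabilities (n, 10)
    p[np.arange(len(y)), y] -= 1.0      # p - onehot(y): loss gradient w.r.t. logits
    grad = p @ W.T                      # loss gradient w.r.t. the inputs (n, 784)
    return np.clip(X + epsilon * np.sign(grad), 0.0, 1.0)  # stay in valid pixel range
```

Sweeping `epsilon` over a range of values and re-measuring accuracy on the perturbed inputs gives the "performance under varying perturbation levels" curve the description refers to.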
Stars: 7 · Forks: 0 · Watchers: 7 · Open Issues: 0
15 commits
- 4c84770 Merge pull request #3 from adityaMachal/feature/adversarial_attack_eval
- 7ae18be Merge pull request #2 from adityaMachal/feature/mlops-productization
- fce632e add app.py: Create FastAPI app with MNIST prediction endpoint
- 9024743 Add predict.py: load RandomForest model and test prediction
- 82e7d78 Add train_rf.py: train RandomForest on MNIST and save model
- 062045b Merge pull request #1 from adityaMachal/adityaMachal-patch-1
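The commit log describes a `train_rf.py` that trains a RandomForest on MNIST and saves the model. A hypothetical sketch of that step is below; it substitutes scikit-learn's small `load_digits` dataset for MNIST to stay self-contained, and the file name `rf_model.joblib` is an assumption, not taken from the repository:

```python
# Hypothetical sketch of a train_rf.py-style script: train a RandomForest
# on a digits dataset, report held-out accuracy, and persist the model.
import joblib
from sklearn.datasets import load_digits  # small stand-in for MNIST
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")

joblib.dump(clf, "rf_model.joblib")  # predict.py / app.py would reload this file
```

The saved model is what a `predict.py` script or the FastAPI `/predict`-style endpoint mentioned in the commits would load with `joblib.load` to serve predictions.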