Found 198 repositories (showing 30)
FuChong-cyber
Code & supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022).
Nocatnoyou
FL-EASGD: Federated Learning Privacy Security Method Based on Homomorphic Encryption
[Usenix Security 2024] Official code implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning" (https://www.usenix.org/conference/usenixsecurity24/presentation/li-songze)
yjlee22
PyTorch implementation of Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance
JiangChSo
Privacy-preserving federated learning is distributed machine learning in which multiple collaborators train a model through protected gradients. To achieve robustness to users dropping out, existing practical privacy-preserving federated learning schemes are based on (t, N)-threshold secret sharing. Such schemes rely on a strong assumption to guarantee security: the threshold t must be greater than half the number of users. This assumption is so rigorous that in some scenarios the schemes may not be appropriate. Motivated by this issue, we first introduce membership proof for federated learning, which leverages cryptographic accumulators to generate membership proofs by accumulating users' IDs. The proofs are published on a public blockchain for users to verify. Building on membership proof, we propose a privacy-preserving federated learning scheme called PFLM. PFLM relaxes the threshold assumption while maintaining the security guarantees. Additionally, we design a result verification algorithm based on a variant of ElGamal encryption to verify the correctness of the aggregated results from the cloud server, and integrate it into PFLM. A security analysis in the random oracle model shows that PFLM guarantees privacy against active adversaries. Our implementation of PFLM and experiments demonstrate its computation and communication performance.
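As a minimal illustration of the (t, N)-threshold secret sharing this description refers to, a toy Shamir sketch in Python (illustrative parameters only; this is not the PFLM protocol itself):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for a toy secret

def split_secret(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        # Evaluate the degree-(t-1) polynomial at x
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456, t=3, n=5)
assert reconstruct(shares[:3]) == 123456  # any 3 of the 5 shares suffice
```

With t = 3 and n = 5, any two shares reveal nothing about the secret, which is exactly the dropout-robustness/collusion trade-off the abstract's threshold assumption concerns.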
shentt67
[ACM Computing Survey 2025] Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey, by MARS Group at Wuhan University.
Medical data is often highly sensitive in terms of data privacy and security. Federated learning, a machine learning technique, has begun to be used to improve the privacy and security of medical data. In federated learning, the training data is distributed across multiple machines, and the learning process is performed collaboratively. There are several privacy attacks on deep learning (DL) models through which attackers can extract sensitive information. The DL model itself should therefore be protected from adversarial attack, especially in applications using medical data. One solution to this problem is homomorphic encryption-based model protection from an adversarial collaborator. This paper proposes a privacy-preserving federated learning algorithm for medical data using homomorphic encryption. The proposed algorithm uses a secure multi-party computation protocol to protect the deep learning model from adversaries. In this study, the proposed algorithm is evaluated on a real-world medical dataset in terms of model performance.
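A minimal sketch of the secure-aggregation flavor of multi-party computation mentioned here, using pairwise additive masking rather than the paper's homomorphic encryption scheme (toy scalar updates; in a real protocol the pairwise masks would come from agreed seeds, not a shared RNG):

```python
import random

MOD = 2**32  # work in a fixed modulus so masks wrap cleanly

def mask_updates(updates):
    """Pairwise additive masking: client i adds r_ij for j > i and
    subtracts r_ji for j < i, so every mask cancels in the sum and the
    server learns only the aggregate, never an individual update."""
    n = len(updates)
    rng = random.Random(0)  # stand-in for pairwise-agreed seeds
    masks = [[rng.randrange(MOD) for _ in range(n)] for _ in range(n)]
    masked = []
    for i, u in enumerate(updates):
        m = u % MOD
        for j in range(n):
            if j > i:
                m = (m + masks[i][j]) % MOD
            elif j < i:
                m = (m - masks[j][i]) % MOD
        masked.append(m)
    return masked

updates = [5, 7, 9]  # toy integer "gradients" from three clients
masked = mask_updates(updates)
assert sum(masked) % MOD == sum(updates) % MOD  # masks cancel in the sum
```

Each individual masked value looks uniformly random, but the server can still compute the exact aggregate, which is the property both MPC and homomorphic aggregation provide.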
SPIN-UMass
Code for USENIX Security 2023 Paper "Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks"
SamuelGong
[USENIX Security'24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning
Szczecin
Secure multi-party computation techniques & federated learning
SyedUmaidAhmed
This is the official implementation of Federated and Split Learning on multiple Raspberry Pis. It demonstrates training on edge devices without exposing the data to security threats.
Federated learning is a distributed learning method that trains a deep network on user devices without collecting data on a central server. It is useful when the central server cannot collect data. However, the absence of data on the central server means that data-driven deep network compression is not possible. Deep network compression is important because it enables inference even on devices with low capacity. In this paper, we propose a new quantization method that significantly reduces FPROPS (floating-point operations per second) in deep networks without leaking user data in federated learning. Quantization parameters are trained with the general learning loss and updated simultaneously with the weights. We call this method OQFL (Optimized Quantization in Federated Learning). OQFL learns deep networks and quantization while maintaining security in a distributed network environment, including edge computing. We introduce the OQFL method and simulate it on various convolutional deep neural networks. We show that OQFL is feasible in most representative convolutional deep neural networks. Surprisingly, OQFL (4 bits) can preserve the accuracy of conventional federated learning (32 bits) on the test dataset.
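To make the 4-bit-vs-32-bit idea concrete, a minimal uniform symmetric quantization sketch (a generic baseline, not OQFL's learned quantization parameters):

```python
import numpy as np

def quantize(w, bits=4):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for 4-bit signed
    scale = float(np.max(np.abs(w))) / qmax or 1.0  # avoid div-by-zero scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to approximate float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.51, -0.22, 0.03, -0.94], dtype=np.float32)
q, s = quantize(w, bits=4)
w_hat = dequantize(q, s)
# Reconstruction error is bounded by half a quantization step (scale / 2)
```

OQFL instead trains the quantization parameters jointly with the weights via the learning loss; this fixed-scale version only shows the storage/compute saving that motivates it.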
Official code for the paper "Turning Privacy-preserving Mechanisms against Federated Learning", accepted at the ACM Conference on Computer and Communications Security (CCS) 2023.
brechtvandervliet
Mobile devices contain highly sensitive data, making them an attractive target for attackers. As an Android malware classifier, LiM aims to tackle security issues while respecting user privacy by leveraging the power of federated learning. Compared to centralized learning, the unique properties of federated learning open up new attack surfaces for adversaries. For instance, an adversary can attempt to have a targeted malicious app misclassified as clean by sending poisoned model updates into the federation. This work builds on LiM with the aim of improving its resistance against these poisoning attacks. First, I formulate and test several targeted model update poisoning attacks. Depending on assumptions about the adversary's knowledge, the attacks successfully compromise around 10 to 25% of the honest client devices in the federation. Second, while most defenses trade performance for resistance, I propose a simple defense strategy that can never decrease the performance of the federation. Against a strong adversary with knowledge of the algorithm used to aggregate the model updates, the defense was mostly insufficient to prevent poisoning. In the presence of a more realistic adversary, the defense caused LiM to regain best-case performance, comparable to its performance in a scenario without an adversary.
hamidmozaffari
Code for USENIX Security 2023 Paper "Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks"
Ali-hey-0
FSociety Genesis is an advanced botnet/C2 simulation demonstrating cutting-edge offensive security techniques in a single Python framework. This lab-safe implementation includes quantum cryptography, AI-driven attacks, federated learning, blockchain command tracking, and multi-vector attack capabilities.
changhongyan123
Official code for the paper "Efficient Privacy Auditing in Federated Learning", published at the 33rd USENIX Security Symposium (USENIX Security 2024).
rasidi3112
Experimental platform exploring the integration of Federated Learning, Quantum Machine Learning (VQC, QKA), and Post-Quantum Cryptography. Built with PennyLane, FastAPI, and Flutter. Features quantum-enhanced aggregation, zero-noise extrapolation, and Kyber/Dilithium security. Research/educational project - not production ready.
tvsaimanoj
A Personal Academic Bibliography of "Security in Federated Learning"
Advances in cellular technology are a key driver of the growing automotive Vehicle to Everything (V2X) market. In V2X communications, information from sensors and other sources travels via high-bandwidth, low-latency, high-reliability links, paving the way to fully autonomous driving and intelligent mobility. With the future adoption of 5G and beyond (5G&B) networks, V2X is likely to generate a huge volume of data, which encourages the use of edge computing and pushes the system to learn the model locally to support real-time applications. However, the edge computing paradigm raises concerns about the security and privacy of local nodes (e.g., vehicles) and the increased risk of cyberattacks. In this article, we identify open research questions, key requirements, and potential solutions to provide cyber resilience in V2X communications.
Arjuna247
Industrial IoT Predictive Maintenance System with Edge AI, Federated Learning, and Blockchain Security
Federated Learning (FL) is a collaborative machine learning approach that enables decentralized data processing. Instead of collecting and storing data in a central server, FL trains machine learning models directly on devices or servers where the data resides, enhancing privacy and security.
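The aggregation step described above can be sketched with a minimal FedAvg-style weighted average (a generic illustration with made-up client sizes, not any specific repository's code):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average each parameter tensor across clients,
    weighted by the number of local training examples. Only model
    parameters are shared; raw data never leaves the clients."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two clients, each holding a single parameter vector
w_a = [np.array([1.0, 2.0])]   # client A, trained on 100 examples
w_b = [np.array([3.0, 4.0])]   # client B, trained on 300 examples
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
# Weighted average: 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

In a full round, the server would broadcast `global_w` back to the clients for the next local training pass.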
ning-wang1
This repository accompanies our paper "FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations", accepted at the 17th ACM ASIA Conference on Computer and Communications Security (AsiaCCS 2022).
iflytek
Iflearner Flow is a secure multi-party joint-task scheduling platform built on the underlying federated learning framework Iflearner.
futabato
Federated learning framework for security researchers
abhitall
Federated learning framework for robust temporal credit risk modeling with security and monitoring components.
No description available
vasilisevag
No description available
fernandocmtz
This project explores security threats in Federated Learning (FL) by implementing a label flipping attack and using median aggregation as a defense. It includes a CNN model, an FL server-client setup, and evaluation metrics such as accuracy and convergence. Built with PyTorch, FLWR, NumPy, and Matplotlib.
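The attack and defense this description names can be sketched in a few lines (a generic NumPy illustration with toy values, not this project's PyTorch/FLWR implementation):

```python
import numpy as np

def flip_labels(y, src=3, dst=8):
    """Targeted label-flipping attack: relabel every `src` example as `dst`
    in a malicious client's local training data."""
    y = y.copy()
    y[y == src] = dst
    return y

def median_aggregate(updates):
    """Coordinate-wise median of client updates: robust to a minority of
    extreme (poisoned) updates, unlike a plain mean."""
    return np.median(np.stack(updates), axis=0)

# Three honest updates plus one crafted outlier from the attacker
honest = [np.array([0.10, 0.20]), np.array([0.12, 0.18]), np.array([0.09, 0.21])]
poisoned = np.array([5.0, -5.0])
agg = median_aggregate(honest + [poisoned])
# The median stays close to the honest updates; a mean would be dragged
# far off by the single poisoned one.
```

This is the intuition behind median aggregation as a defense: the attacker must control roughly half the clients before it can move the coordinate-wise median.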
No description available