
A new journal article, “Privacy-Preserving Hyperparameter Tuning for Federated Learning,” authored by Natalija Mitic, Apostolos Pyrgelis, and Sinem Sav, has been published in IEEE Transactions on Privacy.
The paper addresses the challenge of hyperparameter tuning in federated learning while maintaining data privacy. Federated learning enables multiple entities to collaboratively train machine learning models without sharing raw data. However, optimizing hyperparameters such as learning rate and momentum in this setting can introduce privacy risks, as information about local datasets may be inferred from shared tuning results.
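For context, the sketch below shows one round of generic federated averaging (FedAvg) in Python. It is a minimal illustration of the federated setting the paper assumes, not the authors' method; the linear model, client data, and all function names are invented for this example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One FedAvg round: each client trains locally; only model weights travel."""
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # The server averages the updates, weighted by local dataset size.
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each holding its own private dataset
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.05 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, clients)
print("recovered weights:", w)  # approaches true_w; raw data never left the clients
```

Note that while the raw data stays local, the shared updates (and, in hyperparameter tuning, the shared tuning results) can still leak information about the local datasets, which is the risk the paper targets.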
The authors analyze different strategies for tuning hyperparameters in federated learning environments and introduce PRIVTUNA, a framework based on multiparty homomorphic encryption. The framework enables privacy-preserving hyperparameter tuning by keeping local hyperparameter values confidential while still allowing efficient aggregation at the central server. The study evaluates the computational and communication overhead of the approach and demonstrates its applicability in federated learning settings with both IID (independent and identically distributed) and non-IID client data.
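PRIVTUNA itself is built on multiparty homomorphic encryption, in which decryption requires the cooperation of the participating parties. As a rough single-key stand-in, the sketch below uses Paillier encryption, a classic additively homomorphic scheme, to show how a server could average clients' locally tuned learning rates without ever seeing the individual values. The toy primes, fixed-point scaling, and example rates are assumptions made purely for illustration and are not the paper's construction.

```python
import random
from math import gcd

# Toy Paillier parameters -- far too small to be secure; illustration only.
p, q = 999983, 1000003                          # two known primes around 10**6
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # valid because g = n + 1 (Python 3.8+)

def encrypt(m):
    """Paillier encryption: E(m) = g^m * r^n mod n^2."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:                       # r must be coprime to n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n, with L(x) = (x-1)/n."""
    return (pow(c, lam, n2) - 1) // n * mu % n

SCALE = 10**6  # fixed-point encoding for fractional hyperparameters

# Each client encrypts its locally tuned learning rate.
local_rates = [0.004, 0.010, 0.007]
ciphertexts = [encrypt(int(r * SCALE)) for r in local_rates]

# The server aggregates under encryption: multiplying ciphertexts adds plaintexts.
agg = 1
for c in ciphertexts:
    agg = agg * c % n2

mean_rate = decrypt(agg) / SCALE / len(local_rates)
print("aggregated learning rate:", mean_rate)   # 0.007; individual rates stay hidden
```

In a true multiparty scheme, no single party would hold `lam` and `mu`; decryption of the aggregate would itself be a joint protocol, which is what lets PRIVTUNA keep individual hyperparameter values confidential end to end.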
The research advances privacy-preserving machine learning techniques and was conducted as part of the Horizon Europe HARPOCRATES project. The findings have implications for federated learning applications in sectors that require strict data confidentiality, such as healthcare and finance.
The full paper is available on Zenodo.