
Protecting Model Updates in Privacy-Preserving Federated Learning

In our second post, we described attacks on models and the concepts of input privacy and output privacy. In our last post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we explore the problem of providing input privacy in PPFL systems for the horizontally partitioned setting.

Models, training, and aggregation

To explore techniques for input privacy in PPFL, we first have to be more precise about the training process. In horizontally partitioned federated learning, a common approach is to ask each participant to train a local copy of the model on their own data and submit only the resulting model update to a central server for aggregation.
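The flow above can be sketched in a few lines. The example below is a toy illustration, not the API of any real PPFL framework: `local_update` is a hypothetical one-step least-squares update, and the masking scheme is a simplified stand-in for secure aggregation, showing how pairwise masks cancel so the server sees only the aggregate of the updates, not any individual participant's update.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_model, local_data):
    # Hypothetical local training step: one gradient step of least squares.
    X, y = local_data
    grad = X.T @ (X @ global_model - y) / len(y)
    return global_model - 0.1 * grad

# Three participants, each holding the same features for different rows
# (horizontal partitioning).
global_model = np.zeros(2)
participants = []
for _ in range(3):
    X = rng.normal(size=(10, 2))
    y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.01, size=10)
    participants.append((X, y))

updates = [local_update(global_model, d) for d in participants]

# Toy secure-aggregation step: each pair (i, j) with i < j shares a
# random mask m_ij; participant i adds it, participant j subtracts it,
# so all masks cancel in the sum. Each masked update on its own looks
# random, but the average of the masked updates equals the plain average.
n = len(updates)
masks = {(i, j): rng.normal(size=2) for i in range(n) for j in range(i + 1, n)}
masked = []
for i, u in enumerate(updates):
    m = sum(masks[(i, j)] for j in range(i + 1, n)) \
        - sum(masks[(j, i)] for j in range(i))
    masked.append(u + m)

aggregate = sum(masked) / n   # what the server computes
plain = sum(updates) / n      # what it would compute without masking
assert np.allclose(aggregate, plain)
```

This pairwise-masking idea is the core intuition behind secure aggregation protocols; real protocols additionally handle dropouts, key agreement, and finite-field arithmetic.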


David Darais

David Darais is a Principal Scientist at Galois, Inc. and supports NIST as a moderator for the Privacy Engineering Collaboration Space. David's research focuses on tools for achieving reliable software in critical, security-sensitive, and privacy-sensitive systems. David received his B.S. from the University of Utah, M.S. from Harvard University and Ph.D. from the University of Maryland.

