2024-08-20

Implementation Challenges in Privacy-Preserving Federated Learning

In this post, we talk with Dr. Xiaowei Huang and Dr. Yi Dong (University of Liverpool), Dr. Mat Weldon (United Kingdom (UK) Office for National Statistics (ONS)), and Dr. Michael Fenton (Trūata), who were winners in the UK-US Privacy-Enhancing Technologies (PETs) Prize Challenges. We discuss implementation challenges of privacy-preserving federated learning (PPFL), specifically in the areas of threat modeling and real-world deployments.

Threat Modeling

In research on privacy-preserving federated learning (PPFL), the protections of a PPFL system are usually encoded in a threat model that defines …
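To make the federated learning setting concrete before the discussion of threat models, here is a minimal sketch of one federated averaging round. All names, the fixed gradients, and the learning rate are illustrative assumptions for this post, not the challenge winners' implementations; a real PPFL system would add protections (e.g. secure aggregation or differential privacy) on top of this basic flow.

```python
from typing import List

def local_update(weights: List[float], gradient: List[float], lr: float = 0.5) -> List[float]:
    """Each client takes one gradient step on its own private data.
    (The gradient is assumed given here; in practice it comes from local training.)"""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights: List[List[float]]) -> List[float]:
    """The server averages the client models; it never sees the raw training data."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# One round with two clients starting from a shared global model.
global_model = [0.0, 0.0]
client_grads = [[1.0, 2.0], [3.0, 4.0]]  # hypothetical per-client gradients
updated = [local_update(global_model, g) for g in client_grads]
new_global = federated_average(updated)
print(new_global)  # → [-1.0, -1.5]
```

The threat-modeling question the post turns to is exactly what an adversary (the server, a client, or an outside observer) is assumed to see in this exchange, and what protections the system must therefore provide.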


Mark Durkee

*Mark Durkee is Head of Data & Technology at the Centre for Data Ethics and Innovation (CDEI). He leads a portfolio of work including the CDEI's work on the Algorithmic Transparency Recording Standard, privacy enhancing technologies, and a broader programme of work focused on promoting responsible access to data. He previously led CDEI's independent review into bias in algorithmic decision-making. He has spent over a decade working in a variety of technology strategy, architecture and cyber security roles within the UK government, and previously worked as a software engineer and completed a PhD in Theoretical Physics.*


Original source
