The FL-Eval-Kit repository hosts the implementation of novel metrics introduced in our research: the Absolute Variation Distance (AVD), and the D-Rank metric.

jpmorganchase/fl-eval-kit

Federated Learning Evaluation Kit (FL-Eval-Kit)

The Federated Learning Evaluation Kit includes two components, the Absolute Variation Distance (AVD) and the D-Rank metric, both published in Advances in Information Retrieval, the proceedings of the 46th European Conference on Information Retrieval (ECIR 2024).

[1] Papadopoulos, G., Satsangi, Y., Eloul, S., Pistoia, M. (2024). Absolute Variation Distance: An Inversion Attack Evaluation Metric for Federated Learning. In: Goharian, N., et al. Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol 14611. Springer, Cham. https://doi.org/10.1007/978-3-031-56066-8_20

[2] Papadopoulos, G., Satsangi, Y., Eloul, S., Pistoia, M. (2024). Ranking Distance Metric for Privacy Budget in Distributed Learning of Finite Embedding Data. In: Goharian, N., et al. Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol 14611. Springer, Cham. https://doi.org/10.1007/978-3-031-56066-8_21

Absolute Variation Distance: An Inversion Attack Evaluation Metric for Federated Learning

Federated Learning (FL) has emerged as a pivotal approach for training models on decentralized data sources by sharing only model gradients. However, the shared gradients are susceptible to inversion attacks that can expose sensitive information. While several defense and attack strategies have been proposed, their effectiveness is often evaluated with metrics that do not necessarily reflect the success rate of an attack or the amount of information retrieved, especially for multidimensional data such as images. Traditional lightweight metrics such as the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Mean Squared Error (MSE) assume only pixel-wise comparison and fail to consider the semantic context of the recovered data. This paper introduces the Absolute Variation Distance (AVD), a lightweight metric derived from total variation, to assess data recovery and information leakage in FL and to address these shortcomings.
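To convey the idea, here is a minimal NumPy sketch of a total-variation-style comparison between an original image and a reconstruction obtained from an inversion attack. It assumes AVD compares local variation (discrete gradient magnitudes) rather than raw pixel values; the exact formulation used in the kit is given in [1] and implemented in AVD_example.py.

```python
import numpy as np

def avd(original, reconstructed):
    """Illustrative absolute-variation-style distance between two images.

    Compares the local variation (discrete gradients) of the two images
    instead of their raw pixel values; a small value suggests the
    reconstruction recovered the structure of the original. This is a
    sketch of the idea only, not the paper's exact definition.
    """
    def variation(img):
        # Discrete gradients along both image axes.
        gy, gx = np.gradient(img.astype(float))
        return np.abs(gx) + np.abs(gy)

    # Mean absolute difference between the two variation maps.
    return float(np.mean(np.abs(variation(original) - variation(reconstructed))))
```

Because the comparison is gradient-based, a reconstruction that preserves edges and shapes scores close to the original even under small pixel-wise shifts, which is the kind of semantic sensitivity that pixel-wise metrics like MSE miss.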

Ranking Distance Metric for Privacy Budget in Distributed Learning of Finite Embedding Data

In this study, we show how privacy in a distributed FL setup is sensitive to the underlying finite embeddings of the confidential data. We show that privacy can be quantified for a client batch that uses either noise or a mixture of finite embeddings by introducing a normalised rank distance (d-rank). This measure has the advantage of taking into account the size of a finite vocabulary embedding and aligning the privacy budget to a partitioned space. We further explore the impact of noise and client batch size on the privacy budget and compare it to the standard $\epsilon$ derived from Local-DP.
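The rank-based idea can be sketched as follows. This illustrative snippet assumes d-rank ranks the true token's embedding among all vocabulary embeddings by distance to the observed (e.g. noised) vector and normalises by vocabulary size, so 0 means the true token is the nearest neighbour and values near 1 mean it is effectively hidden; the exact formulation is given in [2] and implemented in Drank_DP_example.py.

```python
import numpy as np

def d_rank(true_embedding, observed, vocab_embeddings):
    """Illustrative normalised rank distance for a finite vocabulary.

    Counts how many vocabulary embeddings lie strictly closer to the
    observed vector than the true token's embedding, normalised to [0, 1].
    A sketch of the idea only, not the paper's exact definition.
    """
    # L2 distance from the observed vector to every vocabulary embedding.
    dists = np.linalg.norm(vocab_embeddings - observed, axis=1)
    true_dist = np.linalg.norm(true_embedding - observed)
    # Rank = number of vocabulary entries strictly closer than the true one.
    rank = int(np.sum(dists < true_dist))
    return rank / (len(vocab_embeddings) - 1)
```

Normalising by the vocabulary size is what lets the measure account for the finite, partitioned embedding space: the same added noise hides a token far more effectively in a large vocabulary than in a small one.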

Instructions

The files AVD_example.py and Drank_DP_example.py contain examples for the AVD and D-rank metrics, respectively. Executing them reproduces the d-rank results on the IMDB data and the AVD results on the MNIST dataset.
