List of Questions to Explore #113
-
Here are a few clarifications from my side:
Maybe I will add one simple point (I hope it makes things simpler :) ): we are fine-tuning a model that was pre-trained, for example, on ImageNet.
It would be a great help to have a full list of the information an adversary needs to know to conduct inference attacks (model architecture, loss function, other hyperparameters used in training, etc.).
Currently, in OpenFL we don't exchange gradients. Instead, we exchange checkpoints, so the same questions are still valid, with one addition: by "local update" we mean a checkpoint, not gradients.
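To make the checkpoint-vs-gradient point concrete, here is a minimal sketch (my own illustration, not OpenFL code; the variable names and dummy state dicts are placeholders) of how the effective local update can be recovered from two checkpoints, so the inference-attack questions above still apply to it:

```python
import torch

def update_delta(global_ckpt: dict, local_ckpt: dict) -> dict:
    """Per-tensor difference between the global checkpoint a party received and
    the checkpoint it produced after local training, i.e. the effective local
    update that plays the role gradients play in gradient-exchange FL."""
    return {name: local_ckpt[name] - global_ckpt[name] for name in global_ckpt}

# Toy example with dummy state dicts (real checkpoints would come from OpenFL).
global_ckpt = {"fc.weight": torch.zeros(2, 2)}
local_ckpt = {"fc.weight": torch.full((2, 2), 0.1)}
delta = update_delta(global_ckpt, local_ckpt)
```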
-
Hi, I would like to add two new research questions here based on our meeting on Jul 7.
-
Regarding the exploration of the ML Privacy Meter during federated training using OpenFederatedLearning: here are some questions raised during our video chat that could be very helpful for our integration and for the further development of the privacy meter.
About the privacy meter:
In the transfer learning setting, the base model is trained on dataset A. Now, we fine-tune the base model on dataset B and release the fine-tuned model B. Can we use the privacy meter to measure the information leakage of model B about dataset A?
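To make this concrete, one simple way to probe it would be a loss-based membership test on dataset A against the released model B. This is only a minimal sketch under my own assumptions, not the ML Privacy Meter API; `model_B` and the dataset-A loaders in the commented usage are placeholders:

```python
import numpy as np
import torch
import torch.nn.functional as F

def per_sample_losses(model, loader, device="cpu"):
    """Cross-entropy loss of each sample under `model`; training-set members
    tend to have lower loss than non-members."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            losses.append(
                F.cross_entropy(logits, y.to(device), reduction="none").cpu().numpy()
            )
    return np.concatenate(losses)

# member_losses    = per_sample_losses(model_B, datasetA_train_loader)
# nonmember_losses = per_sample_losses(model_B, datasetA_holdout_loader)
# If the two loss distributions remain clearly separable, the fine-tuned model B
# still leaks membership information about dataset A.
```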
When using shadow models to conduct membership inference attacks, is it necessary to train the shadow models on the same learning task as the target model? For example, can we use shadow models trained for classification tasks to conduct membership inference attacks on a target model built for face recognition?
Similar to the previous question, what if the adversary doesn't know the loss function used to train the target model? Can we still attack the target model with similar accuracy?
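For context on the last two questions, here is a rough sketch (my own simplification, not the ML Privacy Meter implementation) of how a shadow-model attack is usually assembled. The attack classifier is fitted on loss features from shadow models whose membership ground truth is known, which is why a mismatch in task or loss function between shadow and target models could shift those features and hurt the attack:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_attack_model(shadow_member_losses, shadow_nonmember_losses):
    """Fit a member/non-member classifier on per-sample losses collected from
    shadow models whose membership ground truth is known to the adversary."""
    X = np.concatenate([shadow_member_losses, shadow_nonmember_losses]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(shadow_member_losses)),
                        np.zeros(len(shadow_nonmember_losses))])
    return LogisticRegression().fit(X, y)

# Synthetic illustration: members tend to have lower loss than non-members.
rng = np.random.default_rng(0)
attack = train_attack_model(rng.normal(0.2, 0.1, 1000), rng.normal(0.8, 0.3, 1000))
# At attack time the classifier is applied to the target model's losses; if the
# shadow models were trained with a different task or loss function, the target's
# loss distribution may not match the one the classifier was fitted on.
```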
About the privacy meter in federated learning:
What interaction is required if we want to implement active attacks in FL to measure the privacy risk?
Who will use the privacy meter to measure the privacy risk (the server or the parties)? If the server measures the privacy risk, what information does it need to broadcast back to the local parties in the end?
An interesting application is to enable parties to assess the privacy risk locally. There are two possibilities: (1) measuring the information leakage through the local update (the gradients sent to the server), and (2) measuring the information leakage through the aggregate update (the aggregate gradients every party obtains).
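As a purely illustrative way of comparing (1) and (2): compute the same loss-based membership score once for the model implied by the local update and once for the model implied by the aggregate update, and compare how separable the party's members and non-members are in each case. The function below is a sketch with a crude separability proxy; the commented inputs are placeholders, not OpenFL or ML Privacy Meter calls:

```python
import numpy as np

def separability(member_losses: np.ndarray, nonmember_losses: np.ndarray) -> float:
    """Crude leakage proxy: gap between mean non-member and member loss,
    normalised by the pooled standard deviation (larger gap = more leakage)."""
    pooled_std = np.concatenate([member_losses, nonmember_losses]).std() + 1e-12
    return (nonmember_losses.mean() - member_losses.mean()) / pooled_std

# local_risk     = separability(local_update_member_losses, local_update_nonmember_losses)
# aggregate_risk = separability(aggregate_member_losses, aggregate_nonmember_losses)
# Comparing the two per round would tell a party whether its own update or the
# aggregate update reveals more about its local training data.
```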
Besides, what can we do with the results of the privacy meter? One potential application is to use them to determine how much noise the parties need to add to ensure differential privacy.
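As a hedged sketch of what that could look like: how a measured risk maps to a target (epsilon, delta) is still an open question, but once a target is chosen, the classical Gaussian-mechanism calibration (valid for epsilon in (0, 1) and an update clipped to a known L2 norm) gives the noise scale:

```python
import math

def gaussian_sigma(epsilon: float, delta: float, sensitivity: float = 1.0) -> float:
    """Classical Gaussian mechanism (Dwork & Roth):
    sigma >= sqrt(2 * ln(1.25 / delta)) * sensitivity / epsilon."""
    return math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon

# Example: to target (epsilon=0.5, delta=1e-5), a party would add zero-mean
# Gaussian noise with roughly this standard deviation to its clipped update
# before sending it to the server.
print(gaussian_sigma(epsilon=0.5, delta=1e-5))  # ~9.69
```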