The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining
Train a new model from scratch
Neither practical nor economical for real-world models.
Challenging for the central server to erase a client's contribution to the model (clients continually share the knowledge learned from their local datasets with other clients via aggregation).
All subsequent model updates will implicitly relate to the model updates of these clients.
How to design a rapid retraining approach?
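As context for why a client's contribution is hard to erase, a toy sketch (not the paper's method) of FedAvg aggregation: every client's update is folded into the global model each round, so all later rounds implicitly depend on it, and the naive remedy is retraining from scratch without that client.

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted average of client updates (standard FedAvg aggregation)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return float(sum(w * u for w, u in zip(weights, updates)))

# Toy 1-D "model": each client nudges the global model by a fixed offset.
global_model = 0.0
for _ in range(3):
    updates = [global_model + 1.0, global_model + 2.0, global_model - 0.5]
    global_model = fedavg(updates, [1, 1, 1])

# Naive unlearning of client 2 (+2.0 offset): retrain from scratch without it.
retrained = 0.0
for _ in range(3):
    retrained = fedavg([retrained + 1.0, retrained - 0.5], [1, 1])
```

The retrained model differs from the original global model at every round, showing that the revoked client's influence was entangled in the whole trajectory rather than a single removable term.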
2
Mitigating Poor Data Quality Impact with Federated Unlearning for Human-Centric Metaverse
Server-side Federated Unlearning method.
Low-throughput FL.
Loss-based model quality assessment.
Non-communicative FUL
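A minimal sketch of what a server-side, loss-based quality assessment could look like (the linear model, names, and scoring rule are illustrative assumptions, not the paper's exact method): score each client's updated model by how much it reduces the loss on a small server-held reference set, and down-weight or drop updates that increase it.

```python
import numpy as np

def update_quality(global_w, client_w, X_ref, y_ref):
    """Loss-based quality score: how much the client's updated model reduces
    MSE on a small server-side reference set (higher = better)."""
    loss = lambda w: float(np.mean((X_ref @ w - y_ref) ** 2))
    return loss(global_w) - loss(client_w)

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(32, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y_ref = X_ref @ true_w               # reference labels from the true model

global_w = np.zeros(4)
good_update = 0.9 * true_w           # moves toward the data-generating model
bad_update = -true_w                 # moves away from it (poor-quality data)

good_score = update_quality(global_w, good_update, X_ref, y_ref)
bad_score = update_quality(global_w, bad_update, X_ref, y_ref)
```

A positive score means the update improved the reference loss; a negative score flags an update the server may want to unlearn, without any extra communication with clients.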
3
Federated Unlearning with Momentum Degradation
Divide unlearning into two steps: 1) knowledge erasure (based on MoDe, momentum degradation) and 2) memory guidance (fine-tuning). Two types of unlearning supported: client revocation and category removal.
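A 1-D sketch of the two steps as summarized above (hyperparameters, the zero "fresh" init, and the retained-data loss are all illustrative assumptions, not the paper's exact formulation): knowledge erasure pulls the trained weights toward a freshly initialized model with momentum; memory guidance then fine-tunes on retained data to restore what should be kept.

```python
import numpy as np

def momentum_degrade(trained_w, fresh_w, beta=0.9, steps=5):
    """Knowledge erasure: move weights toward a freshly initialized
    'degradation' model, retaining a beta fraction each step."""
    w = trained_w.copy()
    for _ in range(steps):
        w = beta * w + (1 - beta) * fresh_w
    return w

def memory_guide(w, grad_fn, lr=0.1, steps=20):
    """Memory guidance: plain gradient descent on the retained data's loss."""
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

trained = np.array([2.0])   # model that has memorized the forget data
fresh = np.zeros(1)         # fresh init (zeros for the demo)
erased = momentum_degrade(trained, fresh)           # pulled toward fresh init
# Retained-data loss (w - 1.5)^2 — the knowledge we want to keep.
restored = memory_guide(erased, lambda w: 2 * (w - 1.5))
```

The degradation step strips weights toward an uninformed model, and the guidance step recovers performance on retained knowledge without re-exposing the forgotten data.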
4
Toward Efficient and Robust Federated Unlearning in IoT Networks
Simultaneous presence of multiple tampered devices or varying data quality must be taken into account.
How to improve the efficiency and robustness of FU in IoT networks?
Unreliable updates from malicious clients cause deviations from the FL algorithm → inconsistent performance across rounds.
Every few rounds → attack detection by comparing the performance of the current global model with that of previous rounds.
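The periodic check above can be sketched as follows (the window and drop threshold are illustrative choices, not values from the paper): flag a round as suspicious when the current global model's accuracy falls well below the recent-rounds average.

```python
import numpy as np

def detect_attack(acc_history, current_acc, window=3, tol=0.05):
    """Flag an attack when current accuracy drops more than `tol` below the
    average of the last `window` rounds (threshold choice is illustrative)."""
    if len(acc_history) < window:
        return False                       # not enough history yet
    recent = float(np.mean(acc_history[-window:]))
    return (recent - current_acc) > tol

history = [0.71, 0.73, 0.74, 0.75]
suspicious = detect_attack(history, 0.62)  # sharp drop across rounds
normal = detect_attack(history, 0.74)      # ordinary fluctuation
```

Running this only every few rounds keeps the detection overhead low while still catching the round-to-round inconsistency that tampered devices introduce.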
5
Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation
A pruning-based scheme not only removes the information of the unlearned classes but also erases information from some of the remaining data at the same time → performance degradation.
Cached gradients are not updated along with the unlearning process → hinders unlearning.
FUL involves more randomness than centralized machine unlearning (CMU).
Purpose of unlearning → remove the effect of certain specific samples from the model, not the whole dataset → find the partial parameters that most affect those samples.
On-server training with a small set of data samples.
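One way to sketch "find the partial parameters that most affect the forget samples" with a model-explanation proxy (a simple gradient-saliency heuristic on a linear model; the paper's actual explanation method and update rule may differ): rank parameters by gradient magnitude on the forget set, then update only the top-ranked ones.

```python
import numpy as np

def select_influential_params(w, X_forget, y_forget, top_k):
    """Rank parameters of a linear model by gradient magnitude on the forget
    samples (a simple explanation proxy) and return the top-k indices."""
    grad = X_forget.T @ (X_forget @ w - y_forget) / len(y_forget)
    return np.argsort(-np.abs(grad))[:top_k]

def unlearn_selected(w, idx, X_forget, y_forget, lr=0.1, steps=10):
    """Gradient-ascend the forget-set loss, touching only the selected
    parameters, so the rest of the model stays intact."""
    w = w.copy()
    for _ in range(steps):
        grad = X_forget.T @ (X_forget @ w - y_forget) / len(y_forget)
        w[idx] += lr * grad[idx]   # ascent on the forget loss only
    return w

# Forget samples depend only on parameter 0, so only it should be selected.
X_forget = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
y_forget = np.array([1.0, 1.0, 1.0])
w = np.array([0.5, 0.3])
idx = select_influential_params(w, X_forget, y_forget, top_k=1)
w_new = unlearn_selected(w, idx, X_forget, y_forget)
```

Because only the selected parameters move, information carried by the remaining parameters (and hence the remaining data) is largely preserved, addressing the degradation problem noted above.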
6
Fast Model Update for IoT Traffic Anomaly Detection with Machine Unlearning
Estimate the unlearning belief values of training samples (the likelihood that a sample in an application context will be unlearned in the future).
Divide the training samples into different groups based on the unlearning belief values.
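The grouping step can be sketched as a simple partition by belief score (the three-way split and thresholds are illustrative assumptions): samples likely to be unlearned end up isolated, so a future deletion request touches only their group.

```python
def group_by_belief(samples, beliefs, thresholds=(0.3, 0.7)):
    """Partition samples into low/mid/high unlearning-belief groups
    (thresholds are illustrative)."""
    low, mid, high = [], [], []
    for s, b in zip(samples, beliefs):
        (low if b < thresholds[0] else mid if b < thresholds[1] else high).append(s)
    return low, mid, high

samples = ["flow_a", "flow_b", "flow_c", "flow_d"]
beliefs = [0.1, 0.5, 0.9, 0.2]
low, mid, high = group_by_belief(samples, beliefs)
```

Training high-belief groups in separable sub-models means a later unlearning request only requires retraining the affected group, which is what makes the model update fast.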
7
Federated Unlearning via Class-Discriminative Pruning
Selectively forget categories in FL
TF-IDF-style scoring of the channels' contribution to class discrimination.
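A plausible reading of the TF-IDF analogy (a sketch, not the paper's exact formulation): channels play the role of terms and classes the role of documents, so a channel that activates strongly for one class but few others gets a high score and becomes a pruning candidate when that class must be forgotten.

```python
import numpy as np

def channel_tfidf(class_activations):
    """TF-IDF analogy for class discrimination: channels are 'terms',
    classes are 'documents'. Input: (num_classes, num_channels) matrix of
    mean channel activations per class."""
    A = np.asarray(class_activations, dtype=float)
    tf = A / A.sum(axis=1, keepdims=True)      # activation share per class
    df = (A > A.mean(axis=0)).sum(axis=0)      # classes a channel 'fires' for
    idf = np.log(A.shape[0] / (1.0 + df)) + 1.0
    return tf * idf                            # (num_classes, num_channels)

def channels_to_prune(scores, target_class, k):
    """Prune the k channels most discriminative for the class to forget."""
    return np.argsort(-scores[target_class])[:k]

# Channel 2 fires mainly for class 0, so pruning it should forget class 0.
acts = np.array([[1.0, 1.0, 8.0],
                 [1.0, 1.0, 1.0],
                 [1.0, 1.0, 1.0]])
scores = channel_tfidf(acts)
prune = channels_to_prune(scores, target_class=0, k=1)
```

Pruning only the class-discriminative channels, rather than whole layers, is what lets the scheme selectively forget one category while sparing the others.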
8
Communication Efficient and Provable Federated Unlearning
Communication efficiency.
Data availability.
Requires provable guarantees.
General framework for communication efficient and provable federated unlearning.
Adjust the number of clients sampled per round and the mini-batch size per iteration.
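As a back-of-the-envelope sketch of the client-sampling knob (all numbers illustrative), total communication in a synchronous run scales with rounds × sampled clients × model size, so sampling fewer clients per round can cut traffic even when a few extra rounds are needed:

```python
def communication_mb(rounds, clients_per_round, model_size_mb):
    """Total traffic in MB: each sampled client downloads and uploads one
    full model copy per round."""
    return rounds * clients_per_round * 2 * model_size_mb

# Unlearning with fewer clients per round: more rounds, less total traffic.
full = communication_mb(rounds=100, clients_per_round=10, model_size_mb=25)
lean = communication_mb(rounds=120, clients_per_round=5, model_size_mb=25)
```

The framework's point is to tune these knobs while keeping the unlearning guarantee provable, not merely to minimize traffic.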
9
Asynchronous Federated Unlearning