This special session explores federated machine learning in increasingly complex distributed environments. As data is distributed across numerous users and cannot be collected centrally due to privacy constraints, new challenges emerge, including learning in heterogeneous systems, communication-efficient model updates, and the representation of distributed models. We focus on federated learning variants, model compression, adaptive aggregation techniques, and privacy-preservation approaches, including differential privacy.
The session encourages research on deep learning applications in federated settings, particularly in IoT, recommendation systems, medicine, the automotive domain, sensor networks, and text processing, with an emphasis on balancing privacy, efficiency, and performance.