
[FL-IJCAI'22 Best Student Paper Award]

Last updated: July 1, 2022

Visual Transformer Meets CutMix for Improved Accuracy, Communication Efficiency, and Data Privacy in Split Learning


Authors: Sihun Baek, Jihong Park, Praneeth Vepakomma, Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim


This work has been accepted to FL-IJCAI'22.

This work has been awarded the FL-IJCAI'22 Best Student Paper Award.


This article seeks a distributed learning solution for visual transformer (ViT) architectures. Compared to convolutional neural network (CNN) architectures, ViTs often have larger model sizes and are computationally more expensive, making federated learning (FL) ill-suited. Split learning (SL) can detour this problem by splitting a model and communicating only the hidden representations at the split layer, also known as smashed data. Nonetheless, the smashed data of a ViT are as large as, and as similar to, the input data, negating the communication efficiency of SL while violating data privacy.
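For readers unfamiliar with split learning, the following is a rough, minimal sketch of the setup described above, assuming a PyTorch-style toy ViT. The class names ClientModel and ServerModel, the cut point, and all dimensions are hypothetical choices for illustration, not the paper's code; in this single-process sketch, autograd stands in for the network exchange of smashed data and cut-layer gradients.

```python
# Minimal split-learning sketch (hypothetical names, not the paper's code).
# A ViT is cut at a chosen encoder block: the client runs the lower part and
# sends the hidden representations ("smashed data") to the server, which runs
# the rest of the model and propagates gradients back at the cut layer.
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    """Patch embedding plus the first few transformer blocks (client side)."""
    def __init__(self, dim=192, depth=2):
        super().__init__()
        self.proj = nn.Linear(768, dim)  # toy patch embedding (16x16x3 patches)
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
             for _ in range(depth)])

    def forward(self, patches):          # patches: (B, N, 768)
        x = self.proj(patches)
        for blk in self.blocks:
            x = blk(x)
        return x                         # smashed data: (B, N, dim)

class ServerModel(nn.Module):
    """Remaining transformer blocks plus classifier head (server side)."""
    def __init__(self, dim=192, depth=4, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
             for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, smashed):
        x = smashed
        for blk in self.blocks:
            x = blk(x)
        return self.head(x.mean(dim=1))  # mean-pool tokens, then classify

# One round: the client uploads smashed data, the server computes the loss,
# and gradients flow back to the client through the cut layer.
client, server = ClientModel(), ServerModel()
patches, labels = torch.randn(8, 196, 768), torch.randint(0, 10, (8,))
smashed = client(patches)                # transmitted to the server in real SL
loss = nn.functional.cross_entropy(server(smashed), labels)
loss.backward()
```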


To resolve these issues, we propose a new form of smashed data, coined CutSmashed data, created by randomly punching and compressing the original smashed data, and develop a novel SL framework for ViT, coined CutMixSL. CutMixSL communicates CutSmashed data, thereby reducing communication costs and privacy leakage. Furthermore, CutMixSL inherently involves the CutMix data augmentation, improving accuracy and scalability. Simulations corroborate that CutMixSL outperforms other baselines, including parallelized SL and SplitFed, which integrates FL with SL.
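As a rough illustration of the CutSmashed idea, the sketch below applies random token-level "punching" to two clients' smashed data and stitches the complementary parts into one mixed representation with CutMix-style label mixing. The function cutmix_smashed, the complementary masking scheme, and all shapes are an illustrative interpretation under assumed conventions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of CutSmashed data and its
# CutMix-style mixing on token-level smashed data of shape (B, N, D).
import torch

def cutmix_smashed(s1, s2, y1, y2, keep_ratio=0.5):
    """Mix two clients' smashed data using complementary random token masks."""
    B, N, D = s1.shape
    keep = int(N * keep_ratio)
    order = torch.rand(B, N).argsort(dim=1)          # random token permutation
    mask = torch.zeros(B, N, dtype=torch.bool)
    mask.scatter_(1, order[:, :keep], True)          # tokens taken from client 1
    # In the actual protocol each client uploads only its own surviving tokens,
    # which is where the communication and privacy savings would come from.
    mixed = torch.where(mask.unsqueeze(-1), s1, s2)  # client 1 where mask, else client 2
    lam = keep / N                                   # fraction contributed by client 1
    y_mixed = lam * y1 + (1 - lam) * y2              # CutMix-style label mixing
    return mixed, y_mixed

# Toy usage with one-hot labels for a 10-class task.
s1, s2 = torch.randn(8, 196, 192), torch.randn(8, 196, 192)
y1 = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
y2 = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
mixed, y_mixed = cutmix_smashed(s1, s2, y1, y2)
```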
