By RAMO

Mix2SFL: Two-Way Mixup for Scalable, Accurate, and Communication-Efficient Split Federated Learning

In recent years, split learning (SL) has emerged as a promising distributed learning framework that can utilize big data in parallel without privacy leakage while reducing client-side computing resources. In the original implementation of SL, however, the server serves multiple clients sequentially, incurring high latency. Parallel implementations of SL can alleviate this latency problem, but existing Parallel SL algorithms compromise scalability due to a fundamental structural problem. To address this, our previous works proposed two scalable Parallel SL algorithms, dubbed SGLR and LocFedMix-SL, which resolve this fundamental problem of the Parallel SL structure. In this article, we propose a novel Parallel SL framework, coined Mix2SFL, that improves both accuracy and communication efficiency while still ensuring scalability. Mix2SFL first supplies more samples to the server through a manifold mixup between the smashed data uploaded to the server, as in the SmashMix of LocFedMix-SL, and then averages the split-layer gradients, as in the GradMix of SGLR, followed by local model aggregation as in SFL. Numerical evaluations corroborate that Mix2SFL achieves improved accuracy and latency compared to state-of-the-art SL algorithms while preserving scalability.
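To make the pipeline concrete, the sketch below walks through one Mix2SFL-style training round in PyTorch: clients upload smashed data, the server trains on both the original and the mixed activations (SmashMix), the split-layer gradients are averaged before being returned (GradMix), and client models are aggregated as in SFL. This is a minimal illustration under assumptions of our own, not the paper's reference implementation: the function and variable names (mix2sfl_round, uploads, avg_grad, etc.), the cross-entropy classification task, and equal batch sizes across clients are all illustrative choices.

```python
import copy
import torch
import torch.nn.functional as F

def mix2sfl_round(client_models, server_model, batches,
                  client_opts, server_opt, alpha=2.0):
    """One illustrative Mix2SFL round: SmashMix -> GradMix -> FedAvg.
    Assumes a classification task and equal batch sizes across clients."""
    # 1) Each client forwards to the split layer and uploads its smashed data.
    local_acts, uploads, labels = [], [], []
    for model, (x, y) in zip(client_models, batches):
        z = model(x)
        u = z.detach().requires_grad_(True)   # server-side view of smashed data
        local_acts.append(z)
        uploads.append(u)
        labels.append(y)

    # 2) SmashMix: manifold mixup between pairs of uploaded smashed data,
    #    supplying extra mixed samples to the server-side model.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(len(uploads)).tolist()

    loss = sum(F.cross_entropy(server_model(u), y)
               for u, y in zip(uploads, labels))
    for i, j in enumerate(perm):
        out = server_model(lam * uploads[i] + (1 - lam) * uploads[j])
        loss = loss + lam * F.cross_entropy(out, labels[i]) \
                    + (1 - lam) * F.cross_entropy(out, labels[j])

    server_opt.zero_grad()
    loss.backward()                           # fills u.grad for every upload
    server_opt.step()

    # 3) GradMix: average the split-layer gradients and return the same
    #    averaged gradient to every client for local backpropagation.
    avg_grad = torch.stack([u.grad for u in uploads]).mean(dim=0)
    for opt, z in zip(client_opts, local_acts):
        opt.zero_grad()
        z.backward(avg_grad)
        opt.step()

    # 4) Local model aggregation as in SFL: simple FedAvg over client weights.
    global_state = copy.deepcopy(client_models[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]).mean(dim=0)
    for m in client_models:
        m.load_state_dict(global_state)
```

Note that in this sketch the averaged split-layer gradient is broadcast identically to all clients; a single common gradient in place of one per-client gradient is also where a communication saving of gradient averaging would come from.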

Full Paper: S. Oh, H. Nam, J. Park, P. Vepakomma, R. Raskar, M. Bennis, and S.-L. Kim, "Mix2SFL: Two-Way Mixup for Scalable, Accurate, and Communication-Efficient Split Federated Learning," accepted to IEEE Transactions on Big Data.
