Parallel & Distributed Computing

Our research addresses key challenges in deep learning frameworks: efficiency, scalability, privacy, and energy consumption. We develop solutions that optimize deep learning in parallel and distributed computing settings. By leveraging techniques such as split learning, federated learning, data augmentation, and feature-based offloading, we improve communication efficiency, data privacy, convergence speed, and energy efficiency of deep learning models. Our goal is to make deep learning services and AI quality control more accessible and effective on mobile devices and edge networks, enabling a wide range of applications and better user experiences.
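For context, the sketch below illustrates the basic split-learning step that underlies much of this work: a client computes "smashed data" (cut-layer activations), a server finishes the forward and backward pass, and the cut-layer gradient is returned to the client. The layer sizes, optimizers, and single simulated client/server pair are illustrative assumptions, not our actual implementation.

```python
# Minimal split-learning sketch (hypothetical model sizes; client and server
# are simulated in one process rather than communicating over a network).
import torch
import torch.nn as nn

# Client holds the lower layers; server holds the upper layers.
client_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
server_model = nn.Sequential(nn.Linear(256, 10))

client_opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    """One split-learning step: the client uploads smashed data (cut-layer
    activations), the server computes the loss and sends back the cut-layer
    gradient, and the client finishes backpropagation locally."""
    client_opt.zero_grad()
    server_opt.zero_grad()

    smashed = client_model(x)                              # client-side forward
    smashed_remote = smashed.detach().requires_grad_()     # "uploaded" to server

    logits = server_model(smashed_remote)                  # server-side forward
    loss = loss_fn(logits, y)
    loss.backward()                                        # server backprop to cut layer
    server_opt.step()

    smashed.backward(smashed_remote.grad)                  # gradient "returned" to client
    client_opt.step()
    return loss.item()

# Example with random data standing in for a client's private batch.
loss = split_training_step(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,)))
```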

 

LocFedMix-SL

CutMixSL

One of our key contributions is LocFedMix-SL, a scalable parallel split learning (SL) framework that integrates local parallelism, federated learning, and mixup augmentation. This approach removes bottlenecks in existing parallel SL algorithms, improving scalability, convergence speed, and latency over state-of-the-art parallel SL methods. We also introduce CutMixSL, a framework that tailors parallel SL to vision transformer (ViT) architectures. By exchanging a novel form of CutSmashed data, CutMixSL achieves better communication efficiency, data privacy, and overall performance than existing parallelized SL solutions.
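To make the CutMixSL idea concrete, the following is a minimal sketch of CutMix-style mixing applied to two clients' smashed data, here taken to be ViT patch-token embeddings at the cut layer. The tensor shapes, the uniform token mask, and the helper name cutmix_smashed are illustrative assumptions rather than the paper's exact recipe.

```python
# Hypothetical sketch: mix two clients' smashed data (ViT patch tokens) by
# swapping a random subset of tokens, and mix the labels proportionally.
import torch

def cutmix_smashed(smashed_a, smashed_b, label_a, label_b, ratio=0.5):
    """Mix two smashed-data tensors of shape (batch, num_tokens, dim) by
    replacing a random subset of client A's patch tokens with client B's,
    and mix the one-hot labels by the fraction of tokens kept from each."""
    num_tokens = smashed_a.size(1)
    num_from_b = int(round(ratio * num_tokens))
    idx = torch.randperm(num_tokens)[:num_from_b]   # token positions taken from client B

    mixed = smashed_a.clone()
    mixed[:, idx, :] = smashed_b[:, idx, :]

    lam = 1.0 - num_from_b / num_tokens             # fraction kept from client A
    mixed_label = lam * label_a + (1.0 - lam) * label_b
    return mixed, mixed_label

# Example: two clients' smashed batches (batch=8, 16 tokens, 192-dim embeddings).
a, b = torch.randn(8, 16, 192), torch.randn(8, 16, 192)
ya = torch.eye(10)[torch.randint(0, 10, (8,))]
yb = torch.eye(10)[torch.randint(0, 10, (8,))]
mixed, y_mixed = cutmix_smashed(a, b, ya, yb, ratio=0.25)
```

Because the mixing operates on already-smashed features rather than raw images, no client needs to reveal its raw data to produce the augmented sample, which is the intuition behind the communication and privacy benefits described above.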

 
