Paper ID | SS-15.3
Paper Title | Branchy-GNN: A Device-Edge Co-Inference Framework for Efficient Point Cloud Processing
Authors | Jiawei Shao, The Hong Kong Polytechnic University, Hong Kong SAR China; Haowei Zhang, The Hong Kong University of Science and Technology, Hong Kong SAR China; Yuyi Mao, Jun Zhang, The Hong Kong Polytechnic University, Hong Kong SAR China
Session | SS-15: Signal Processing for Collaborative Intelligence
Location | Gather.Town
Session Time | Friday, 11 June, 13:00 - 13:45
Presentation Time | Friday, 11 June, 13:00 - 13:45
Presentation | Poster
Topic | Special Sessions: Signal Processing for Collaborative Intelligence
Abstract |
Recent advances in three-dimensional (3D) data acquisition devices have spurred a new breed of applications that rely on point cloud data processing. However, processing a large volume of point cloud data imposes a significant workload on resource-constrained mobile devices, preventing these applications from reaching their full potential. Built upon the emerging paradigm of device-edge co-inference, where an edge device extracts an intermediate feature and transmits it to an edge server for further processing, we propose Branchy-GNN for efficient graph neural network (GNN)-based point cloud processing that leverages edge computing platforms. To reduce the on-device computational cost, Branchy-GNN adds branch networks for early exiting. It also employs learning-based joint source-channel coding (JSCC) to compress the intermediate feature and reduce the communication overhead. Our experimental results demonstrate that the proposed Branchy-GNN achieves a significant latency reduction compared with several benchmark methods.
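The two mechanisms named in the abstract, early-exit branch networks on the device and learned JSCC compression of the transmitted feature, can be illustrated with a short PyTorch sketch. All names, layer sizes, the point-wise MLP standing in for the actual graph convolutions, and the AWGN channel model below are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class JSCCEncoder(nn.Module):
    """Maps an intermediate feature to a low-dimensional channel symbol
    vector (learning-based joint source-channel coding), hypothetical design."""
    def __init__(self, in_dim: int, code_dim: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, code_dim)

    def forward(self, feature: torch.Tensor) -> torch.Tensor:
        code = self.fc(feature)
        # Normalize so the transmitted symbols satisfy a unit average-power
        # constraint before going over the noisy channel.
        return code / code.norm(dim=-1, keepdim=True).clamp_min(1e-8)


class ExitBranch(nn.Module):
    """Early-exit branch: device-side JSCC encoder, a simulated AWGN channel,
    and a lightweight classifier that would run on the edge server."""
    def __init__(self, in_dim: int, code_dim: int, num_classes: int):
        super().__init__()
        self.encoder = JSCCEncoder(in_dim, code_dim)
        self.classifier = nn.Linear(code_dim, num_classes)

    def forward(self, feature: torch.Tensor, snr_db: float = 10.0) -> torch.Tensor:
        code = self.encoder(feature)
        # Additive white Gaussian noise models the device-to-edge link.
        noise_power = 10.0 ** (-snr_db / 10.0)
        noisy_code = code + noise_power ** 0.5 * torch.randn_like(code)
        return self.classifier(noisy_code)


class BranchyGNNSketch(nn.Module):
    """Toy backbone with one exit branch per stage. A point-wise MLP stands
    in for the graph convolution layers of the real model."""
    def __init__(self, feat_dims=(64, 128, 256), num_classes=40, code_dim=32):
        super().__init__()
        dims = (3,) + tuple(feat_dims)
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
            for i in range(len(feat_dims))
        )
        self.branches = nn.ModuleList(
            ExitBranch(d, code_dim, num_classes) for d in feat_dims
        )

    def forward(self, points: torch.Tensor, exit_index: int = 0) -> torch.Tensor:
        # points: (batch, num_points, 3). exit_index controls how many
        # backbone stages run on the device before offloading to the edge.
        x = points
        for i in range(exit_index + 1):
            x = self.layers[i](x)
        pooled = x.max(dim=1).values  # global max-pooling over points
        return self.branches[exit_index](pooled)


if __name__ == "__main__":
    model = BranchyGNNSketch()
    dummy_points = torch.randn(2, 1024, 3)
    logits = model(dummy_points, exit_index=0)  # exit after the first stage
    print(logits.shape)  # torch.Size([2, 40])
```

In a deployment following this idea, the exit point and the JSCC code dimension would be chosen to trade off on-device computation, communication overhead, and accuracy; the specific selection policy here is left open, as the abstract does not state it.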