Paper ID | SS-15.2
Paper Title | ALLOCATING DNN LAYERS COMPUTATION BETWEEN FRONT-END DEVICES AND THE CLOUD SERVER FOR VIDEO BIG DATA PROCESSING
Authors | Peiyin Xing, Xiaofei Liu, Peixi Peng, Tiejun Huang, Yonghong Tian, Peking University, China
Session | SS-15: Signal Processing for Collaborative Intelligence
Location | Gather.Town
Session Time | Friday, 11 June, 13:00 - 13:45
Presentation Time | Friday, 11 June, 13:00 - 13:45
Presentation | Poster
Topic | Special Sessions: Signal Processing for Collaborative Intelligence
Abstract | With the development of intelligent hardware, front-end devices can now perform DNN computation as well. Moreover, a deep neural network can be divided into several layers, so part of the computation of a DNN model can be migrated to the front-end devices, alleviating the cloud burden and shortening the processing latency. This paper proposes an algorithm for allocating DNN computation between front-end devices and the cloud server. In brief, we divide the DNN layers dynamically according to the current and the predicted future status of the processing system, thereby obtaining a shorter end-to-end latency. Simulation results reveal that the overall latency is reduced by more than 70% compared with traditional cloud-centered processing.
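The abstract describes dynamic layer-level partitioning but gives no pseudocode. As a rough illustration of the general idea only, and not the paper's actual algorithm, the sketch below selects a split point k that minimizes the sum of estimated device-compute, uplink-transfer, and cloud-compute latency for a static snapshot of system status; all function names, parameters, and numbers are hypothetical.

```python
def choose_split(layer_flops, layer_out_bytes, input_bytes,
                 device_flops_per_s, cloud_flops_per_s, uplink_bytes_per_s):
    """Return the split index k minimizing estimated end-to-end latency
    when layers [0, k) run on the front-end device and [k, n) run in
    the cloud. All inputs are hypothetical profiling estimates."""
    n = len(layer_flops)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):
        # Time to run the first k layers on the front-end device.
        device_time = sum(layer_flops[:k]) / device_flops_per_s
        # Upload the intermediate feature map (the raw input when k == 0).
        payload = layer_out_bytes[k - 1] if k > 0 else input_bytes
        transfer_time = payload / uplink_bytes_per_s
        # Time to run the remaining layers on the cloud server.
        cloud_time = sum(layer_flops[k:]) / cloud_flops_per_s
        latency = device_time + transfer_time + cloud_time
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency


if __name__ == "__main__":
    # Hypothetical 4-layer network: per-layer FLOPs and output sizes (bytes).
    flops = [2e9, 4e9, 4e9, 1e9]
    out_bytes = [8e6, 2e6, 5e5, 4e3]
    k, lat = choose_split(flops, out_bytes, input_bytes=6e6,
                          device_flops_per_s=5e10,
                          cloud_flops_per_s=1e12,
                          uplink_bytes_per_s=1.25e6)
    print(f"split after layer {k}, estimated latency {lat:.3f} s")
```

Note that the paper's method additionally adapts the split using predicted future system status, whereas this sketch evaluates a single static snapshot.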