VaBUS: Edge-Cloud Real-time Video Analytics via Background Understanding and Subtraction

Hanling Wang, Qing Li, Heyang Sun, Zuozhou Chen, Yingqian Hao, Junkun Peng, Zhenhui Yuan, Junsheng Fu, Yong Jiang

Research output: Contribution to journal › Article › peer-review



Edge-cloud collaborative video analytics is transforming the way data is handled, processed, and transmitted from the ever-growing number of surveillance cameras around the world. To avoid wasting limited bandwidth on transmitting unrelated content, existing video analytics solutions usually perform temporal or spatial filtering to aggressively compress irrelevant pixels. However, most of them work in a context-agnostic way, oblivious to the circumstances in which the video content occurs and to the context-dependent characteristics underneath. In this work, we propose VaBUS, a real-time video analytics system that leverages the rich contextual information of surveillance cameras to reduce bandwidth consumption through semantic compression. As a task-oriented communication system, VaBUS dynamically maintains the background image of the video on the edge with minimal system overhead and sends only highly confident Regions of Interest (RoIs) to the cloud through adaptive weighting and encoding. With a lightweight experience-driven learning module, VaBUS achieves high offline inference accuracy even when network congestion occurs. Experimental results show that VaBUS reduces bandwidth consumption by 25.0%-76.9% while achieving 90.7% accuracy for both the object detection and human keypoint detection tasks.
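The background-maintenance and RoI-extraction pipeline sketched in the abstract can be illustrated roughly as follows. This is a minimal sketch under assumed simplifications: the exponential-moving-average background model, the fixed difference threshold, and all function names here are illustrative choices, not VaBUS's actual algorithm (which uses adaptive weighting and encoding).

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential moving average background model (a common lightweight choice)."""
    return (1 - alpha) * bg + alpha * frame

def extract_roi_mask(bg, frame, thresh=25):
    """Mark pixels that differ from the background beyond a threshold as foreground."""
    return np.abs(frame.astype(np.int16) - bg.astype(np.int16)) > thresh

def roi_bounding_box(mask):
    """Tight bounding box (y0, y1, x0, x1) around foreground pixels, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1

# Toy example: a static background with one bright "object" region.
bg = np.full((64, 64), 100, dtype=np.uint8)
frame = bg.copy()
frame[10:20, 30:40] = 200  # simulated foreground object

mask = extract_roi_mask(bg, frame)
print(roi_bounding_box(mask))  # (10, 20, 30, 40)
```

Only the cropped RoI (here a 10x10 patch rather than the full 64x64 frame) would be encoded and sent to the cloud, which is where the bandwidth savings come from.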

Original language: English
Pages (from-to): 90-106
Number of pages: 17
Journal: IEEE Journal on Selected Areas in Communications
Issue number: 1
Early online date: 16 Nov 2022
Publication status: Published - 1 Jan 2023


