DBGC: Dimension-Based Generic Convolution Block for Object Recognition

Chirag Patel, Dulari Bhatt, Urvashi Sharma, Radhika Patel, Sharnil Pandya, Kirit Modi, Nagaraj Cholli, Akash Patel, Urvi Bhatt, Muhammad Ahmed Khan, Shubhankar Majumdar, Mohd Zuhair, Khushi Patel, Syed Aziz Shah, Hemant Ghayvat*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

39 Citations (Scopus)

Abstract

Object recognition is widely used as a result of increasing CCTV surveillance and the need for automatic object or activity detection in images and video. The growing use of sensor networks has also raised the need for lightweight processing frameworks. Much research has been carried out in this area, but the scope remains colossal, as it deals with open-ended problems such as achieving high accuracy in little time with lightweight processing frameworks. Convolutional Neural Networks (CNNs) and their variants are widely used in computer vision tasks, but most CNN architectures are application-specific, so there is a continuing need for generic architectures with better performance. This paper introduces the Dimension-Based Generic Convolution Block (DBGC), which can be used with any CNN to make the architecture generic and to provide a dimension-wise selection of kernels over height, width, and depth. This single unit, which uses the separable-convolution concept, supports multiple combinations of dimension-based kernels: it can operate on height, width, or depth alone; on any pair of dimensions (height and width, width and depth, or depth and height); or on all three dimensions together. The main novelty of DBGC lies in the dimension selector block included in the proposed architecture. The proposed unoptimized kernel dimensions reduce FLOPs by around one third but also reduce accuracy by around one half; semi-optimized kernel dimensions yield almost the same or higher accuracy with half the FLOPs of the original architecture; and optimized kernel dimensions provide 5 to 6% higher accuracy with around a 10 M reduction in FLOPs.
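The abstract does not specify the exact layers inside the block, so the following PyTorch snippet is only a minimal, hypothetical sketch of the dimension-wise separable-convolution idea it describes, not the authors' implementation. The class name `DBGCSketch` and the `dims` selector argument are invented for illustration: a vertical (k×1) depthwise kernel stands in for the height dimension, a horizontal (1×k) depthwise kernel for width, and a 1×1 pointwise convolution for depth (channel mixing).

```python
# Hypothetical sketch of a dimension-selectable separable convolution block.
# This is NOT the published DBGC architecture; names are illustrative only.
import torch
import torch.nn as nn


class DBGCSketch(nn.Module):
    """Applies separable convolutions only along the selected dimensions.

    dims is any subset of {"height", "width", "depth"}:
      - "height": a (k, 1) depthwise convolution (vertical kernel)
      - "width":  a (1, k) depthwise convolution (horizontal kernel)
      - "depth":  a 1x1 pointwise convolution mixing channels
    """

    def __init__(self, channels: int, k: int = 3,
                 dims=("height", "width", "depth")):
        super().__init__()
        layers = []
        if "height" in dims:
            layers.append(nn.Conv2d(channels, channels, (k, 1),
                                    padding=(k // 2, 0),
                                    groups=channels, bias=False))
        if "width" in dims:
            layers.append(nn.Conv2d(channels, channels, (1, k),
                                    padding=(0, k // 2),
                                    groups=channels, bias=False))
        if "depth" in dims:
            layers.append(nn.Conv2d(channels, channels, 1, bias=False))
        self.body = nn.Sequential(*layers,
                                  nn.BatchNorm2d(channels),
                                  nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)


if __name__ == "__main__":
    x = torch.randn(1, 32, 56, 56)
    for dims in [("height",), ("height", "width"),
                 ("height", "width", "depth")]:
        y = DBGCSketch(32, dims=dims)(x)
        print(dims, tuple(y.shape))  # spatial size is preserved in each case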

Original language: English
Article number: 1780
Number of pages: 25
Journal: Sensors
Volume: 22
Issue number: 5
DOIs
Publication status: Published - 24 Feb 2022
Externally published: Yes

Keywords

  • CNN
  • DBGC
  • Dimension-based kernels
  • Separable convolution
