Automatic Detection and Classification of Audio Events for Road Surveillance Applications

Noor Almaadeed, Muhammad Asim, Somaya Al-maadeed, Ahmed Bouridane, Azeddine Beghdadi

Research output: Contribution to journal › Article › peer-review

52 Citations (Scopus)
11 Downloads (Pure)

Abstract

This work investigates the problem of detecting hazardous events on roads by designing an audio surveillance system that automatically detects perilous situations such as car crashes and tire skidding. In recent years, several visual surveillance systems have been proposed for road monitoring to detect accidents, with the aim of improving safety procedures in emergency cases. However, visual information alone cannot detect certain events such as car crashes and tire skidding, especially under adverse and visually cluttered weather conditions such as snowfall, rain, and fog. Consequently, incorporating microphones and audio event detectors based on audio processing can significantly enhance the detection accuracy of such surveillance systems. This paper proposes to combine time-domain, frequency-domain, and joint time-frequency features extracted from a class of quadratic time-frequency distributions (QTFDs) to detect events on roads through audio analysis and processing. Experiments were carried out using a publicly available dataset. The experimental results confirm the effectiveness of the proposed approach for detecting hazardous events on roads, as demonstrated by a 7% improvement in accuracy over methods that use individual temporal and spectral features.
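The feature-combination idea described in the abstract can be illustrated with a minimal sketch: compute time-domain, frequency-domain, and joint time-frequency descriptors for one audio frame and stack them into a single feature vector. This is an assumption-laden illustration, not the authors' implementation; in particular, the spectrogram entropy below is a simple stand-in for the paper's QTFD-derived features, and the frame length, sampling rate, and feature choices are hypothetical.

```python
import numpy as np
from scipy.signal import spectrogram

def extract_features(x, fs):
    """Illustrative combined feature vector for one audio frame.

    NOTE: a sketch of the general time/frequency/time-frequency
    combination strategy, not the features used in the paper.
    """
    # Time-domain: short-term energy and zero-crossing rate
    energy = np.mean(x ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2
    # Frequency-domain: spectral centroid of the magnitude spectrum
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    # Joint time-frequency: Shannon entropy of a normalized spectrogram
    # (a simple stand-in for QTFD-based descriptors)
    _, _, S = spectrogram(x, fs=fs, nperseg=256)
    P = S / (S.sum() + 1e-12)
    tf_entropy = -np.sum(P * np.log2(P + 1e-12))
    return np.array([energy, zcr, centroid, tf_entropy])

# Usage on a 1-second synthetic tone (hypothetical test signal)
fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
feats = extract_features(x, fs)
```

Such per-frame vectors would then feed a classifier trained to separate event classes (e.g. crash vs. tire skid vs. background traffic); combining all three feature families is what the paper credits for the reported accuracy gain.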
Original language: English
Article number: 1858
Number of pages: 19
Journal: Sensors
Volume: 18
Issue number: 6
DOIs
Publication status: Published - 6 Jun 2018

Keywords

  • event detection
  • visual surveillance
  • tire skidding
  • car crashes
  • hazardous events
