Main Title: Violence detection in Hollywood movies by the fusion of visual and mid-level audio cues
Author(s): Acar, Esra
Hopfgartner, Frank
Albayrak, Sahin
Type: Conference Object
Language Code: en
Abstract: Detecting violent scenes in movies is an important video content understanding functionality, e.g., for providing automated youth protection services. One key issue in designing algorithms for violence detection is the choice of discriminative features. In this paper, we employ mid-level audio features and compare their discriminative power against low-level audio and visual features. We fuse these mid-level audio cues with low-level visual ones at the decision level in order to further improve the performance of violence detection. We use Mel-Frequency Cepstral Coefficients (MFCC) as audio features and average motion as visual features. In order to learn a violence model, we choose two-class support vector machines (SVMs). Our experimental results on detecting violent video shots in Hollywood movies show that mid-level audio features are more discriminative and provide more precise results than low-level ones. The detection performance is further enhanced by fusing the mid-level audio cues with the low-level visual ones using SVM-based decision fusion.
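The decision-level fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-shot features are synthetic stand-ins (the paper uses MFCC-based mid-level audio representations and average-motion visual features), and all dimensions and data are hypothetical. One two-class SVM is trained per modality, and a second SVM then fuses their per-shot scores.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-shot features standing in for the paper's inputs:
# 13-dim MFCC-like audio vectors and 2-dim motion-like visual vectors.
n = 200
y = rng.integers(0, 2, n)                           # 1 = violent shot, 0 = non-violent
audio = y[:, None] + rng.normal(0, 0.8, (n, 13))    # audio modality (synthetic)
visual = y[:, None] + rng.normal(0, 0.8, (n, 2))    # visual modality (synthetic)

tr, te = slice(0, 150), slice(150, n)               # train / test split

# One two-class SVM per modality.
svm_a = SVC(probability=True).fit(audio[tr], y[tr])
svm_v = SVC(probability=True).fit(visual[tr], y[tr])

def fusion_scores(Xa, Xv):
    # Stack each modality's "violent" class probability into a 2-dim
    # score vector per shot -- the input to the fusion stage.
    return np.column_stack([svm_a.predict_proba(Xa)[:, 1],
                            svm_v.predict_proba(Xv)[:, 1]])

# Decision-level fusion: a second SVM learned on the modality scores.
fusion = SVC().fit(fusion_scores(audio[tr], visual[tr]), y[tr])
pred = fusion.predict(fusion_scores(audio[te], visual[te]))
acc = (pred == y[te]).mean()
print(f"fused accuracy on held-out shots: {acc:.2f}")
```

Fusing at the decision level (scores) rather than the feature level keeps each modality's classifier independent, which matches the setup the abstract describes.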
Issue Date: 2013
Date Available: 17-Apr-2018
DDC Class: 000 Computer science, information and general works
Subject(s): algorithms
mel-frequency cepstral coefficients
decision fusion
support vector machine
Proceedings Title: Proceedings of the 21st ACM international conference on Multimedia - MM ’13
Publisher: ACM
Publisher Place: New York, NY, USA
Volume: 2013
Publisher DOI: 10.1145/2502081.2502187
Page Start: 717
Page End: 720
ISBN: 978-1-4503-2404-5
Appears in Collections: FG Agententechnologien in betrieblichen Anwendungen und der Telekommunikation (AOT) » Publications

Files in This Item:
File: 2013_acar_etal.pdf (1.01 MB, Adobe PDF)

Items in DepositOnce are protected by copyright, with all rights reserved, unless otherwise indicated.