Unsupervised segmentation of focused regions in images with low depth of field

G. Rafiee, S. S. Dlay, W. L. Woo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution (peer-reviewed)

2 Citations (Scopus)

Abstract

Unsupervised extraction of focused regions from images with low depth-of-field (DOF) is a problem that still lacks an efficient solution. In this paper, we propose an efficient unsupervised segmentation solution for this problem. The proposed approach, which is based on ensemble clustering and graph-cut modelling, extracts meaningful focused regions from a given image in two stages. In the first stage, a novel two-level ensemble clustering technique is developed to classify image blocks into three constituent classes, from which object and background blocks are extracted. Certain pixels of the object and background blocks are then taken as seeds, providing a constraint for the next stage of the approach. In stage two, a minimum graph cut is computed with the max-flow method, using the object and background seeds. Experimental results demonstrate that the proposed approach achieves an average F-measure of 91.7% and is computationally up to 2 times faster than existing unsupervised approaches.
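The second stage described in the abstract, a seed-constrained minimum s-t cut computed with a max-flow method, can be illustrated on a toy grayscale image. The sketch below is an assumption-laden minimal version, not the paper's implementation: it uses simple intensity-difference n-link weights and hard seed t-links (the paper's actual energy terms and the ensemble-clustering first stage are not detailed in this record), and max-flow is computed with a plain Edmonds-Karp routine. All names are illustrative.

```python
import math
from collections import deque

EPS = 1e-12  # treat tiny residual capacities as zero

def max_flow_reachable(cap, source, sink):
    """Edmonds-Karp max-flow; returns the source side of the minimum cut."""
    flow = {u: {v: 0.0 for v in cap[u]} for u in cap}
    while True:
        parent = {source: None}          # BFS for an augmenting path
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > EPS:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:           # no augmenting path left
            break
        path, v = [], sink               # recover the path, find bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:                # augment along the path
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
    reachable, queue = {source}, deque([source])
    while queue:                         # residual reachability = cut side
        u = queue.popleft()
        for v in cap[u]:
            if v not in reachable and cap[u][v] - flow[u][v] > EPS:
                reachable.add(v)
                queue.append(v)
    return reachable

def segment(image, obj_seeds, bg_seeds, sigma=10.0):
    """Label each pixel True (focused object) or False (background)."""
    h, w = len(image), len(image[0])
    cap = {}
    def add_edge(u, v, c):
        cap.setdefault(u, {}).setdefault(v, 0.0)
        cap.setdefault(v, {}).setdefault(u, 0.0)  # reverse edge for residuals
        cap[u][v] += c
    for y in range(h):                   # n-links: 4-connected neighbours,
        for x in range(w):               # weighted by intensity similarity
            for ny, nx in ((y, x + 1), (y + 1, x)):
                if ny < h and nx < w:
                    d = image[y][x] - image[ny][nx]
                    wgt = math.exp(-d * d / (2 * sigma * sigma))
                    add_edge((y, x), (ny, nx), wgt)
                    add_edge((ny, nx), (y, x), wgt)
    BIG = 1e9                            # t-links: hard seed constraints
    for p in obj_seeds:
        add_edge('S', p, BIG)
    for p in bg_seeds:
        add_edge(p, 'T', BIG)
    src_side = max_flow_reachable(cap, 'S', 'T')
    return [[(y, x) in src_side for x in range(w)] for y in range(h)]
```

On a small image with a bright focused patch and one object and one background seed, the cut falls on the low-weight edges along the intensity boundary, so every pixel in the bright block is labelled as object even though only one of its pixels was seeded.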

Original language: English
Title of host publication: 2013 IEEE International Conference on Multimedia and Expo, ICME 2013
Publisher: IEEE
ISBN (Electronic): 9781479900152
DOIs
Publication status: Published - 26 Sep 2013
Event: 2013 IEEE International Conference on Multimedia and Expo, ICME 2013 - San Jose, CA, United States
Duration: 15 Jul 2013 – 19 Jul 2013

Publication series

Name: Proceedings of ICME
Publisher: IEEE
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X

Conference

Conference: 2013 IEEE International Conference on Multimedia and Expo, ICME 2013
Country: United States
City: San Jose, CA
Period: 15/07/13 – 19/07/13
