Main Title: BOiS—Berlin Object in Scene Database: Controlled Photographic Images for Visual Search Experiments with Quantified Contextual Priors
Author(s): Mohr, Johannes
Seyfarth, Julia
Lueschow, Andreas
Weber, Joachim E.
Wichmann, Felix A.
Obermayer, Klaus
Type: Article
Language Code: en
Abstract: Photographic stimuli are often used for studying human perception. To faithfully represent our natural viewing environment, these stimuli should be free of potential artifacts. If stimulus material for scientific experiments is generated from photographs that were created for a different purpose, such as advertisement or art, the scene layout and focal depth might not be typical of our visual world. For instance, in advertising photos, particular objects are often centered and in focus. In visual search experiments, this can lead to the so-called central viewing bias and an unwanted pre-segmentation of focused objects (Wichmann et al., 2010). The photographic process itself can also produce artifacts, such as optical, color, and geometric distortions, or introduce noise. Furthermore, some image compression methods introduce artifacts that may influence human viewing behavior. In some studies, objects are pasted into scenes using graphics editing. In this case, inconsistencies in color, shading, or lighting between the object and the local scene background could lead to deviations from natural viewing behavior. To meet the need for publicly available stimulus material in which these artifacts are avoided, we introduce in this paper the BOiS—Berlin Object in Scene database, which provides controlled photographic stimulus material for the assessment of human visual search behavior under natural conditions. The BOiS database comprises high-resolution photographs of 130 cluttered scenes. In each scene, one particular object was chosen as the search target. The scene was then photographed three times: with the target object at an expected location, at an unexpected location, or absent. Moreover, the database contains 240 different views of each target object in front of a black background. These images provide different visual cues of the target before the search is initiated.
All photos were taken under controlled conditions with respect to photographic parameters and layout and were corrected for optical distortions. The BOiS database allows investigation of the top-down influence of scene context by providing contextual prior maps for each scene that quantify people's expectations of finding the target object at a particular location. These maps were obtained by averaging the individual expectations of 10 subjects and can be used to model context effects on the search process. Last but not least, the database includes segmentation masks of each target object in the two corresponding scene images, as well as a list of semantic information on the target object, the scene, and the two chosen locations. Moreover, we provide bottom-up saliency measures and contextual prior values at the two target object locations. While originally aimed at visual search, our database can also provide stimuli for experiments on scene viewing and object recognition, or serve as a test environment for computer vision algorithms.
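The abstract states that the contextual prior maps were obtained by averaging the individual expectations of 10 subjects. The exact aggregation scheme is not specified here; the sketch below shows one plausible interpretation (per-pixel averaging of subject expectation maps, normalized so the result can serve as a spatial prior). All function and variable names are hypothetical, not part of the BOiS database API.

```python
import numpy as np

def contextual_prior_map(subject_maps):
    """Combine per-subject expectation maps into one contextual prior.

    subject_maps: iterable of 2-D arrays (one per subject), each encoding
    that subject's expectation of finding the target at each location.
    Returns a map of the same shape, normalized to sum to 1 so it can be
    interpreted as a spatial prior over target locations.
    """
    stacked = np.stack([np.asarray(m, dtype=float) for m in subject_maps])
    mean_map = stacked.mean(axis=0)  # average expectation across subjects
    total = mean_map.sum()
    # Normalize to a probability map (guard against an all-zero map).
    return mean_map / total if total > 0 else mean_map

# Illustrative use with synthetic 4x4 expectation maps from 10 "subjects".
rng = np.random.default_rng(0)
maps = [rng.random((4, 4)) for _ in range(10)]
prior = contextual_prior_map(maps)
```

A normalized prior like this could then be combined with bottom-up saliency values, as the database provides both measures at the two target object locations.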
Issue Date: 23-May-2016
Date Available: 6-Nov-2019
DDC Class: 150 Psychology
004 Data processing; computer science
Subject(s): photo database
visual search
contextual prior
realistic scenes
target object
Sponsor/Funder: BMBF, 01GQ0850, Bernstein Fokus Neurotechnologie - Nichtinvasive Neurotechnologie für Mensch-Maschine Interaktion
Journal Title: Frontiers in Psychology
Publisher: Frontiers Media S.A.
Publisher Place: Lausanne
Volume: 7
Article Number: 749
Publisher DOI: 10.3389/fpsyg.2016.00749
EISSN: 1664-1078
Appears in Collections:FG Neuronale Informationsverarbeitung » Publications

Files in This Item:
fpsyg-07-00749.pdf (3.58 MB, Adobe PDF)
fpsyg-07-00749-g0001.tif (3.7 MB, TIFF)
fpsyg-07-00749-g0002.tif (2 MB, TIFF)

This item is licensed under a Creative Commons License.