Main Title: Nonlinear spike-and-slab sparse coding for interpretable image encoding
Author(s): Shelton, Jacquelyn A.
Sheikh, Abdul-Saboor
Bornschein, Jörg
Sterne, Philip
Lücke, Jörg
Type: Article
Language Code: en
Abstract: Sparse coding is a popular approach to modelling natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros (e.g. for the absence of an image component such as an edge) as well as a distribution over non-zero pixel intensities. With the nonlinearity (a max combination rule), the model targets occlusions: dictionary elements correspond to image components that can occlude each other. The model assumptions of the linear and nonlinear approaches have major consequences, so the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model; as a main contribution we therefore design an exact Gibbs sampler for efficient inference, which we can apply to higher-dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively approximate and characterize the underlying generative process.
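The generative idea described in the abstract can be sketched in a few lines: a spike-and-slab prior draws exact zeros (spikes) or continuous intensities (slab) for each dictionary element, and pixels are formed by a max combination rather than a linear sum, so one component can occlude another. This is a minimal illustration under assumed choices (a random dictionary `W`, a Bernoulli-Gaussian prior with made-up parameters), not the authors' implementation or their Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 16, 4             # D pixels per patch, H dictionary elements (assumed sizes)
pi = 0.3                 # spike probability: chance a component is present
W = rng.random((H, D))   # hypothetical non-negative dictionary of components

# Spike-and-slab prior: binary spikes select active components,
# a Gaussian slab draws their non-zero intensities.
s = rng.random(H) < pi                # spikes -> exact zeros when inactive
z = rng.normal(1.0, 0.5, size=H)      # slab intensities
coeff = s * z                         # sparse coefficients with exact zeros

# Nonlinear max combination: each pixel takes the value of the single
# strongest component there, modelling occlusion instead of superposition.
image = np.max(coeff[:, None] * W, axis=0)
```

Under a linear model the last line would be a sum over components; replacing the sum with a max is what lets one dictionary element hide another at a given pixel.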
Issue Date: 2015
Date Available: 24-Apr-2017
DDC Class: 000 Computer science, information and general works
Subject(s): imaging techniques
coding mechanisms
Gaussian noise
in vivo imaging
learning curves
Journal Title: PLOS ONE
Publisher: Public Library of Science
Publisher Place: Lawrence, Kan.
Volume: 10
Issue: 5
Article Number: e0124088
Publisher DOI: 10.1371/journal.pone.0124088
Page Start: 1
Page End: 25
ISSN: 1932-6203
Appears in Collections: Inst. Softwaretechnik und Theoretische Informatik » Publications

Files in This Item:
File: 2015_sheikh_et-al.pdf (3.69 MB, Adobe PDF)

This item is licensed under a Creative Commons License.