Please use this identifier to cite or link to this item: http://dx.doi.org/10.14279/depositonce-5870
Main Title: Nonlinear spike-and-slab sparse coding for interpretable image encoding
Author(s): Shelton, Jacquelyn A.
Sheikh, Abdul-Saboor
Bornschein, Jörg
Sterne, Philip
Lücke, Jörg
Type: Article
Language Code: en
Is Part Of: http://dx.doi.org/10.14279/depositonce-5758
Abstract: Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With this prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, together with a distribution over non-zero pixel intensities. With the nonlinearity (a max combination rule), the model targets occlusions: dictionary elements correspond to image components that can occlude each other. The assumptions made by the linear and nonlinear approaches have major consequences, so the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model, so as a main contribution we design an exact Gibbs sampler for efficient inference, which we can apply to higher-dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components at any level of sparsity. This suggests that our model can adaptively and accurately approximate and characterize the meaningful generation process.
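The generative model described in the abstract can be illustrated with a minimal sketch: binary "spike" variables switch components on or off, Gaussian "slab" variables set their intensities, and active components combine through a pointwise max rather than a sum. All concrete values below (dictionary size, sparsity level, slab parameters, noise scale) are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

H, D = 5, 16    # number of latent components, number of pixels (assumed sizes)
pi = 0.3        # spike probability: P(a component is present in an image)
sigma = 0.1     # observation (pixel) noise standard deviation
W = rng.normal(size=(H, D))  # dictionary of component templates (random here)

def sample_image(W, pi, sigma, rng):
    """Draw one image from a spike-and-slab model with a max combination rule."""
    s = rng.random(W.shape[0]) < pi                       # spike: binary on/off
    z = rng.normal(loc=1.0, scale=0.5, size=W.shape[0])   # slab: intensities
    contrib = (s * z)[:, None] * W        # each component's pixel contribution
    clean = contrib.max(axis=0)           # nonlinear max: occluding components
    return clean + rng.normal(scale=sigma, size=W.shape[1])  # add pixel noise

x = sample_image(W, pi, sigma, rng)
```

Because inactive components contribute exactly zero (the spike), the prior yields truly sparse activations, and the max rule means that at each pixel only the frontmost component is visible, mimicking occlusion, in contrast to a linear superposition where overlapping components would blend additively.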
URI: http://depositonce.tu-berlin.de/handle/11303/6314
http://dx.doi.org/10.14279/depositonce-5870
Issue Date: 2015
Date Available: 24-Apr-2017
DDC Class: 000 Informatik, Informationswissenschaft, allgemeine Werke
Subject(s): imaging techniques
optimization
coding mechanisms
Gaussian noise
in vivo imaging
learning
learning curves
algorithms
Creative Commons License: https://creativecommons.org/licenses/by/4.0/
Journal Title: PLOS ONE
Publisher: Public Library of Science
Publisher Place: Lawrence, Kan.
Volume: 10
Issue: 5
Article Number: e0124088
Publisher DOI: 10.1371/journal.pone.0124088
Page Start: 1
Page End: 25
ISSN: 1932-6203
Appears in Collections: Technische Universität Berlin » Fakultäten & Zentralinstitute » Fakultät 4 Elektrotechnik und Informatik » Institut für Softwaretechnik und Theoretische Informatik » Publications

Files in This Item:
File: 2015_sheikh_et-al.pdf (3.69 MB, Adobe PDF)


Items in DepositOnce are protected by copyright, with all rights reserved, unless otherwise indicated.