Nonlinear spike-and-slab sparse coding for interpretable image encoding

dc.contributor.author: Shelton, Jacquelyn A.
dc.contributor.author: Sheikh, Abdul-Saboor
dc.contributor.author: Bornschein, Jörg
dc.contributor.author: Sterne, Philip
dc.contributor.author: Lücke, Jörg
dc.date.accessioned: 2017-04-24T13:01:54Z
dc.date.available: 2017-04-24T13:01:54Z
dc.date.issued: 2015
dc.description.abstract: Sparse coding is a popular approach to modelling natural images, but it has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements; in the probabilistic view of this problem, the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With this prior, our model can represent exact zeros (e.g., for the absence of an image component such as an edge) as well as a distribution over non-zero pixel intensities. With the nonlinearity (a max combination rule), we target occlusions: dictionary elements correspond to image components that can occlude each other. The model assumptions of the linear and nonlinear approaches have major consequences, so the main goal of this paper is to isolate and highlight the differences between them. Parameter optimization is analytically and computationally intractable in our model; as a main contribution we therefore design an exact Gibbs sampler for efficient inference, which we scale to higher-dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components at any level of sparsity. This suggests that our model can adaptively approximate and characterize the meaningful generation process.
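The two modelling choices described in the abstract can be illustrated with a short generative sketch. This is a hedged, minimal toy (dimensions, hyperparameter values, and the random dictionary are illustrative assumptions, not taken from the paper): coefficients are drawn from a spike-and-slab prior (a Bernoulli "spike" gating a Gaussian "slab"), and each pixel takes the strongest single contribution (the max combination rule, mimicking occlusion) rather than a linear sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and hyperparameters (assumptions, not the paper's)
H, D = 5, 16           # latent components, observed pixels
pi = 0.3               # spike probability: prior chance a component is active
mu, sigma = 2.0, 0.5   # slab: Gaussian over non-zero intensities
sigma_noise = 0.1      # observation (Gaussian) noise

W = rng.normal(size=(H, D))        # dictionary of components (random here)

# Spike-and-slab coefficients: exact zeros when the Bernoulli spike is off,
# a Gaussian-distributed intensity when it is on
b = rng.random(H) < pi             # binary activations (spike)
z = rng.normal(mu, sigma, size=H)  # continuous intensities (slab)
s = b * z                          # spike-and-slab coefficients

# Nonlinear max combination: each pixel is the single strongest
# contribution across components, mimicking occlusion
y_max = np.max(s[:, None] * W, axis=0) + rng.normal(0, sigma_noise, D)

# Standard linear superposition for comparison (classical sparse coding)
y_lin = s @ W + rng.normal(0, sigma_noise, D)
```

Under the linear model every active component contributes additively to every pixel it touches; under the max rule a foreground component simply replaces whatever is behind it, which is the occlusion structure the paper's model targets.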
dc.identifier.issn: 1932-6203
dc.identifier.uri: https://depositonce.tu-berlin.de/handle/11303/6314
dc.identifier.uri: http://dx.doi.org/10.14279/depositonce-5870
dc.language.iso: en
dc.relation.ispartof: http://dx.doi.org/10.14279/depositonce-5758
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject.ddc: 000 Computer science, information, general works
dc.subject.other: imaging techniques
dc.subject.other: optimization
dc.subject.other: coding mechanisms
dc.subject.other: Gaussian noise
dc.subject.other: in vivo imaging
dc.subject.other: learning
dc.subject.other: learning curves
dc.subject.other: algorithms
dc.title: Nonlinear spike-and-slab sparse coding for interpretable image encoding
dc.type: Article
dc.type.version: publishedVersion
dcterms.bibliographicCitation.articlenumber: e0124088
dcterms.bibliographicCitation.doi: 10.1371/journal.pone.0124088
dcterms.bibliographicCitation.issue: 5
dcterms.bibliographicCitation.journaltitle: PLOS ONE
dcterms.bibliographicCitation.originalpublishername: Public Library of Science
dcterms.bibliographicCitation.originalpublisherplace: Lawrence, Kan.
dcterms.bibliographicCitation.pageend: 25
dcterms.bibliographicCitation.pagestart: 1
dcterms.bibliographicCitation.volume: 10
tub.accessrights.dnb: free
tub.affiliation: Fak. 4 Elektrotechnik und Informatik::Inst. Softwaretechnik und Theoretische Informatik
tub.affiliation.faculty: Fak. 4 Elektrotechnik und Informatik
tub.affiliation.institute: Inst. Softwaretechnik und Theoretische Informatik
tub.publisher.universityorinstitution: Technische Universität Berlin

Files

Original bundle
Name: 2015_sheikh_et-al.pdf
Size: 3.61 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 5.75 KB
Format: Item-specific license agreed upon at submission