Please use this identifier to cite or link to this item: http://dx.doi.org/10.14279/depositonce-9715
Full metadata record
DC Field | Value | Language
dc.contributor.author | Stanev, Madeleine | -
dc.contributor.author | Redlich, Johannes | -
dc.contributor.author | Knörzer, Christian | -
dc.contributor.author | Rosenfeld, Ninett | -
dc.contributor.author | Lykartsis, Athanasios | -
dc.date.accessioned | 2020-02-24T17:11:29Z | -
dc.date.available | 2020-02-24T17:11:29Z | -
dc.date.issued | 2016 | -
dc.identifier.issn | 2333-2042 | -
dc.identifier.uri | https://depositonce.tu-berlin.de/handle/11303/10820 | -
dc.identifier.uri | http://dx.doi.org/10.14279/depositonce-9715 | -
dc.description.abstract | Rhythm is one of the basic acoustic components of speech and singing. It is therefore interesting to investigate subjects' ability to distinguish speech from singing when rhythm is the only remaining acoustic cue. For this study we developed a method to eliminate all linguistic components except rhythm from speech and singing signals. The study was conducted online, and participants could listen to the stimuli via loudspeakers or headphones. The analysis of the survey shows that people can significantly discriminate between speech and singing after the signals have been altered. Furthermore, our results reveal specific features that supported participants in their decision, such as differences in regularity and tempo between singing and speech samples. The hypothesis that musically trained people perform more successfully on the task was not confirmed. The results of the study are important for understanding the structure of and differences between speech and singing, for use in further studies, and for future applications in the field of speech recognition. | en
dc.language.iso | en | en
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en
dc.subject.ddc | 780 Music | de
dc.subject.ddc | 620 Engineering and allied operations | de
dc.subject.other | speech-music discrimination | en
dc.subject.other | speech perception | en
dc.subject.other | speech rhythm | en
dc.subject.other | computational paralinguistics | en
dc.title | Speech and music discrimination: Human detection of differences between music and speech based on rhythm | en
dc.type | Conference Object | en
dc.relation.issupplementedby | 10.14279/depositonce-9530 | -
tub.accessrights.dnb | free | en
tub.publisher.universityorinstitution | Technische Universität Berlin | en
dc.type.version | updatedVersion | en
dcterms.bibliographicCitation.doi | 10.21437/SpeechProsody.2016-46 | en
dcterms.bibliographicCitation.editor | Barnes, Jon | -
dcterms.bibliographicCitation.editor | Brugos, Alejna | -
dcterms.bibliographicCitation.editor | Shattuck-Hufnagel, Stefanie | -
dcterms.bibliographicCitation.editor | Veilleux, Nanette | -
dcterms.bibliographicCitation.proceedingstitle | Proc. Speech Prosody 2016, 31 May - 3 Jun 2016, Boston, USA | en
dcterms.bibliographicCitation.originalpublisherplace | [s.l.] | en
dcterms.bibliographicCitation.pagestart | 222 | en
dcterms.bibliographicCitation.pageend | 226 | en
dcterms.bibliographicCitation.originalpublishername | International Speech Communication Association | en
Appears in Collections: FG Audiokommunikation » Publications

Files in This Item:
stanev_etal_2016.pdf

Accepted manuscript

Format: Adobe PDF | Size: 856.23 kB


Items in DepositOnce are protected by copyright, with all rights reserved, unless otherwise indicated.