Show simple item record

dc.contributor.author  Vestman, Ville
dc.contributor.author  Gowda, Dhananjaya
dc.contributor.author  Sahidullah, Md
dc.contributor.author  Alku, Paavo
dc.contributor.author  Kinnunen, Tomi
dc.date.accessioned  2018-03-21T13:44:22Z
dc.date.available  2018-03-21T13:44:22Z
dc.date.issued  2018
dc.identifier.uri  https://erepo.uef.fi/handle/123456789/6199
dc.description.abstract  Of the available biometric technologies, automatic speaker recognition is among the most convenient and accessible, owing to the abundance of mobile devices equipped with a microphone, which allows users to be authenticated across multiple environments and devices. Speaker recognition also finds use in forensics and surveillance. Because acoustic mismatch across the environments and devices of the same speaker increases the number of identification errors, much of the research focuses on compensating for such technology-induced variations, especially using machine learning at the statistical back-end. Another much less studied, but at least as detrimental, source of acoustic variation arises from mismatched speaking styles induced by the speaker, which leads to a substantial drop in recognition accuracy. This is a major problem especially in forensics, where perpetrators may purposefully disguise their identity by varying their speaking style. We focus on one of the most commonly used ways of disguising one's speaker identity, namely whispering. We approach the problem of normal-whisper acoustic mismatch compensation from the viewpoint of robust feature extraction. Since whispered speech is intelligible yet low in intensity, and therefore prone to extrinsic distortions, we take advantage of robust, long-term speech analysis methods that exploit the slow articulatory movements in speech production. Specifically, we address the problem with a novel method, frequency-domain linear prediction with time-varying linear prediction (FDLP-TVLP), an extension of the 2-dimensional autoregressive (2DAR) model that allows the vocal tract filter parameters to be time-varying, rather than piecewise constant as in classic short-term speech analysis.
Our speaker recognition experiments on the whisper subset of the CHAINS corpus indicate that, when tested in normal-whisper mismatched conditions, the proposed FDLP-TVLP features improve speaker recognition performance by 7–10% in relative terms over standard MFCC features. We further observe that the proposed FDLP-TVLP features perform better than the FDLP and 2DAR methods for whispered speech.
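The core idea behind TVLP as described in the abstract, predictor coefficients that vary smoothly over time instead of being held piecewise constant per frame, can be sketched as a basis-expanded least-squares fit. The sketch below is illustrative only, not the paper's implementation: the function name, the polynomial time basis, and all parameter choices are assumptions made for the example.

```python
import numpy as np

def tvlp_fit(signal, order=2, basis_order=2):
    """Fit a time-varying AR model s[n] = -sum_k a_k(n) s[n-k] + e[n],
    where each coefficient is expanded on a polynomial basis in time:
    a_k(n) = sum_i c[k, i] * (n / N)**i.  (Illustrative sketch only.)"""
    N = len(signal)
    t = np.arange(N) / N  # normalized time for the polynomial basis
    rows, targets = [], []
    for n in range(order, N):
        # one regressor per (lag k, basis power i) pair
        rows.append([signal[n - k] * t[n] ** i
                     for k in range(1, order + 1)
                     for i in range(basis_order + 1)])
        targets.append(signal[n])
    A, y = np.array(rows), np.array(targets)
    # least squares: minimize ||y + A c||, i.e. the prediction error energy
    c, *_ = np.linalg.lstsq(A, -y, rcond=None)
    return c.reshape(order, basis_order + 1)
```

For a stationary test signal the fitted coefficient trajectories should come out roughly constant (the higher-order basis terms near zero); for genuinely time-varying speech, the polynomial terms let the vocal tract filter drift within the analysis window, which is the property the abstract contrasts with classic short-term analysis.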
dc.language.iso  EN
dc.publisher  Elsevier BV
dc.relation.ispartofseries  Speech Communication
dc.relation.uri  http://dx.doi.org/10.1016/j.specom.2018.02.009
dc.rights  CC BY-NC-ND https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject  speaker recognition
dc.subject  speaking style mismatch
dc.subject  disguise
dc.subject  whisper
dc.subject  2-dimensional autoregression (2D-AR)
dc.subject  time-varying linear prediction (TVLP)
dc.title  Speaker recognition from whispered speech: a tutorial survey and an application of time-varying linear prediction
dc.description.version  final draft
dc.contributor.department  School of Computing, activities
uef.solecris.id  53271147
dc.type.publication  Scientific journal articles [Tieteelliset aikakauslehtiartikkelit]
dc.rights.accessrights  © Elsevier B.V.
dc.relation.doi  10.1016/j.specom.2018.02.009
dc.description.reviewstatus  peerReviewed
dc.format.pagerange  62-79
dc.relation.issn  0167-6393
dc.relation.volume  99
dc.rights.accesslevel  openAccess
dc.type.okm  A1
uef.solecris.openaccess  No [Ei]

