2. Methods – Short Term
• 2.1 Actors and Stimuli
  o Participants: 43 mothers (24 with some music experience, 6 with some acting experience)
  o Scripts: 8 conversational sentences in English:
    1. "Where is the checkbook? It's gone, I can't find it."
    …
    8. "I'm fixing dinner. It will take an hour."
  o Emotions: Anger, Happiness, Sadness, Neutral (no expression)
  o Subjects were recorded speaking each script while vocally portraying each unique emotion.
• 8 scripts × 4 portrayals = 32 stimuli per subject
• 2.2 Data Analysis
  o All analyses conducted in MATLAB
  o Pitch (F0) contours and Harmonics-to-Noise Ratio (HNR) calculated through PRAAT.
  o Stimuli normalized to 10 seconds in length.
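The F0 contours above were computed with PRAAT from within MATLAB. As a rough illustration of the underlying idea only, the Python sketch below estimates the F0 of a single voiced frame by autocorrelation; all names here are hypothetical, and PRAAT's actual pitch tracker adds windowing, interpolation, voicing decisions, and octave-cost heuristics.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=500.0):
    """Crude F0 estimate for one voiced frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_lo = int(fs / fmax)            # shortest glottal period considered
    lag_hi = int(fs / fmin)            # longest glottal period considered
    lag = lag_lo + np.argmax(ac[lag_lo:lag_hi])
    return fs / lag

# Synthetic 40 ms frame: a 220 Hz tone standing in for voiced speech
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
tone = np.sin(2 * np.pi * 220 * t)
f0 = estimate_f0(tone, fs)             # close to 220 Hz
```

Running the estimator frame-by-frame over a recording would yield an F0 contour comparable in spirit, though not in robustness, to the PRAAT output used in the study.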
1. Introduction
Parental vocalized emotions alert children to stop or change their behavior [1]. Excessive exposure specifically to anger can alter children's neurophysiology and increases the risk for emotional problems [2].
Acoustic Properties of Maternal Emotional Speech
Peter M. Moriarty, Michelle C. Vigeant
The Graduate Program in Acoustics, The Pennsylvania State University, University Park, PA
5. Conclusions and Future Work
• 5.1 Acoustical Analysis
  o Overall trends in the LLDs support the feasibility of discriminating emotion based on measured data.
  o Differences between scripts indicate an effect of the language content.
• 5.2 Future Work
  o Include a calibrated measure of loudness from the stimuli recordings.
  o Perform utterance-level analysis, in addition to script-level analysis.
  o Perform a Tukey means comparison and a repeated-measures ANOVA test to determine which LLDs differ across emotion.
  o Run an experiment to subjectively evaluate the quality of the emotion, and test the subjective effects of the language content.
3. PLAYBACK & SCAN
4. BRAIN ACTIVITY
Long-Term Goals:
• Create a corpus of emotional speech recordings.
• Examine how emotional speech from mothers is processed by children at the neurological level.
• Compare acoustics and the child's brain activity.
1. RECORD STIMULI
2. ACOUSTIC ANALYSIS
Short-Term Goals:
• Record mothers speaking with different acted emotions
• Determine salient speech patterns modulated by emotion
Custom MATLAB Script
Linux Commands
PRAAT Calculations
6. Acknowledgements
Thank you to Martin Lawless and Matthew Neal for assistance in analysis. Work funded by the National Institutes of Health (NIH) Grant #1R21MH104547-01.
5. SUBJECTIVE RATINGS
How angry does this sound?
q 1 – Not angry
q 2 – Less angry
q 3 – More angry
q 4 – Very angry
Figure 3. Waveforms (yellow) and F0 contours (green) superimposed over spectrograms of the same script spoken with four different emotions. Note the difference in waveform amplitude and F0 contour range.
Waveform
F0 contour
Figure 1. Image from [3].
Figure 2. Diagram of the workflow of the study. Top image from [4]. Bottom right images from [5], [6].
7. References
[1] Repacholi, B. M., & Meltzoff, A. N., Child Development, vol. 78, pp. 508-521 (2007).
[2] Shackman, J. E., Shackman, A. J., & Pollak, S. D., Emotion, vol. 7, pp. 838-852 (2007).
[3] http://longbeachchildcustodyattorney.com/considerations-for-child-custody/
[4] http://www.amberusa.com/img/equipment-mri/siemens-magnetom-aera-1-5t-full.png
[5] mathworks.com
[6] praat.org
[7] Schuller, B., et al., Speech Communication, vol. 53, no. 1, pp. 1062-1087 (2011).
[8] Banse, R., & Scherer, K., J. of Personality and Social Psychology, vol. 70, no. 3, p. 617 (1996).
[9] Juslin, P. N., & Laukka, P., Emotion, vol. 1, no. 4, pp. 381-412 (2001).
[10] Scherer, K. R., et al., Computer Speech and Language, vol. 29, no. 1, pp. 218-235 (2015).
4. Results – Summary
• 4.2 Differences in LLDs across emotion
  o Trends in F0 Mean and STD agree with literature [7], [8].
  o Jitter, Syllabic Rate, and HNR agree with [10].
  o Hammarberg agrees with [8], HF500 with [9].
  o 1-way ANOVA test grouped by emotion significant (p < .001) for all LLDs.
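The 1-way ANOVA grouped by emotion computes, for each LLD, the ratio of between-group to within-group variance. A minimal Python sketch of the F statistic on toy numbers, not the MATLAB analysis actually used in the study:

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a 1-way ANOVA (e.g. one LLD grouped by emotion)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    grand = np.concatenate(groups).mean()
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of samples around their own group mean
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Toy numbers: one LLD measured under three "emotions"
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])  # F = 13.0
```

A large F (relative to the F distribution with those degrees of freedom) is what yields the p < .001 result reported above.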
Figure 4. This is a plot of the mean of each LLD taken over subject, normalized to zero mean and unit variance. Bars are grouped by emotion, labeled on the horizontal axis. Differences between values within an emotion category suggest an effect caused by the language content of the script.
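The zero-mean, unit-variance normalization in Figure 4 is a standard per-feature z-score. A minimal sketch with hypothetical LLD values (the array contents are made up for illustration):

```python
import numpy as np

# Hypothetical LLD means: 43 subjects (rows) x 3 LLDs (columns)
rng = np.random.default_rng(0)
lld = rng.normal(loc=[120.0, 0.002, 12.0], scale=[30.0, 0.001, 4.0], size=(43, 3))

# z-score each LLD column: zero mean, unit variance across subjects
z = (lld - lld.mean(axis=0)) / lld.std(axis=0)
```

Normalizing each LLD separately puts descriptors with very different units (Hz, s, dB) on a common scale so they can share one plot.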
Figure 5. This is a plot of the mean values of each LLD taken over script and subject.
4. Results – Script Comparison
• 4.1 Low-Level Descriptors (LLDs) Considered
  o F0: pitch, or glottal pulse rate of voiced sections [Hz]
  o Jitter: average difference in glottal periods [s]
  o Shimmer: average short-time difference in intensity [dB]
  o HNR: ratio of periodic energy to energy of noise [dB]
  o Syllabic Rate: syllables per second [S/s]
  o Hammarberg Index: peak energy (0-2 kHz) / (2 kHz-5 kHz) [dB]
  o HF500: (energy > 500 Hz) / (energy < 500 Hz)
  o Spectral COG: weighted mean frequency of the spectrum [Hz]
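Two of the spectral LLDs above, Spectral COG and HF500, reduce to simple ratios over the power spectrum. A minimal Python sketch on a synthetic two-tone signal (the signal and values are illustrative, not from the study's data):

```python
import numpy as np

# Synthetic 1 s signal: equal-amplitude tones at 200 Hz and 1000 Hz
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 1000 * t)

power = np.abs(np.fft.rfft(x)) ** 2            # power spectrum
freqs = np.fft.rfftfreq(len(x), 1 / fs)        # bin frequencies [Hz]

# Spectral COG: power-weighted mean frequency of the spectrum
cog = np.sum(freqs * power) / np.sum(power)    # ~600 Hz for these two tones

# HF500: energy above 500 Hz over energy below 500 Hz
hf500 = power[freqs > 500].sum() / power[freqs < 500].sum()  # ~1 here
```

Angry or high-arousal speech typically shifts energy upward in frequency, which is why both measures are candidate discriminators of emotion.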