Vocal expressions of emotions follow simple rules to encode the inner state of the caller into acoustic parameters, not just within species but also in cross-species communication. Humans use these structural rules to attribute emotions to dog vocalizations, especially to barks, which match their contexts. In contrast, humans were found to be unable to differentiate between playful and threatening growls, probably because the aggression level of single growls was assessed based on acoustic size cues. To resolve this contradiction, we played back natural growl bouts from three social contexts (food guarding, threatening and playing) to humans, who had to rate the emotional load and guess the context of the playbacks. Listeners attributed emotions to growls according to their social contexts. Within the threatening and playful contexts, bouts with shorter, slower-pulsing growls conveying a smaller apparent body size were rated as less aggressive and fearful, but more playful and happy. Participants associated the correct contexts with the growls above chance, and women and participants experienced with dogs scored higher in this task. Our results indicate that dogs may communicate their size and inner state honestly in a serious contest situation, but manipulatively in the more uncertain defensive and playful contexts.

During social interactions, both humans and non-human animals use various communicative signals to express their inner states. The way in which emotions are reflected in the acoustic structure of calls is best described by the source-filter framework (for a detailed review see ). In short, the specific changes in the brain due to emotional states can affect the neural control over the muscle movements involved in voice production in the larynx and the vocal tract, and these changes modify certain acoustic parameters of the produced calls. On the one hand, these parameters can be source-related, when the respiration or phonation system is affected, causing changes in the amplitude, the call duration and the fundamental frequency. On the other hand, they can be filter-related, due to modification of the length or shape of the vocal tract affecting the spectral energy distribution of the sound and creating, for example, formant frequencies. The position and distribution of these prominent frequency bands across the spectrum depend mainly on the length of the vocal tract, so the so-called formant dispersion acts as an important indexical cue in communication.

A growing body of evidence suggests that in humans, specific brain regions are involved in processing these emotion-expressing vocalizations, and that these differ from the regions responsible for speech perception. One study found that the same brain centres are responsible for processing both animal (cat and rhesus macaque) and human non-verbal vocalizations with a negative valence. Moreover, our recent fMRI study showed that in dogs and humans, similar brain regions are involved in processing the emotional load of non-verbal vocal expressions, suggesting that the neurological process of extracting emotional information from the acoustic structure of calls is shared among mammals. Based on this, we can assume that acoustic emotion recognition can work not only within species but also in interspecific communication. Indeed, numerous studies have found examples of adequate reactions to heterospecific alarm calls (e.g. in ground squirrels, mongooses, sifakas and lemurs) or distress vocalizations. Humans are also able to use these acoustic features to assess the inner state and decipher the contexts of non-human vocalizations (e.g. calls of macaques, pigs and dogs). Some results suggest that this recognition is affected by the individual's level of experience with the vocalizing species.
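The acoustic size cue discussed here rests on a well-known physical relationship: in a uniform-tube approximation of the vocal tract, the mean spacing between adjacent formants (formant dispersion) is inversely proportional to vocal-tract length. A minimal sketch of that calculation, with purely illustrative formant values (not measurements from the study):

```python
# Hedged sketch: estimating apparent vocal-tract length (VTL) from
# formant dispersion under the uniform-tube approximation,
# VTL = c / (2 * dF), where dF is the mean spacing between adjacent
# formants and c is the speed of sound. All formant values below are
# made up for illustration.

SPEED_OF_SOUND = 350.0  # m/s, approximate value in a warm vocal tract


def formant_dispersion(formants_hz):
    """Mean spacing (Hz) between adjacent formant frequencies."""
    spacings = [f2 - f1 for f1, f2 in zip(formants_hz, formants_hz[1:])]
    return sum(spacings) / len(spacings)


def apparent_vtl_m(formants_hz, c=SPEED_OF_SOUND):
    """Apparent vocal-tract length in metres implied by the formants."""
    return c / (2.0 * formant_dispersion(formants_hz))


# Widely spaced formants imply a short vocal tract, and thus a smaller
# apparent body size, than narrowly spaced formants.
wide_spacing = [700.0, 2200.0, 3700.0]   # dF = 1500 Hz
narrow_spacing = [400.0, 1300.0, 2200.0]  # dF = 900 Hz

print(apparent_vtl_m(wide_spacing))    # shorter apparent vocal tract
print(apparent_vtl_m(narrow_spacing))  # longer apparent vocal tract
```

This is why a growl with widely spaced formants sounds like it comes from a smaller animal: the listener implicitly inverts this relationship when judging caller size.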