AES Show 2024 NY has ended
Tuesday October 8, 2024 11:30am - 11:50am EDT
This study aims to deepen the understanding of how Head Tracked Spatial Audio technology influences both emotional responses and immersion levels among listeners. Using facial micro-expression recognition technology, it quantifies the depth of immersion and the intensity of emotional responses elicited by various types of binaural content, measuring the categories Neutral, Happy, Sad, Angry, Surprised, Scared, Disgusted, Contempt, Valence, and Arousal.

Subjects were presented with a randomized set of audio stimuli consisting of stereo music, stereo speech, and 5.1 movie content. Each audio piece lasted 15 seconds, and the Spatial Audio processing was toggled on or off randomly throughout the experiment while FaceReader software continuously detected the subjects' facial micro-expressions. Statistical analysis was conducted in R, applying Granger causality tests on the time series, t-tests, and the p-value criterion for hypothesis validation. After consolidating the records of 78 participants, the final database comprised 212,862 unique data points.

With 95% confidence, the average level of "Arousal" was found to be significantly higher when Head Tracked Spatial Audio is activated than when it is deactivated, suggesting that HT technology increases the emotional arousal of audio listeners. Regarding the happiness reaction, the highest levels were recorded in mode 5 (HT on, Voice), with an average of 0.038, while the lowest levels were detected in mode 6 (HT off, Voice). Preliminary conclusions indicate that surprise Granger-causes a decrease in neutrality, supporting the dynamic interaction between these emotional variables.
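The headline comparison above (mean Arousal with head tracking on vs. off, tested at the 95% confidence level) can be sketched as a two-sample t-test. This is an illustrative sketch only: the study used R and per-frame FaceReader scores from 78 participants, while the values below are synthetic and the group means are invented for demonstration.

```python
import numpy as np
from scipy.stats import ttest_ind

# Synthetic stand-in data: the real study used FaceReader's per-frame
# Arousal scores; these normal draws and their means are assumptions.
rng = np.random.default_rng(42)
arousal_ht_on = rng.normal(loc=0.30, scale=0.10, size=200)   # HT activated
arousal_ht_off = rng.normal(loc=0.25, scale=0.10, size=200)  # HT deactivated

# Two-sample t-test; significance threshold matches 95% confidence (alpha = 0.05)
t_stat, p_value = ttest_ind(arousal_ht_on, arousal_ht_off)
ht_raises_arousal = (p_value < 0.05) and (arousal_ht_on.mean() > arousal_ht_off.mean())
```

Note that a plain independent-samples t-test ignores the time-series structure of per-frame measurements; the study's additional Granger causality tests address temporal dependence between emotion variables, which this sketch does not attempt.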
Moderators
Agnieszka Roginska

Professor, New York University
Agnieszka Roginska is a Professor of Music Technology at New York University. She conducts research in the simulation and applications of immersive and 3D audio including the capture, analysis and synthesis of auditory environments, auditory displays and applications in augmented...
1E03