AES Show 2024 NY has ended
Exhibits+ badges provide access to the ADAM Audio Immersive Room, the Genelec Immersive Room, Tech Tours, and the presentations on the Main Stage.

All Access badges provide access to all content in the Program (Tech Tours still require registration).

Wednesday, October 9
 

9:30am EDT

Acoustic modeling and designing of emergency sound systems in road tunnels
Wednesday October 9, 2024 9:30am - 9:50am EDT
Road tunnels are subject to strong regulations and high life-safety standards. These requirements notably include the speech intelligibility of the emergency sound system. However, a road tunnel is an acoustically challenging environment due to extreme reverberation, high noise floors, and its tube-like geometry. Designing an adequate sound system is a challenge and requires extensive acoustic simulation. This article summarizes recent design work on several major road tunnel projects and gives practical guidelines for the successful completion of similar projects. The project includes several tunnels, each several kilometers long with one to five lanes, transitions, and sections, for a total length of 33 km. For each tunnel, a working acoustic model first had to be developed before the sound system itself could be designed and optimized. On-site measurements were conducted to establish data for background noise, including jet fans and various traffic situations. Critical environmental parameters were measured and reverberation times were recorded using large balloon bursts. Sprayed concrete, the road surface, and other finishes were modeled or estimated based on publicly available data for absorption and scattering characteristics. To establish the geometrical model, each tunnel was subdivided into manageable segments of roughly 1-2 km in length based on theoretical considerations. After calibrating the model, the sound system was designed as a large number of loudspeaker lines evenly distributed along the tunnel. Level and delay alignments as well as filter adjustments were applied to achieve the required average STI of 0.48. Validation by measurement showed good correlation with the modeling results.
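To illustrate the kind of delay alignment the abstract mentions, the sketch below time-aligns evenly spaced loudspeaker lines within one tunnel segment. The spacing, segment length, and alignment strategy are hypothetical placeholders, not the values used in the projects described.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 degrees C

def line_delays_ms(segment_length_m: float, spacing_m: float) -> np.ndarray:
    """Delays (ms) that time-align evenly spaced loudspeaker lines to the
    segment entrance, so successive lines reinforce speech rather than smear it."""
    positions_m = np.arange(0.0, segment_length_m, spacing_m)
    return 1e3 * positions_m / SPEED_OF_SOUND

# Example: a 1.5 km segment with a hypothetical 50 m line spacing
print(np.round(line_delays_ms(1500.0, 50.0)[:5], 1))  # first five delays in ms
```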
Moderators
Ying-Ying Zhang

PhD Candidate, McGill University
Ying-Ying Zhang is a music technology researcher and sound engineer. She is currently a PhD candidate at McGill University in the Sound Recording program, where her research focuses on musician-centered virtual acoustic applications in recording environments. She received her Masters...
Speakers
Stefan Feistel

Managing Director/Partner/Co-Founder, AFMG
Stefan Feistel is Managing Director/Partner/Co-Founder of AFMG, Berlin, Germany. He is an expert in acoustical simulation and calculation techniques and applications.
Authors
Stefan Feistel

Managing Director/Partner/Co-Founder, AFMG
Stefan Feistel is Managing Director/Partner/Co-Founder of AFMG, Berlin, Germany. He is an expert in acoustical simulation and calculation techniques and applications.
Tim Kuschel

Acoustic Consultant, GUZ BOX design + audio
Experienced acoustic consultant with a demonstrated history of working in the architecture & planning industry. Skilled in architectural documentation, audio system design, and acoustics, with extensive experience using AFMG's acoustic modelling software EASE, Tim provides professional...
Wednesday October 9, 2024 9:30am - 9:50am EDT
1E03

9:50am EDT

Sound immission modeling of open-air sound systems
Wednesday October 9, 2024 9:50am - 10:10am EDT
Noise emission into the neighborhood is often a major concern when designing the configuration of an open-air sound system. In order for events to be approved, advance studies have to show that expected immission levels comply with given regulations and requirements of local authorities. For this purpose, certified engineering offices use dedicated software tools for modeling environmental noise propagation. However, predicting the radiation of modern sound systems is different from classical noise sources, such as trains or industrial plants. Sound systems and their directional patterns are modeled in electro-acoustic simulation software that can be fairly precise but that typically does not address environmental issues.
This paper proposes to use a simple data exchange format that can act as an open interface between sound system modeling tools and noise immission software. It is shown that most immission studies are conducted at points in the far field of the sound system. Far-field directivity data for the sound system is therefore a suitable solution if it is accompanied by a corresponding absolute level reference. The proposed approach is not only accurate for the given application but also involves low computational cost and is fully compliant with the existing framework of outdoor noise modeling standards. Concerns related to documentation and to the protection of proprietary signal processing settings are resolved as well. The proposed approach was validated by measurements at a number of outdoor concerts. Results are shown to be accurate in practice within the given limits of uncertainty.
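As a back-of-the-envelope illustration of the far-field exchange proposed here, the sketch below predicts the level at a distant receiver from an absolute level reference and a far-field directivity value. It uses only spherical spreading; the air absorption, ground, and barrier terms that a standards-compliant immission study would include are deliberately omitted, and all numbers are hypothetical.

```python
import numpy as np

def immission_level_db(ref_level_db: float, ref_distance_m: float,
                       directivity_db: float, receiver_distance_m: float) -> float:
    """Far-field SPL estimate: on-axis reference level at ref_distance_m, plus the
    directivity gain toward the receiver, minus 20*log10 spherical spreading."""
    spreading_db = 20.0 * np.log10(receiver_distance_m / ref_distance_m)
    return ref_level_db + directivity_db - spreading_db

# Example: 105 dB at 10 m on axis, -6 dB toward the neighbourhood, receiver at 400 m
print(round(immission_level_db(105.0, 10.0, -6.0, 400.0), 1))  # ~67.0 dB
```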
Moderators
Ying-Ying Zhang

PhD Candidate, McGill University
Ying-Ying Zhang is a music technology researcher and sound engineer. She is currently a PhD candidate at McGill University in the Sound Recording program, where her research focuses on musician-centered virtual acoustic applications in recording environments. She received her Masters...
Speakers
Stefan Feistel

Managing Director/Partner/Co-Founder, AFMG
Stefan Feistel is Managing Director/Partner/Co-Founder of AFMG, Berlin, Germany. He is an expert in acoustical simulation and calculation techniques and applications.
Authors
Stefan Feistel

Managing Director/Partner/Co-Founder, AFMG
Stefan Feistel is Managing Director/Partner/Co-Founder of AFMG, Berlin, Germany. He is an expert in acoustical simulation and calculation techniques and applications.
Wednesday October 9, 2024 9:50am - 10:10am EDT
1E03

10:10am EDT

Virtual Acoustics Technology in the Recording Studio - A System Update
Wednesday October 9, 2024 10:10am - 10:30am EDT
This paper describes ongoing efforts toward optimizing the Virtual Acoustics Technology (VAT) system installed in the Immersive Media Lab at McGill University. Following the integration of the CAVIAR cancelling auralizer for feedback suppression, this current iteration of the active acoustics system is able to flexibly support the creation of virtual environments via the convolution of Spatial Room Impulse Responses (SRIRs) with real-time microphone signals. While the system has been successfully used for both recordings and live performances, we have nevertheless been looking to improve upon its “stability from feedback” and “natural sound quality,” two significant attributes of active acoustics systems [1]. We have implemented new software controls and microphone input methods to increase our ratio of gain before feedback, while additionally repositioning and adding loudspeakers to the system to generate a more even room coverage. Following these additions, we continue to evaluate the space through objective measurements and feedback from musicians and listeners.
[1] M. A. Poletti, “Active Acoustic Systems for the Control of Room Acoustics,” in Proceedings of the International Symposium on Room Acoustics, Melbourne, Australia, Aug. 2010.
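The core real-time operation described above, convolving live microphone signals with an SRIR, can be pictured with a minimal block-based overlap-add loop. This is a generic single-channel sketch with placeholder signals and no feedback cancellation, not the Immersive Media Lab's actual processing chain.

```python
import numpy as np

def stream_convolve(blocks, impulse_response):
    """Overlap-add convolution of a stream of equal-length input blocks with a
    room impulse response: a stand-in for a real-time convolution engine,
    without the feedback-cancellation stage described in the abstract."""
    ir = np.asarray(impulse_response, dtype=float)
    tail = np.zeros(len(ir) - 1)
    for block in blocks:
        out = np.convolve(block, ir)          # len(block) + len(ir) - 1 samples
        out[:len(tail)] += tail               # fold in the previous block's tail
        yield out[:len(block)]                # emit one output block
        tail = out[len(block):]               # carry the rest forward

# Toy example: a 3-tap "SRIR" and two 4-sample input blocks; the reverberant
# tail of the first block spills correctly into the second output block.
ir = np.array([1.0, 0.5, 0.25])
blocks = [np.array([0.0, 0.0, 0.0, 1.0]), np.zeros(4)]
for out_block in stream_convolve(blocks, ir):
    print(out_block)
```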
Moderators
Ying-Ying Zhang

PhD Candidate, McGill University
Ying-Ying Zhang is a music technology researcher and sound engineer. She is currently a PhD candidate at McGill University in the Sound Recording program, where her research focuses on musician-centered virtual acoustic applications in recording environments. She received her Masters...
Speakers
Kathleen Ying-Ying Zhang

PhD Candidate, McGill University
Ying-Ying Zhang is a music technology researcher and sound engineer. She is currently a PhD candidate at McGill University in the Sound Recording program, where her research focuses on musician-centered virtual acoustic applications in recording environments. She received her Masters...
Authors
Kathleen Ying-Ying Zhang

PhD Candidate, McGill University
Ying-Ying Zhang is a music technology researcher and sound engineer. She is currently a PhD candidate at McGill University in the Sound Recording program, where her research focuses on musician-centered virtual acoustic applications in recording environments. She received her Masters...
Mihai-Vlad Baran

McGill University
Richard King

Professor, McGill University
Richard King is an educator, researcher, and Grammy Award-winning recording engineer. Richard has garnered Grammy Awards in various fields, including Best Engineered Album in both the Classical and Non-Classical categories. Richard is an Associate Professor at the Schulich School...
Wednesday October 9, 2024 10:10am - 10:30am EDT
1E03

10:30am EDT

A General Overview of Methods for Generating Room Impulse Responses
Wednesday October 9, 2024 10:30am - 10:50am EDT
The utilization of room impulse responses has proven valuable for both the acoustic assessment of indoor environments and music production. Various techniques have been devised over time to capture these responses. Although algorithmic solutions for generating synthetic reverberation in real time have existed since the 1960s, they remain computationally demanding and generally lack accuracy in comparison to measured Room Impulse Responses (RIRs). More recently, machine learning has found application in diverse fields, including acoustics, leading to the development of techniques for generating RIRs. This paper provides a general overview of approaches and methods for generating RIRs, categorized into algorithmic and machine learning techniques, with a particular emphasis on the latter. The discussion covers the acoustical attributes of rooms relevant to perceptual testing and methodologies for comparing RIRs. An examination of disparities between captured and generated RIRs is included to better delineate the key acoustic properties characterizing a room. The paper is designed to offer a general overview for those interested in RIR generation for music production purposes, and future work considerations are also explored.
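As a concrete example from the algorithmic end of the spectrum surveyed here, the sketch below generates a toy synthetic RIR as exponentially decaying noise matched to a target RT60. It stands in for the simplest class of methods only and is not one of the specific techniques reviewed in the paper.

```python
import numpy as np

def noise_rir(rt60_s: float, fs: int = 48000, seed: int = 0) -> np.ndarray:
    """Toy synthetic RIR: Gaussian noise shaped by an exponential decay whose
    rate gives a 60 dB amplitude drop after rt60_s seconds."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(1.5 * rt60_s * fs)) / fs
    decay = 10.0 ** (-3.0 * t / rt60_s)      # amplitude falls 60 dB at t = rt60_s
    rir = rng.standard_normal(t.size) * decay
    return rir / np.max(np.abs(rir))

rir = noise_rir(rt60_s=1.2)   # ~1.8 s of impulse response at 48 kHz
print(rir.shape)
```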
Moderators
Ying-Ying Zhang

PhD Candidate, McGill University
Ying-Ying Zhang is a music technology researcher and sound engineer. She is currently a PhD candidate at McGill University in the Sound Recording program, where her research focuses on musician-centered virtual acoustic applications in recording environments. She received her Masters...
Speakers
Mihai-Vlad Baran

McGill University
Authors
Mihai-Vlad Baran

McGill University
Richard King

Professor, McGill University
Richard King is an educator, researcher, and Grammy Award-winning recording engineer. Richard has garnered Grammy Awards in various fields, including Best Engineered Album in both the Classical and Non-Classical categories. Richard is an Associate Professor at the Schulich School...
Wednesday October 9, 2024 10:30am - 10:50am EDT
1E03

11:00am EDT

Reimagining Delay Effects: Integrating Generative AI for Creative Control
Wednesday October 9, 2024 11:00am - 11:20am EDT
This paper presents a novel generative delay effect that utilizes generative AI to create unique variations of a melody with each new echo. Unlike traditional delay effects, where repetitions are identical to the original input, this effect generates variations in pitch and rhythm, enhancing creative possibilities for artists. The significance of this innovation lies in addressing artists' concerns about generative AI potentially replacing their roles. By integrating generative AI into the creative process, artists retain control and collaborate with the technology, rather than being supplanted by it. The paper outlines the processing methodology, which involves training a Long Short-Term Memory (LSTM) neural network on a dataset of publicly available music. The network generates output melodies based on input characteristics, employing a specialized notation language for music. Additionally, the implementation of this machine learning model within a delay plugin's architecture is discussed, focusing on parameters such as buffer length and tail length. The integration of the model into the broader plugin framework highlights the practical aspects of utilizing generative AI in audio effects. The paper also explores the feasibility of deploying this technology on microcontrollers for use in instruments and effects pedals. By leveraging low-power AI libraries, this advanced functionality can be achieved with minimal storage requirements, demonstrating the efficiency and versatility of the approach. Finally, a demonstration of an early version of the generative delay effect will be presented.
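A minimal sketch of the effect's control flow is given below, with a trivial random transposition standing in for the trained LSTM. The mapping, note ranges, and repeat count are placeholders, not the paper's model, notation language, or plugin parameters.

```python
import numpy as np

def vary(melody: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for the trained generative model: transpose each note by a
    small random interval. MIDI note numbers in, MIDI note numbers out."""
    return np.clip(melody + rng.integers(-2, 3, size=melody.shape), 0, 127)

def generative_delay(melody: np.ndarray, repeats: int, seed: int = 0):
    """Yield successive echoes, each a new variation of the previous one,
    instead of the identical repeats of a conventional delay."""
    rng = np.random.default_rng(seed)
    echo = np.asarray(melody)
    for _ in range(repeats):
        echo = vary(echo, rng)
        yield echo

riff = np.array([60, 62, 64, 67])  # C D E G
for echo in generative_delay(riff, repeats=3):
    print(echo)
```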
Moderators
Marina Bosi

Stanford University
Marina Bosi, AES Past President, is a founding Director of the Moving Picture, Audio, and Data Coding by Artificial Intelligence (MPAI) and the Chair of the Context-based Audio Enhancement (MPAI-CAE) Development Group and IEEE SA CAE WG. Dr. Bosi has served the Society as President...
Speakers Authors
Wednesday October 9, 2024 11:00am - 11:20am EDT
1E03

11:20am EDT

Acoustic Characteristics of Parasaurolophus Crest: Experimental Results from a Simplified Anatomical Model
Wednesday October 9, 2024 11:20am - 11:40am EDT
This study presents a revised acoustic model of the Parasaurolophus crest, incorporating both the main airway and lateral diverticulum, based on previous anatomical models and recent findings. A physical device, as a simplified model of the crest, was constructed using a coupled piping system, and frequency sweeps were conducted to investigate its resonance behavior. Data were collected using a minimally invasive microphone, with a control group consisting of a simple open pipe for comparison. The results show that the frequency response of the experimental model aligns with that of the control pipe at many frequencies, but notable shifts and peak-splitting behavior were observed, suggesting a more active role of the lateral diverticulum in shaping the acoustic response than previously thought. These findings challenge earlier closed-pipe approaches, indicating that complex interactions between the main airway and lateral diverticulum generate additional resonant frequencies absent in the control pipe. The study provides empirical data that offer new insights into the resonance characteristics of the Parasaurolophus crest and contribute to understanding its auditory range, particularly for low-frequency sounds.
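For orientation, the control condition (a simple open pipe) follows the textbook resonance relation below; the second line sketches, purely qualitatively, how coupling two resonators splits a shared peak into two, which is the kind of behavior the abstract attributes to the interaction with the lateral diverticulum. The symbols, example values, and coupling form are illustrative assumptions, not the study's fitted model.

```latex
% Open-open pipe of length L, sound speed c (end corrections neglected):
f_n = \frac{n\,c}{2L}, \qquad n = 1, 2, 3, \ldots
% e.g. L = 1.5\ \mathrm{m},\ c = 343\ \mathrm{m/s} \;\Rightarrow\; f_1 \approx 114\ \mathrm{Hz}

% Two resonators of natural frequency f_0, coupled with dimensionless strength \kappa,
% split one peak into two normal modes, roughly:
f_{\pm} \approx f_0 \sqrt{1 \pm \kappa}
```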
Moderators
Marina Bosi

Stanford University
Marina Bosi, AES Past President, is a founding Director of the Moving Picture, Audio, and Data Coding by Artificial Intelligence (MPAI) and the Chair of the Context-based Audio Enhancement (MPAI-CAE) Development Group and IEEE SA CAE WG. Dr. Bosi has served the Society as President...
Speakers Authors
Wednesday October 9, 2024 11:20am - 11:40am EDT
1E03

11:40am EDT

Interpreting user-generated audio from war zones
Wednesday October 9, 2024 11:40am - 12:00pm EDT
Increasingly, civilian inhabitants and combatants in conflict areas use their mobile phones to record video and audio of armed attacks. These user-generated recordings (UGRs) often provide the only source of immediate information about armed conflicts because access by professional journalists is highly restricted. Audio forensic analysis of these UGRs can help document the circumstances and aftermath of war zone incidents, but consumer off-the-shelf recording devices are not designed for battlefield circumstances and sound levels, nor do the battlefield circumstances provide clear, noise-free audio. Moreover, as with any user-generated material that generally does not have a documented chain-of-custody, there are forensic concerns about authenticity, misinformation, and propaganda that must be considered. In this paper we present several case studies of UGRs from armed conflict areas and describe several methods to assess the quality and integrity of the recorded audio. We also include several recommendations for amateurs who make UGRs so that the recorded material is more easily authenticated and corroborated. Audio and video examples are presented.
Moderators
Marina Bosi

Stanford University
Marina Bosi, AES Past President, is a founding Director of the Moving Picture, Audio, and Data Coding by Artificial Intelligence (MPAI) and the Chair of the Context-based Audio Enhancement (MPAI-CAE) Development Group and IEEE SA CAE WG. Dr. Bosi has served the Society as President...
Speakers
Rob Maher

Professor, Montana State University
Audio digital signal processing, audio forensics, music analysis and synthesis.
Authors
Rob Maher

Professor, Montana State University
Audio digital signal processing, audio forensics, music analysis and synthesis.
Wednesday October 9, 2024 11:40am - 12:00pm EDT
1E03

12:00pm EDT

Experimental analysis of a car loudspeaker model based on imposed vibration velocity: effect of membrane discretization
Wednesday October 9, 2024 12:00pm - 12:20pm EDT
Improving the interior sound quality of road vehicles is an active research topic. The cabin is an acoustically challenging environment due to its complex geometry, the different acoustic properties of the materials of cabin components, and the presence of audio systems based on multiple loudspeaker units. This paper presents a simplified modelling approach for introducing the boundary condition imposed by a loudspeaker into the cabin system in the context of virtual acoustic analysis. The proposed model is discussed and compared with experimental measurements obtained from a test-case loudspeaker.
Moderators
Marina Bosi

Stanford University
Marina Bosi, AES Past President, is a founding Director of the Moving Picture, Audio, and Data Coding by Artificial Intelligence (MPAI) and the Chair of the Context-based Audio Enhancement (MPAI-CAE) Development Group and IEEE SA CAE WG. Dr. Bosi has served the Society as President...
Speakers Authors
Wednesday October 9, 2024 12:00pm - 12:20pm EDT
1E03

12:20pm EDT

A novel derivative-based approach for the automatic detection of time-reversed audio in the MPAI/IEEE-CAE ARP international standard
Wednesday October 9, 2024 12:20pm - 12:50pm EDT
The Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) Context-based Audio Enhancement (CAE) Audio Recording Preservation (ARP) standard provides the technical specifications for a comprehensive framework for digitizing and preserving analog audio, specifically focusing on documents recorded on open-reel tapes. This paper introduces a novel, envelope derivative-based method incorporated within the ARP standard to detect time-reversed audio sections during the digitization process. The primary objective of this method is to automatically identify segments of audio recorded in reverse. Leveraging advanced derivative-based signal processing algorithms, the system enhances its capability to detect and correct such sections, thereby reducing errors during analog-to-digital (A/D) conversion. This feature not only aids in identifying and correcting digitization errors but also improves the efficiency of large-scale audio document digitization projects. The system's performance has been evaluated on a diverse dataset encompassing various musical genres and digitized tapes, demonstrating its effectiveness across different types of audio content.
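The general idea behind an envelope-derivative cue can be illustrated with the toy sketch below: musical events tend to rise faster than they decay, so a sign-preserving statistic of the envelope derivative flips when the audio is reversed. This only illustrates the principle; the estimator, window length, and decision rule of the ARP standard's actual module are not reproduced here.

```python
import numpy as np

def forwardness_score(signal: np.ndarray, fs: int, win_s: float = 0.02) -> float:
    """Toy envelope-derivative cue: frame-wise peak envelope, then the mean of the
    cubed envelope derivative. Forward audio (sharp attacks, slow decays) tends to
    score positive; time-reversed audio skews negative. Illustrative only."""
    hop = max(1, int(win_s * fs))
    env = np.array([np.max(np.abs(signal[i:i + hop]))
                    for i in range(0, len(signal) - hop, hop)])
    d = np.diff(env)
    return float(np.mean(d ** 3))   # cubing keeps the sign and stresses sharp jumps

# Toy example: a fast-attack, slow-decay note after a short silence, and its reverse
fs = 8000
t = np.arange(fs) / fs
note = np.exp(-5.0 * t) * np.sin(2 * np.pi * 220.0 * t)
fwd = np.concatenate([np.zeros(fs // 10), note])
print(forwardness_score(fwd, fs) > 0, forwardness_score(fwd[::-1], fs) > 0)  # True False
```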
Moderators
Marina Bosi

Stanford University
Marina Bosi, AES Past President, is a founding Director of the Moving Picture, Audio, and Data Coding by Artificial Intelligence (MPAI) and the Chair of the Context-based Audio Enhancement (MPAI-CAE) Development Group and IEEE SA CAE WG. Dr. Bosi has served the Society as President...
Authors
Wednesday October 9, 2024 12:20pm - 12:50pm EDT
1E03

2:00pm EDT

Auditory Envelopment and Affective Touch Hypothesis
Wednesday October 9, 2024 2:00pm - 2:30pm EDT
Anticipation and pleasure in response to music listening can lead to dopamine release in the human striatal system, a neural midbrain reward and motivation cluster. The sensation of auditory envelopment, however, may also in itself have a stimulating effect. Theoretical reasons and circumstantial evidence are given for why this could be the case, possibly constituting an auditory complement to a newly discovered and studied percept, affective touch, which originates from C-tactile fibres in the skin when they are stimulated in certain ways. In a pilot test, abstract sounds were used to determine the audibility of low-frequency inter-aural fluctuation. Naïve subjects aged between 6 and 96 years were asked to characterize the stimuli in their own words, and all proved sensitive to the conditions tested. Based on these results, controlling low-frequency inter-aural fluctuation in listeners should be a priority when recording, mixing, distributing, and reproducing audio.
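The abstract does not prescribe a metric for "low-frequency inter-aural fluctuation", but one plausible way to quantify it for a two-channel signal is sketched below: low-pass both channels, take short-time levels, and measure how much the interaural level difference varies over time. The cut-off frequency, window length, and the statistic itself are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lf_interaural_fluctuation_db(left, right, fs, f_cut=200.0, win_s=0.05):
    """Standard deviation (dB) of the short-time interaural level difference
    after low-pass filtering both ears: one possible fluctuation measure."""
    sos = butter(4, f_cut, btype="low", fs=fs, output="sos")
    l, r = sosfilt(sos, left), sosfilt(sos, right)
    hop = int(win_s * fs)
    def levels_db(x):
        frames = [x[i:i + hop] for i in range(0, len(x) - hop, hop)]
        return np.array([20 * np.log10(np.sqrt(np.mean(f ** 2)) + 1e-12) for f in frames])
    return float(np.std(levels_db(l) - levels_db(r)))

rng = np.random.default_rng(0)
fs, n = 16000, 16000
same = rng.standard_normal(n)
print(lf_interaural_fluctuation_db(same, same, fs))                    # ~0: no fluctuation
print(lf_interaural_fluctuation_db(same, rng.standard_normal(n), fs))  # clearly > 0
```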
Moderators Speakers
Thomas Lund

Senior Technologist, Genelec Oy
Thomas Lund has authored papers on human perception, spatialisation, loudness, sound exposure and true-peak level. He is a researcher at Genelec and convenor of a working group on hearing health under the European Commission. Out of a medical background, Thomas previously served in...
Authors
Thomas Lund

Senior Technologist, Genelec Oy
Thomas Lund has authored papers on human perception, spatialisation, loudness, sound exposure and true-peak level. He is a researcher at Genelec and convenor of a working group on hearing health under the European Commission. Out of a medical background, Thomas previously served in...
Wednesday October 9, 2024 2:00pm - 2:30pm EDT
1E03

2:30pm EDT

A framework for high spatial-density auditory data displays using Matlab and Reaper
Wednesday October 9, 2024 2:30pm - 3:00pm EDT
This research aimed to develop a software framework to study and optimize mapping strategies for complex data presented with high-spatial-resolution auditory displays, such as wave-field synthesis and higher-order Ambisonics systems. Our wave field synthesis system, the Collaborative-Research Augmented Immersive Virtual Environment Laboratory (CRAIVE-Lab), has 128 full-range loudspeakers along the circumference of the lab. We decided to use available software music synthesizers because they are built for excellent sound quality, and much knowledge exists on how to program analog synthesizers and other common methods for a desired sound output. At the scale of 128 channels, feeding 128 synthesizers with complex data was not practical for initial explorations because of the computational resources and data-flow infrastructure required. The proposed framework was programmed in Matlab, using weather data from the NOAA database for initial exploration. Data from 128 weather stations, ordered from east to west across the US, is processed and spatially aligned with latitude. A MIDI script, in sequential order for all 128 channels, is compiled from converted weather parameters such as temperature, precipitation amount, humidity, and wind speed. The MIDI file is then imported into Reaper to render a single sound file using software synthesizers driven by the MIDI control data. The rendered file is automatically cut into the 128 channels in Matlab and reimported into Reaper for audio playback.
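The data-to-MIDI step can be pictured with the short stand-in below, written in Python with the mido library rather than the authors' Matlab code. The parameter-to-pitch and parameter-to-velocity ranges are arbitrary placeholders, and only three of the 128 stations are shown.

```python
import numpy as np
from mido import Message, MidiFile, MidiTrack

def weather_to_midi(temps_c, winds_ms, path="stations.mid", ticks_per_note=480):
    """Sketch of the data-to-MIDI step: one note per station, in sequence.
    Temperature -> pitch, wind speed -> velocity. The mapping ranges are
    arbitrary placeholders, not the paper's calibration."""
    mid = MidiFile()
    track = MidiTrack()
    mid.tracks.append(track)
    for temp, wind in zip(temps_c, winds_ms):
        note = int(np.clip(np.interp(temp, [-20, 40], [36, 96]), 0, 127))
        vel = int(np.clip(np.interp(wind, [0, 30], [20, 127]), 1, 127))
        track.append(Message("note_on", note=note, velocity=vel, time=0))
        track.append(Message("note_off", note=note, velocity=0, time=ticks_per_note))
    mid.save(path)

# Hypothetical values for three of the 128 stations
weather_to_midi(temps_c=[31.0, 24.5, 12.0], winds_ms=[2.0, 6.5, 11.0])
```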
Wednesday October 9, 2024 2:30pm - 3:00pm EDT
1E03

3:00pm EDT

Immersive Voice and Audio Services (IVAS) Codec – The New 3GPP Standard for Immersive Communication
Wednesday October 9, 2024 3:00pm - 3:30pm EDT
The recently standardized 3GPP codec for Immersive Voice and Audio Services (IVAS) is the first fully immersive communication codec designed for 5G mobile systems. The IVAS codec is an extension of the mono 3GPP EVS codec and offers additional support for coding and rendering of stereo, multi-channel, scene-based audio (Ambisonics), objects and metadata-assisted spatial audio. The IVAS codec enables completely new service scenarios with interactive stereo and immersive audio in communication, content sharing and distribution. This paper provides an overview of the underlying architecture and new audio coding and rendering technologies. Listening test results show the performance of the new codec in terms of compression efficiency and audio quality.
Moderators Speakers Authors
Adriana Vasilache

Nokia Technologies
Andrea Genovese

Research, Qualcomm
Andrea Genovese is a Senior Research Engineer at Qualcomm Technologies Inc., working in Multimedia R&D. Andrea specializes in spatial audio and psychoacoustics, acoustic simulations, networked immersive distributed audio, and signal processing for environmental awareness. In 2023...
Wednesday October 9, 2024 3:00pm - 3:30pm EDT
1E03

3:30pm EDT

Enhancing Spatial Post-Filters through Non-Linear Combinations
Wednesday October 9, 2024 3:30pm - 4:00pm EDT
This paper introduces a method to enhance the spatial selectivity of spatial post-filters estimated with first-order directional signals. The approach involves applying non-linear transformations on two different spatial post-filters and combining them with weights found by convex optimization of the resulting directivity patterns. The estimation of the post-filters is carried out similarly to the Cross Pattern Coherence (CroPaC) algorithm. The performance of the proposed method is evaluated in a two- and three-speaker scenario with different reverberation times and angular distances of the interfering speaker. The signal-to-interference, signal-to-distortion, and signal-to-artifact ratios are used for evaluation. The results show that the proposed method can improve the spatial selectivity of the post-filter estimated with first-order beampatterns. Using first-order patterns only, it even achieves better spatial separation than the original CroPaC post-filter estimated using first- and second-order signals.
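The flavor of the approach can be sketched as follows. The first function is a generic reading of a cross-pattern-coherence-style post-filter (not the exact estimator used in the paper), and the second combines two such post-filters non-linearly; here the exponents and weights are hand-picked placeholders, whereas the paper obtains the weights by convex optimization of the resulting directivity patterns.

```python
import numpy as np

def cropac_like_postfilter(omni_stft, dipole_stft, eps=1e-12):
    """Simplified cross-pattern-coherence-style gain: half-wave rectified,
    normalised cross-spectrum of an omni and a target-steered first-order
    signal, per time-frequency bin."""
    cross = np.real(omni_stft * np.conj(dipole_stft))
    norm = 0.5 * (np.abs(omni_stft) ** 2 + np.abs(dipole_stft) ** 2) + eps
    return np.clip(cross / norm, 0.0, 1.0)

def combined_postfilter(g1, g2, exponents=(2.0, 0.5), weights=(0.6, 0.4)):
    """Non-linear combination of two post-filters: raise each to a fixed power
    and mix with weights (placeholders standing in for optimized values)."""
    a, b = exponents
    w1, w2 = weights
    return np.clip(w1 * g1 ** a + w2 * g2 ** b, 0.0, 1.0)

# Example with random STFT-shaped placeholder data
rng = np.random.default_rng(0)
shape = (257, 100)
W, Y1, Y2 = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape) for _ in range(3))
g = combined_postfilter(cropac_like_postfilter(W, Y1), cropac_like_postfilter(W, Y2))
print(g.shape, float(g.min()), float(g.max()))
```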
Moderators Speakers
Stefan Wirler

Aalto University
I'm a PhD student at the Aalto Acoustics Labs, in the group of Ville Pulkki. I started my PhD studies in 2020. I received my Master's degree in Electrical Engineering from the Friedrich-Alexander University Erlangen-Nuremberg (FAU). My Master's thesis was conducted at the AudioLabs in...
Authors
Stefan Wirler

Aalto University
I'm a PhD student at the Aalto Acoustics Labs, in the group of Ville Pulkki. I started my PhD studies in 2020. I received my Master's degree in Electrical Engineering from the Friedrich-Alexander University Erlangen-Nuremberg (FAU). My Master's thesis was conducted at the AudioLabs in...
Wednesday October 9, 2024 3:30pm - 4:00pm EDT
1E03

4:00pm EDT

The Impact of Height Microphone Layer Position on Perceived Realism of Organ Recording Reproduction
Wednesday October 9, 2024 4:00pm - 4:30pm EDT
For on-site immersive recordings, height microphones are often placed carefully to avoid a distorted or unrealistic image, with many established immersive microphone arrays placing the height microphones 1.5 m or less above the horizontal layer. However, for an instrument as acoustically symbiotic with its space as the pipe organ, the impact of non-coincident height microphone placement has not previously been explored in depth. The pipe organ's radiation characteristics may nevertheless benefit from non-coincident height microphone placement, providing subjectively improved tone color without sacrificing perceived realism. Subjective listening tests were conducted comparing a pipe organ recording with coincident and non-coincident height microphone positions. The findings of this case study conclude that non-coincident height microphone placement does not significantly impact the perceived realism of the immersive organ recording.
Moderators Speakers
Jessica Luo

Graduate Student, New York University
Authors
Jessica Luo

Graduate Student, New York University
Garrett Treanor

New York University
Wednesday October 9, 2024 4:00pm - 4:30pm EDT
1E03

4:30pm EDT

Spatial Matrix Synthesis
Wednesday October 9, 2024 4:30pm - 5:00pm EDT
Spatial Matrix synthesis is presented in this paper. This modulation synthesis technique creates acoustic velocity fields from acoustic pressure signals by using spatial transformation matrices, thus generating complete sound fields for spatial audio. The analysis presented here focuses on orthogonal rotation matrices in both two and three dimensions and compares the results in each scenario with other sound modulation synthesis methods, including amplitude and frequency modulation. As an alternative method for spatial sound synthesis that exclusively modifies the acoustic velocity vector through effects comparable to those created by both amplitude and frequency modulations, Spatial Matrix synthesis is argued to generate inherently spatial sounds, giving this method the potential to become a new musical instrument for spatial music.
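One possible two-dimensional reading of the idea, stated only as an illustration of the abstract's description and not as the paper's exact formulation, is that a time-varying rotation matrix assigns the pressure signal a sweeping velocity direction, and the resulting components are ring-modulated versions of the input, which is why the effects are comparable to amplitude and frequency modulation.

```latex
% Illustrative 2-D sketch: pressure p(t), modulation rate \omega_m, unit vector e_x.
\mathbf{v}(t) \;=\; p(t)\, R(\omega_m t)\,\mathbf{e}_x
\;=\; p(t)
\begin{pmatrix} \cos(\omega_m t) & -\sin(\omega_m t)\\[2pt]
                \sin(\omega_m t) & \cos(\omega_m t) \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \end{pmatrix}
\;=\; p(t)\begin{pmatrix} \cos(\omega_m t) \\ \sin(\omega_m t) \end{pmatrix}.
% The products p(t)\cos(\omega_m t) and p(t)\sin(\omega_m t) are ring-modulated
% copies of p(t), hence the kinship with amplitude and frequency modulation.
```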
Moderators Speakers
Timothy Schmele

Researcher, Eurecat
Researcher in audio, audio technology, immersive audio, sonification and composition practices. Composer of electroacoustic music.
Authors
Timothy Schmele

Researcher, Eurecat
Researcher in audio, audio technology, immersive audio, sonification and composition practices. Composer of electroacoustic music.
Wednesday October 9, 2024 4:30pm - 5:00pm EDT
1E03
 