AES Show 2024 NY has ended
Exhibits+ badges provide access to the ADAM Audio Immersive Room, the Genelec Immersive Room, Tech Tours, and the presentations on the Main Stage.

All Access badges provide access to all content in the Program (Tech Tours still require registration).

Applications in Audio
Wednesday, October 9
 

10:20am EDT

ADM-OSC 1.0
Wednesday October 9, 2024 10:20am - 10:40am EDT
Spatial and immersive audio has become increasingly mainstream, presented in concert halls and more recently through music streaming services. There is a diverse ecosystem of hardware and software controllers and renderers in both live and studio settings that would benefit from a standardized communication protocol. In 2019 a group of industry stakeholders began designing ADM-OSC to fill this need. ADM-OSC is a standard for transmitting metadata for object-based audio by implementing a namespace in parallel with the Audio Definition Model (ADM), a metadata standard developed in the broadcast industry. Open Sound Control (OSC) is a well-established data transport protocol developed for flexible and accurate communication of real-time performance data. By leveraging these open standards, we have created a lightweight specification that can be easily implemented in audio software, plugins, game engines, consoles, and controllers.
ADM-OSC has reached a level of maturity over multiple implementations and is ready for an official 1.0 release. This paper will discuss the design of ADM-OSC 1.0 and how it was developed to facilitate interoperability for a range of stakeholders and use cases. The core address space for position data is described, as well as extensions for live control data. We conclude with an overview of future ADM-OSC development, including next steps in bringing together ideas and discussion from multiple industry partners.
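The namespace-over-OSC design described above can be sketched with nothing but the standard library: an OSC message is a NUL-padded address string, a type-tag string, and big-endian arguments, each block padded to a four-byte boundary. The address below follows the `/adm/obj/<id>/...` style of the ADM-OSC namespace, but the exact path and coordinate convention are illustrative assumptions, not a restatement of the 1.0 specification.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad a byte string with NULs to a multiple of 4, as OSC requires."""
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32 (big-endian)."""
    addr = osc_pad(address.encode("ascii") + b"\x00")
    tags = osc_pad(("," + "f" * len(floats)).encode("ascii") + b"\x00")
    args = b"".join(struct.pack(">f", f) for f in floats)
    return addr + tags + args

# Example: position of object 1 as Cartesian coordinates
# (address and normalized ranges are illustrative of the ADM-OSC style).
packet = osc_message("/adm/obj/1/xyz", 0.5, -0.25, 0.0)
```

The resulting datagram could then be sent to a renderer with a plain UDP socket; no OSC library is required for the encoding itself.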
Speakers

Michael Zbyszynski

Software Development Engineer, L-Acoustics
Michael Zbyszyński is a musician, researcher, teacher, and developer in the field of contemporary electroacoustic music. He is currently part of the Creative Technologies R&D group at L-Acoustics. As a musician, his work spans from brass bands to symphony orchestras, including composition…
Authors

Michael Zbyszynski

Software Development Engineer, L-Acoustics

Hugo Larin

Senior Mgr. Business Development | FLUX:: GPLM, Harman International
Hugo Larin is a key collaborator on the FLUX:: SPAT Revolution project and has deep roots in audio mixing, design, and operation, as well as in networked control and data distribution. He leads FLUX:: business development at HARMAN. His recent involvements and interests include object-based spatial audio mixing workflows, interoperability…

Lucas Zwicker

Senior Director, Workflow and Integration, CTO Office, Lawo AG
Lucas joined Lawo in 2014, having previously worked as a freelancer in the live sound and entertainment industry for several years. He holds a degree in event technology and a Bachelor of Engineering in electrical engineering and information technology from the University of Applied…

Mathieu Delquignies

Education & Application Support France, d&b audiotechnik
Mathieu holds a Bachelor's degree in applied physics from Paris 7 University and a Master's degree in sound engineering from ENS Louis Lumière (2003). He has years of diverse international experience as a freelance mixing engineer and system designer, as well as in loudspeakers, amplifiers, DSP…
1E04

10:40am EDT

Analysis of Ultra Wideband Wireless Audio Transceivers for High-Resolution Audio Transmission
Wednesday October 9, 2024 10:40am - 11:00am EDT
Modern Ultra-Wideband (UWB) wireless transceiver systems enhance digital wireless audio transmission by eliminating the need for the audio data compression required by narrowband technologies such as Bluetooth. UWB systems, characterized by their high bandwidth, bypass compression computation delays, enabling audio data transmission from transmitter to receiver in under 10 milliseconds. This paper presents an analytical study of audio signals transmitted using a contemporary UWB transceiver system. The analysis confirms that UWB technology can deliver high-resolution (96 kHz/24-bit) audio that is free from artifacts and comparable in quality to a wired digital link. This study underscores the potential of UWB systems to revolutionize wireless audio transmission by maintaining signal integrity and reducing latency, meeting the rigorous demands of high-fidelity audio applications.
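As a back-of-envelope check on the claim above, the uncompressed payload rate for 96 kHz/24-bit audio is easy to compute (stereo and zero framing overhead are assumptions for illustration, not figures from the paper):

```python
# Raw throughput for uncompressed 96 kHz / 24-bit audio.
sample_rate = 96_000   # samples per second per channel
bit_depth = 24         # bits per sample
channels = 2           # stereo link (assumption)

bitrate_bps = sample_rate * bit_depth * channels
print(f"{bitrate_bps / 1e6:.3f} Mbit/s")  # 4.608 Mbit/s

# A 10 ms end-to-end budget corresponds to at most this much buffered audio:
samples_per_buffer = int(sample_rate * 0.010)  # 960 samples per channel
```

Several megabits per second is far beyond what compressed narrowband links carry as audio payload, which is why UWB can skip the codec stage entirely.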
Speakers

Jeff Anderson

SPARK Microsystems
Authors

Jeff Anderson

SPARK Microsystems
1E04

11:20am EDT

Rediscovering Xenakis: Decoding the Free Stochastic Music Program's Output File
Wednesday October 9, 2024 11:20am - 11:40am EDT
Composer Iannis Xenakis (1922–2001), in his book Formalized Music (1963), includes a piece of Fortran IV computer code that produces a composition of stochastically generated music. This early example of indeterminate music made by a computer fulfilled his goal of creating a musical composition with minimal constraints by letting the computer stochastically generate parameters. Stochasticity is used at all levels of the composition, macro (note density and length of movements) and micro (length of notes and pitches). This paper carefully analyzes the output composition and the variations the program might produce. Efforts are then made to convert these compositions into MIDI format in order to promote their preservation and increase their accessibility. Preserving the Free Stochastic Music Program helps us understand one of the first examples of indeterminate computer music, and similar techniques can be found in modern-day music creation software.
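The macro/micro parameter generation described above can be illustrated in a few lines. This is not a port of Xenakis's Fortran IV program; it only mimics the general stochastic approach (exponential inter-onset intervals at a chosen note density, with other parameters drawn uniformly), and every range below is an arbitrary choice for illustration:

```python
import random

def stochastic_cloud(density: float, duration: float, seed: int = 1963):
    """Generate (onset, pitch, length) triples in the spirit of
    stochastic composition: exponentially distributed inter-onset
    times at a mean of `density` notes/second, with pitch and note
    length drawn uniformly. Illustrative sketch only."""
    rng = random.Random(seed)
    t, notes = 0.0, []
    while True:
        t += rng.expovariate(density)    # inter-onset interval
        if t >= duration:
            break
        pitch = rng.randint(36, 96)      # MIDI note number (assumed range)
        length = rng.uniform(0.05, 2.0)  # seconds (assumed range)
        notes.append((t, pitch, length))
    return notes

notes = stochastic_cloud(density=5.0, duration=30.0)
```

Each `(onset, pitch, length)` triple maps directly onto a MIDI note-on/note-off pair, which is the same representation the paper targets for preservation.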
1E04

11:40am EDT

Unveiling the Impact of K-Pop Song Lyrics on Adolescent Self-Harm Rates: A Pilot Study of Gender Disparities
Wednesday October 9, 2024 11:40am - 12:00pm EDT
This study investigates the phenomenon of increased emergency room visits by teenagers following the broadcast of ‘High School Rapper 2’, a program that featured self-harm in its song lyrics. The rate of emergency room visits for self-harm among adolescents aged 10 to 24 notably increased during and shortly after the program. Specifically, in the 10-14 age group, visits per 100,000 population tripled from 0.9 to 3.1. For those aged 15-19, the rate rose from 5.7 to 10.8, and in the 20-24 age group, it increased from 7.3 to 11.0. This study aims to clarify the relationship between lyrics and self-harm rates among adolescents. We analyzed the lyrics of the top 20 songs performed on Melon, a popular music streaming platform for teenagers, from April to October 2018. Using Python's KoNLPy and Kkma libraries for tokenization, part-of-speech tagging, and stop-word filtering, we identified the top 50 frequently appearing words and narrowed them down to the top 20 contextually significant words. A correlation analysis (Pearson R) with "Emergency Department Visits due to Self-Harm" data revealed that the words 'sway', 'think', 'freedom' and 'today' had a statistically significant correlation (p < 0.05). Additionally, we found that males’ self-harm tendency was less influenced by the broadcast than females’. This research suggests a computational approach to understanding the influence of music lyrics on adolescent self-harm behavior. This pilot study demonstrated correlations between specific words in K-pop lyrics and increased adolescent self-harm rates, with notable gender differences.
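The correlation step described above amounts to computing Pearson's r between a word's frequency and the self-harm visit rate. A minimal stdlib-only sketch, with invented numbers that are not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: a word's monthly frequency vs. an ER visit rate
# (both series are made up for illustration only).
word_freq = [3, 5, 8, 6, 9, 12, 10]
er_visits = [1.2, 2.0, 3.1, 2.4, 3.5, 4.8, 4.1]
r = pearson_r(word_freq, er_visits)
```

Significance testing (the p < 0.05 threshold in the abstract) would additionally require converting r to a t statistic with n - 2 degrees of freedom, which is omitted here.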
Authors

Sungjoon Kim

Research Intern, Korea Advanced Institute of Science and Technology
1E04
 
Thursday, October 10
 

10:00am EDT

Acoustics in Live Sound
Thursday October 10, 2024 10:00am - 12:00pm EDT
Acoustics in Live Sound looks at roadhouses and touring engineers to determine how acoustics is applied and whether that application is a conscious or subconscious choice. This is done through various sources, with a strong emphasis on interviews with professionals in the field. A look at an engineer's workflow and tools, a cost-benefit analysis of those tools, an examination of different types of spaces and ways to improve them, and a discussion of delay and equalization combine to give a comprehensive picture of what an engineer is doing and what can make their job easier and their mixes better. For the beginning engineer, this is a comprehensive guide to the tools and skills needed as they relate to acoustics. For the more seasoned professional, it offers a different way to think about how problems are approached in the field and how solutions could be based more heavily on statistics gathered from acoustics.
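One of the concrete tools the abstract mentions is delay. Time-aligning a fill loudspeaker to the main system reduces to dividing the path-length difference by the speed of sound; a minimal illustration (343 m/s at roughly 20 °C is an assumed value, and real systems also account for processing latency):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumption)

def alignment_delay_ms(distance_m: float) -> float:
    """Delay to apply to a fill loudspeaker `distance_m` closer to the
    listener than the main system, so both arrivals coincide."""
    return distance_m / SPEED_OF_SOUND * 1000.0

print(round(alignment_delay_ms(10.0), 2))  # 29.15 (ms)
```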
Poster

10:00am EDT

Design and Training of an Intelligent Switchless Guitar Distortion Pedal
Thursday October 10, 2024 10:00am - 12:00pm EDT
Guitar effects pedals are designed to alter a guitar signal through electronic means and are often controlled by a footswitch that routes the signal either through the effect or directly to the output through a 'clean' channel. Because players often switch effects on and off for different portions of a song and different playing styles, our goal in this paper is to create a trainable guitar effect pedal that classifies the incoming guitar signal into two or more playing-style classes and routes the signal to the bypass channel or effect channel depending on the class. A training data set was collected consisting of recorded single notes and power chords. The neural network algorithm is able to distinguish between these two playing styles with 95% accuracy on the test set. An electronic system is designed with a Raspberry Pi Pico, preamplifiers, multiplexers, and a distortion effect; a neural network trained using Edge Impulse software runs the classification and signal routing in real time.
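The routing logic described above can be sketched independently of the trained network. In this illustration, a simple level-based classifier and a hard clipper stand in for the paper's neural network and distortion circuit; both are placeholders, not the authors' implementation:

```python
import math

def rms(frame):
    """Root-mean-square level of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def route(frame, classify, distort):
    """Send the frame through the effect or the clean path depending on
    the classifier's decision, mirroring the switchless design."""
    return distort(frame) if classify(frame) == "power_chord" else frame

# Toy stand-ins (assumptions): a level threshold as the "classifier"
# and a hard clipper as the "distortion effect".
classify = lambda f: "power_chord" if rms(f) > 0.3 else "single_note"
distort = lambda f: [max(-0.5, min(0.5, 3 * s)) for s in f]

quiet = [0.1, -0.1, 0.1, -0.1]  # classified single_note -> clean path
loud = [0.8, -0.8, 0.8, -0.8]   # classified power_chord -> effect path
```

In the real pedal the classification runs on short feature windows rather than raw frames, and the Pico drives multiplexers to switch the analog path; the structure of the decision, however, is exactly this.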
Speakers

David Anderson

Assistant Professor, University of Minnesota Duluth
Authors

David Anderson

Assistant Professor, University of Minnesota Duluth
Poster

10:00am EDT

Detection and Perception of Naturalness in Drum Sounds Processed with Dynamic Range Control
Thursday October 10, 2024 10:00am - 12:00pm EDT
This study examines whether trained listeners can identify and judge the “naturalness” of drum samples processed with Dynamic Range Compression (DRC) when paired with unprocessed samples. A two-part experiment was conducted using paired comparisons of a 10-second reference drum loop with no processing and a set of drum loops with varying degrees of DRC. In Part 1, subjects were instructed to identify which sample of the two they believed had DRC applied. In Part 2, they were asked to identify which sample they believed sounded “more natural”. Of the 18 comparisons in Part 1, only three demonstrated reliable identification of the target variable. Results from Part 2, while inconclusive, may suggest that listeners perceived the DRC-processed drum sounds to be as natural as the unprocessed originals.
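For readers unfamiliar with DRC, the static behavior of a feed-forward compressor fits in a few lines: below the threshold the gain is unity, and above it the output level rises at 1/ratio of the input rate. The threshold and ratio below are arbitrary illustrative values, not those used in the study:

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static gain curve of a feed-forward compressor (textbook form).
    Returns the gain in dB applied at the given input level."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: no gain change
    compressed_out = threshold_db + (level_db - threshold_db) / ratio
    return compressed_out - level_db

print(compressor_gain_db(-30.0))  # 0.0  (below threshold)
print(compressor_gain_db(-8.0))   # -9.0 (9 dB of gain reduction)
```

Real DRC additionally applies attack/release smoothing to this gain and usually make-up gain afterward, both of which contribute to the audibility effects the study probes.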
Poster

10:00am EDT

Headphones vs. Loudspeakers: Finding the More Effective Monitoring System
Thursday October 10, 2024 10:00am - 12:00pm EDT
There has been some debate about whether headphones or loudspeakers are the more effective monitoring system. However, there has been little discussion of the effectiveness of the two mediums in the context of mixing and music production tasks. The purpose of this study was to examine how the monitoring systems influenced users’ performance in production tasks using both mediums. An experiment was designed in which subjects were asked to adjust the boost level of a given frequency band until it sounded identical to a reference recording. A group of six audio engineering graduate students completed the experiment using headphones and loudspeakers. Subjects’ adjustment levels across eight frequency bands were collected. Results suggest that both monitoring systems were equally effective in the context of adjusting equalizer settings. Future research should include more subjects with diverse levels of expertise performing various production-oriented tasks to better understand the potential effect of each playback system.
Poster

10:00am EDT

Intentional Audio Engineering Flaws: The Process of Recording Practice Exercises for Mixing Students
Thursday October 10, 2024 10:00am - 12:00pm EDT
There are plenty of resources, both scholarly and otherwise, concerning how to mix audio. Students can find information on the process of mixing and descriptions of the tools used by mixing engineers like level, panning, equalization, compression, reverb, and delay through an online search or more in-depth textbooks by authors like Izhaki [1], Senior [2], Owsinski [3], Case [4, 5], Stavrou [6], Moylan [7], and Gibson [8]. However, any professional mixing engineer knows that simply reading and understanding such materials is not enough to develop the skills necessary to become an excellent mixer. Much like developing proficiency and expertise on an instrument, understanding the theory and technical knowledge of mixing is not enough to become a skilled mixer. In order to develop exceptional mixing skills, practice is essential. This raises the question: how should an aspiring mixer practice the art and craft of mixing?
The discussion of this topic is absent from a large portion of the literature on mixing except for Merchant’s 2011 and 2013 Audio Engineering Society convention papers in which Colvin’s concept of deliberate practice is applied to teaching students how to mix [9, 10, 11]. In order for students to carry out deliberate practice in the mixing classroom, quality multitrack recordings are necessary that help students focus on using specific mixing tools and techniques. This paper looks at the process of recording a series of mixing exercises with inherent audio engineering challenges for students to overcome.
Poster

10:00am EDT

Investigation of the Impact of Pink and White Noise in an Auditory Threshold of Detection Experiment
Thursday October 10, 2024 10:00am - 12:00pm EDT
Many listening experiments employ so-called "pink" and "white" noise, including those to find the threshold of detection (TOD). However, little research exists that compares the impacts of these two noises on a task. This study is an investigation of the TODs at whole and third octave bands using pink and white noise. A modified up-down method was used to determine the TOD in white and pink noise at eight frequencies, using a dB-boosted bell filter convolved with the original signal. Six graduate students in audio engineering participated. Subjects were presented with an unfiltered reference signal followed by a filtered or unfiltered signal and were instructed to respond “same” or “different”. Correct answers decreased the filter boost, while incorrect answers resulted in reversals that increased the filter boost until the subject answered correctly. A trial concluded when a total of ten reversals had occurred. The filter boost levels of the last five reversals were averaged to obtain a threshold value. Results show no significant differences between white and pink noise TOD levels at any frequency interval. Additionally, the just-noticeable difference (JND) between the original and the boosted signal was observed to be ±3 dB, consistent with existing literature. Therefore, white or pink noise can be equivalently employed in listening tests such as TOD measurements without impact on perceptual performance.
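The up-down procedure described above can be simulated. The sketch below uses a deterministic ideal listener (an assumption made purely for illustration) and follows the stopping rule in the abstract: stop after ten reversals and average the last five:

```python
def staircase(true_threshold_db, start_db=12.0, step_db=1.0,
              max_reversals=10, avg_last=5):
    """Simulate a simple up-down staircase: a correct answer lowers
    the boost, an incorrect one raises it; stop after `max_reversals`
    direction reversals and average the last `avg_last` reversal
    levels. The listener model is an ideal detector (assumption)."""
    boost = start_db
    reversals = []
    prev_dir = None
    while len(reversals) < max_reversals:
        correct = boost >= true_threshold_db   # ideal listener
        direction = -1 if correct else +1      # down when detected
        if prev_dir is not None and direction != prev_dir:
            reversals.append(boost)            # a reversal occurred
        prev_dir = direction
        boost += direction * step_db
    return sum(reversals[-avg_last:]) / avg_last

threshold = staircase(true_threshold_db=3.0)
```

With a real listener the responses near threshold are probabilistic rather than deterministic, which is exactly why the procedure averages several reversal levels instead of taking a single crossing.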
Poster

10:00am EDT

New Music Instrument Design & Creative Coding: Assessment of a Sensor-Based Audio and Visual System
Thursday October 10, 2024 10:00am - 12:00pm EDT
Technological developments within the modern age provide near-limitless possibilities for the design of new musical instruments and systems. The aim of the current study is to investigate the design, development, and testing of a unique audio-visual musical instrument. Five participants tested an initial prototype and completed semi-structured interviews providing user feedback on the product. A qualitative analysis of the interviews indicated findings in two primary areas: enjoyability and functionality. Aspects of the prototype commonly described by participants as enjoyable related to the product’s novelty, simplicity, use of distance sensors, and available interactivity with hardware components. Feedback on improving the prototype’s functionality commonly included adding delay and volume modules, making the product surround-sound compatible, and increasing lower-register control for certain sensors. These results informed design decisions in a secondary prototype-building stage. The study’s results suggest that designers and developers of new musical instruments should value novel rather than traditional instrument components, hardware rather than software-based interfaces, and simple rather than complex design controls. Ultimately, this paper provides guiding principles, broad themes, and important considerations in the development of enjoyable new musical instruments and systems.
Poster

10:00am EDT

Subjective Evaluation of Emotions in Music Generated by Artificial Intelligence
Thursday October 10, 2024 10:00am - 12:00pm EDT
Artificial Intelligence (AI) models offer consumers a resource that can generate music in a variety of genres and with a range of emotions from only a text prompt. However, emotion is a complex human phenomenon that becomes even more complex when conveyed through music. There is limited research assessing AI’s capability to generate music with emotion. Using specified target emotions, this study examined the validity of those emotions as expressed in AI-generated musical samples. Seven audio engineering graduate students listened to 144 AI-generated musical examples spanning sixteen emotions and three genres and reported their impression of the most appropriate emotion for each stimulus. Using Cohen’s kappa, minimal agreement was found between subjects and the AI. Results suggest that generating music with a specific emotion is still challenging for AI. Additionally, the AI model here appeared to operate with a predetermined group of musical samples linked to similar emotions. Discussion includes how this rapidly changing technology might be better studied in the future.
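Cohen's kappa, used above to measure subject-AI agreement, corrects raw agreement for the agreement expected by chance. A minimal sketch with invented labels (not the study's data):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Toy example: intended target emotions vs. one listener's responses
# (labels invented for illustration only).
intended = ["joy", "sad", "joy", "anger", "sad", "joy"]
reported = ["joy", "sad", "sad", "anger", "joy", "joy"]
kappa = cohens_kappa(intended, reported)
```

Kappa near 0 indicates chance-level agreement and 1 perfect agreement; "minimal agreement" in the abstract corresponds to values at the low end of this scale.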
Poster
 