AES Show 2024 NY has ended
Exhibits+ badges provide access to the ADAM Audio Immersive Room, the Genelec Immersive Room, Tech Tours, and the presentations on the Main Stage.

All Access badges provide access to all content in the Program (Tech Tours still require registration).

View the Exhibit Floor Plan.
Tuesday, October 8
 

9:00am EDT

Student Welcome Meeting
Tuesday October 8, 2024 9:00am - 9:30am EDT
Students! Join us so we can find out where you are from, and tell you about all the exciting things happening at the convention.
Speakers

Ian Corbett

Coordinator & Professor, Audio Engineering & Music Technology, Kansas City Kansas Community College
Dr. Ian Corbett is the Coordinator and Professor of Audio Engineering and Music Technology at Kansas City Kansas Community College. He also owns and operates off-beat-open-hats LLC, providing live sound, recording, and audio production services to clients in the Kansas City area... Read More →

Angela Piva

Angela Piva, Audio Pro/Audio Professor, highly skilled in all aspects of music & audio production, recording, mixing and mastering with over 35 years of professional audio engineering experience and accolades. Known as an innovator in sound technology, and for contributing to the... Read More →
Tuesday October 8, 2024 9:00am - 9:30am EDT
1E08

9:45am EDT

Introducing the 2025 AES AI and ML for Audio Conference and its New Format
Tuesday October 8, 2024 9:45am - 10:30am EDT
The upcoming 2025 AES International Conference on Artificial Intelligence and Machine Learning for Audio (AIMLA) aims to foster a collaborative environment where researchers and practitioners from academia and industry can converge to share their latest work in Artificial Intelligence (AI) and Machine Learning (ML) for Audio.

We want to advertise the upcoming AIMLA at the AES Show, to encourage early involvement and awareness from the AES community. To better accommodate the central themes of the conference, we propose new additions to the typical AES proceedings, such as challenges and long workshops, that can more appropriately showcase the rapidly growing state of the art. In this presentation, we plan to give an overview and a discussion space about the upcoming conference, and the changes we want to bring into play, tailored for AI/ML research communities, with references to successfully organized cases outside of AES. Finally, we propose a standardized template with guidelines for hosting crowdsourced challenges and presenting long workshops.

Challenges are a staple in the ML/AI community, providing a platform where specific problems are tackled by multiple teams who develop and submit models to address the given issue. These events not only spur competition but also encourage collaboration and knowledge sharing, ultimately driving forward the collective understanding and capabilities of the community.

Complementing the challenges, we introduce long-format workshops to exchange knowledge about emerging AI approaches in audio. These workshops can help develop novel approaches from the ground up and produce high-quality material for diffusion among participants. Both additions could help the conference become an exciting and beneficial event at the forefront of AI/ML for audio, as they intend to cultivate a setting where ideas can be exchanged effectively, drawing inspiration from established conferences such as ISMIR, DCASE, and ICASSP, which have successfully fostered AI/ML communities.

As evidenced by the recent AES International Symposium on AI and the Musician, we believe AI and ML will play an increasingly important role in audio and music engineering. To facilitate and standardize the procedures for featuring and conducting challenges and long-form workshops, we will present a complete guideline for hosting long-form workshops and challenges at AES conferences.

Our final goal is to promote the upcoming 2025 International Conference on AI and Machine Learning for Audio, generate a space to discuss the new additions and ideas, connect with interested parties, advertise and provide guidelines regarding the calls for crowd-sourced challenges and workshops, and ultimately get feedback from the AES as a whole to tailor the new conference to the requirements of both our AES and the AI/ML communities.
Speakers

Soumya Sai Vanka

PhD Researcher, Queen Mary University of London
I am a doctoral researcher at the Centre for Digital Music, Queen Mary University of London under the AI and Music Centre for Doctoral Training Program. My research focuses on the design of user-centric context-aware AI-based tools for music production. As a hobbyist musician and producer myself, I am interested in developing tools that can support creativity and collaboration resulting in emergence and novelty. I am also interested... Read More →

Franco Caspe

Student, Queen Mary University of London
I’m an electronic engineer, a maker, hobbyist musician and a PhD Student at the Artificial Intelligence and Music CDT at Queen Mary University of London. I have experience in development of real-time systems for applications such as communication, neural network inference, and DSP... Read More →

Brecht De Man

Head of Research, PXL University of Applied Sciences and Arts
Brecht is an audio engineer with a broad background comprising research, software development, management and creative practice. He holds a PhD from the Centre for Digital Music at Queen Mary University of London on the topic of intelligent software tools for music production, and... Read More →
Tuesday October 8, 2024 9:45am - 10:30am EDT
1E08

10:45am EDT

Generative AI For Novel Audio Content Creation
Tuesday October 8, 2024 10:45am - 11:45am EDT
The presence and hype associated with generative AI across most forms of recorded media have become undeniable realities. Generative AI tools are becoming increasingly more prevalent, with applications ranging from conversational chatbots to text-to-image generation. More recently, we have witnessed an influx of generative audio models which have the potential of disrupting how music may be created in the very near future. In this talk, we will highlight some of the core technologies that enable novel audio content creation for music production, reviewing some seminal text-to-music works from the past year. We will then delve deeper into common research themes and subsequent works which intend to map these technologies closer to musicians’ needs.


We will begin the talk by outlining a common framework underlying the generative audio models that we will touch on, consisting of an audio synthesizer “back-end” paired with a latent representation modeling “front-end.” Accordingly, we will give an overview of two primary forms of back-end, neural audio codecs and variational auto-encoders (with examples), and illustrate how they pair naturally with transformer language model (LM) and latent diffusion model (LDM) front-ends, respectively. Furthermore, we will briefly touch on CLAP and T5 embeddings as conditioning signals that enable text as an input interface, and explain the means by which they are integrated into modern text-to-audio systems.
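To make the back-end/front-end pairing concrete, here is a deliberately tiny sketch (not from the talk, and far simpler than the systems it describes): mu-law quantization stands in for a neural audio codec "back-end," and a bigram token model stands in for a transformer LM "front-end." The function names and the toy sine-tone dataset are illustrative assumptions only.

```python
# Illustrative sketch only (not from the talk): the "back-end" is a toy
# tokenizer/decoder standing in for a neural audio codec, and the
# "front-end" is a bigram sequence model standing in for a transformer LM.
import numpy as np

MU, LEVELS = 255.0, 256          # 8-bit mu-law "codec"

def encode(audio):               # back-end: waveform -> discrete tokens
    companded = np.sign(audio) * np.log1p(MU * np.abs(audio)) / np.log1p(MU)
    return np.round((companded + 1) / 2 * (LEVELS - 1)).astype(int)

def decode(tokens):              # back-end: discrete tokens -> waveform
    companded = tokens / (LEVELS - 1) * 2 - 1
    return np.sign(companded) * np.expm1(np.abs(companded) * np.log1p(MU)) / MU

def fit_bigram(tokens):          # front-end: model p(next token | current token)
    counts = np.ones((LEVELS, LEVELS))            # Laplace smoothing
    for a, b in zip(tokens[:-1], tokens[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sample(probs, start, n, rng):                 # front-end: generate new tokens
    seq = [start]
    for _ in range(n - 1):
        seq.append(rng.choice(LEVELS, p=probs[seq[-1]]))
    return np.array(seq)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
training_audio = 0.5 * np.sin(2 * np.pi * 220 * t)    # toy "dataset": one sine tone
tokens = encode(training_audio)
lm = fit_bigram(tokens)
generated = decode(sample(lm, tokens[0], 16000, rng))  # novel (if noisy) audio
```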

Next, we will review some seminal works that have been released within the past year(s) (primarily in the field of text-to-music generation), and roughly categorize them according to the common framework that we have built up thus far. At the time of writing this proposal, we would naturally consider MusicLM/FX (LM), MusicGen (LM), Stable Audio (LDM), etc. as exemplary candidates for review. We will contextualize these new capabilities in terms of what they can enable for music production and opportunities for future improvements. Accordingly, we will draw on some subsequent works that intend on meeting musicians a bit closer to the creative process. At the time of writing this proposal, this may include but is not limited to ControlNet (LDM), SingSong (LM), StemGen (LM), VampNet (LM), as well as our own previous work, as approved time permits. We will cap off our talk by providing some perspectives on what AI researchers could stand to understand about music creators, and what musicians could stand to understand about scientific research. Time permitting, we may allow ourselves to conduct a live coding demonstration whereby we exemplify constructing, training, and inferring audio examples from a generative audio model on a toy data example leveraging several prevalent open source libraries.


We hope that such a talk would be both accessible and fruitful for technologists and musicians alike. It would assume no background knowledge in generative modeling, and may perhaps assume only the most notional conception as to how machine learning works. The goal of this talk would be for the audience at large to walk out with a rough understanding of the underlying technologies and challenges associated with novel audio content creation using generative AI.
Tuesday October 8, 2024 10:45am - 11:45am EDT
1E08

2:00pm EDT

Designing With Constraints in an Era of Abundance
Tuesday October 8, 2024 2:00pm - 3:00pm EDT
Every year, the availability and capabilities of processors and sensors expand greatly while their cost decreases correspondingly. As an instrument designer, the temptation to include every possible advance is an obvious one, but one that comes at the cost of additional complexity and, probably more concerningly, reduced *character*.

Looking back on electronic instruments from the past, what we love about the classics are the quirks and idiosyncrasies that come from the technical limitations of the time and how the designers ended up leveraging those limitations creatively. This panel will bring together designers to discuss how we are self-imposing constraints on our designs in the face of that temptation to include everything that's possible.

As Brian Eno said: "Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit - all of these will be cherished and emulated as soon as they can be avoided."
Speakers

Brett Porter

Lead Software Engineer, Artiphon
Brett g Porter is a software developer and engineering manager with 3 decades of experience in the pro audio/music instrument industry; currently Lead Software Engineer at Artiphon, he leads the team that develops companion applications for the company's family of instruments. Previously... Read More →

Alexandra Fierra

Eternal Research

Adam McHeffey

CMO, Artiphon

Ben Neill

Former Professor, Ramapo College
Composer/performer Ben Neill is the inventor of the mutantrumpet, a hybrid electro-acoustic instrument, and is widely recognized as a musical innovator through his recordings, performances and installations. Neill has recorded ten CDs of his music on the Universal/Verve, Thirsty Ear... Read More →

Nick Yulman

Kickstarter
Nick Yulman has worked with Kickstarter’s community of creators for the last ten years and currently leads the company’s Outreach team, helping designers, technologists, and artists of all kinds bring their ideas to life through crowdfunding. He was previously Kickstarter’s... Read More →
Tuesday October 8, 2024 2:00pm - 3:00pm EDT
1E08

3:15pm EDT

Reporting from the Frontlines of the Recording Industry: Hear from Studio Owners and Chief Engineers on Present / Future Trends, Best Practices, and the State of the Recording Industry
Tuesday October 8, 2024 3:15pm - 4:15pm EDT
Panel Description:
Join us for an in-depth, candid discussion with prominent NYC studio owners on the state of the recording industry today and where it might go tomorrow. This panel will explore trends in studio bookings, predictions for the future, and the evolving landscape of recording studios. Our panelists will share their experiences of owning and managing studios, offering valuable insights for engineers on essential skills and best practices for success in today’s recording environment. We'll also discuss the importance of high-quality recording, and how these practices are crucial in maintaining the soul of music against the backdrop of AI and digital homogenization.

Topics also include:
What are studios looking for from aspiring engineers, and what does it take to work in a major market studio?
Studio best practices from deliverables to client relations
To Atmos or not to Atmos?
Proper crediting for engineers and staff
Working with record labels and film studios
Can recording and mix engineers find the same labor protections as the film industry?

Panelists (in alphabetical order)

Amon Drum, Owner / Engineer, The Bridge Studio (https://www.bridgerecordingstudio.com/)
Amon Drum is a recordist, producer, acoustic designer, owner, Chief Engineer, and designer of The Bridge Studio in Williamsburg, Brooklyn. He has expertise in both analog and digital recording techniques. Amon specializes in the recording of non-western acoustic instruments, large ensembles, and live music for video. Amon is also a percussionist, having trained with master musicians including M’Bemba Bangora and Mamady Keita in Guinea, West Africa, and brings this folkloric training to his recordings & productions. He has worked with a wide variety of artists and genres, from Jason Moran to Adi Oaisis, Run The Jewels, and Ill Divo, as well as many Afro-Cuban ensembles from the diaspora.

Ben Kane, Owner, Electric Garden (https://electricgarden.com/)
Ben Kane is a Grammy Award winning recording and mix engineer, producer, and owner of Electric Garden, a prominent recording studio known for its focus on excellence in technical and engineering standards, and a unique handcrafted ambiance. Kane is known for his work with D'Angelo, Emily King, Chris Dave, and PJ Morton.

Shahzad Ismaily, Owner / Engineer, Figure 8 Recording (https://www.figure8recording.com/)
Shahzad Ismaily is a Grammy-nominated multi-instrumentalist, composer, owner and engineer at Figure 8 Recording in Brooklyn. Renowned for his collaborative spirit and eclectic musical range, Shahzad has contributed his unique sonic vision to projects spanning various genres and artists worldwide.

Zukye Ardella, Partner / Engineer, s5studio (https://www.s5studiony.com/)
Zukye is a New York City based certified gold audio engineer, music producer and commercial studio owner. In 2015, she began her professional career at the original s5studio located in Brooklyn, New York. Zukye teamed up with s5studio’s founder (Sonny Carson) to move s5studio to its current location in Chelsea, Manhattan. Over the years, Ardella has been an avid spokesperson for female empowerment organizations such as Women’s Audio Mission and She is the Music. Talents she’s worked with include NeYo, WizKid, Wale, Conway The Machine, A$AP Ferg, Lil Tecca, AZ, Dave East, Phillip Lawrence, Tay Keith, Lola Brooke, Princess Nokia, Vory, RMR, Yeat, DJ KaySlay, Kyrie Irving, Maliibu Mitch, Flipp Dinero, Fred The Godson, Jerry Wonda, ASAP 12vy and more.


--

Moderator:

Mona Kayhan, Owner, The Bridge Studio (https://www.bridgerecordingstudio.com/)
Mona is a talent manager for Grammy award winning artists, and consults for music and media companies in marketing and operations. She has experience as a tour manager, an international festival producer, and got her start in NYC working at Putumayo World Music. As an owner of The Bridge Studio, Mona focuses on client relationships, studio operations, and strategic partnerships. She also has an insatiable drive to support and advocate for the recording arts industry.
Tuesday October 8, 2024 3:15pm - 4:15pm EDT
1E08

4:30pm EDT

How Do We Embrace AI and Make It Work For Us in Audio? Hey AI, Get Yer Stinkin’ Hands Offa My Job! AES and SMPTE Joint Exploration
Tuesday October 8, 2024 4:30pm - 5:30pm EDT
In this timely panel, let’s discuss some of the pressing matters AI confronts us with in audio, and how we can turn a perceived foe into an ally.
We’ll discuss issues including:
During production, AI cannot deal with any artistic issues related to production and engineering, most of which depend on personal interaction, as well as perception.

In post-production, AI could be of use in repetitive tasks: making a track conform to a click-track while maintaining proper pitch, performing pitch correction on a track, dealing with extraneous clicks (without removing important vocal consonants), and performing ambience-matching (particularly on live recordings), to name a few; a minimal sketch of one such task appears after this list. Can AI running in the background on our DAW build macros for us?

The more we can use it as a tool for creativity and enhance our revenue streams, the more it becomes a practical, positive approach. Many composers are using it to create or enhance their musical ideas almost instantaneously. The key here is that they are our ideas that AI adds to.

How do we embrace AI, adapt to it, and help it to adapt to us? Can we get to the point where we incorporate it as we have past innovations rather than fear it? How do we take control of AI instead of AI taking control of us?

What should we, the audio community, be asking AI to do?
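As a concrete (and decidedly non-AI) illustration of one of the repetitive post-production tasks listed above, the sketch below flags and crudely repairs extraneous clicks by thresholding the sample-to-sample difference. Real de-clickers are far more sophisticated; the function names, thresholds, and toy signal here are illustrative assumptions only.

```python
# Minimal illustration of one repetitive task mentioned above: flagging
# candidate clicks by thresholding the sample-to-sample difference, then
# repairing them with linear interpolation. A sketch of drudge work that
# could be automated, not a production-quality de-clicker.
import numpy as np

def find_clicks(audio, threshold=0.5, min_gap=32):
    """Return sample indices where the waveform jumps abruptly."""
    jumps = np.abs(np.diff(audio))
    hits = np.flatnonzero(jumps > threshold)
    # Collapse runs of neighbouring hits into a single click location.
    clicks, last = [], -min_gap
    for h in hits:
        if h - last >= min_gap:
            clicks.append(int(h))
        last = h
    return clicks

def repair_clicks(audio, clicks, width=16):
    """Crudely repair each click by interpolating across it."""
    fixed = audio.copy()
    for c in clicks:
        lo, hi = max(c - width, 0), min(c + width, len(audio) - 1)
        fixed[lo:hi + 1] = np.linspace(audio[lo], audio[hi], hi - lo + 1)
    return fixed

# Toy usage: a sine tone with two artificial clicks injected.
t = np.linspace(0, 1, 48000, endpoint=False)
audio = 0.3 * np.sin(2 * np.pi * 440 * t)
audio[12000] += 0.9
audio[30000] -= 0.9
print(find_clicks(audio))   # -> indices near 12000 and 30000
clean = repair_clicks(audio, find_clicks(audio))
```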
Speakers

Gary Gottlieb

AES President-Elect, Mendocino College
President-Elect, Co-Chair of the Events Coordination Committee, Chair of the Conference Policy Committee, and former Vice President of the Eastern Region, US and Canada; AES Fellow, Engineer, Author, Educator and Guest Speaker Gary Gottlieb refers to himself as a music generalist... Read More →

Lenise Bent

Producer/Engineer/Editor/AES Governor, Soundflo Productions
Topics: Audio Recording History; Women and Diversity in Audio; Analog Tape Recording; Post Production/Sound Design/Foley; Vinyl Records; Audio Recording Archiving, Repair and Preservation; Basic and Essential Recording Techniques; Opportunities in the Audio Industry; Audio Adventurers

Franco Caspe

Student, Queen Mary University of London
I’m an electronic engineer, a maker, hobbyist musician and a PhD Student at the Artificial Intelligence and Music CDT at Queen Mary University of London. I have experience in development of real-time systems for applications such as communication, neural network inference, and DSP... Read More →

Soumya Sai Vanka

PhD Researcher, Queen Mary University of London
I am a doctoral researcher at the Centre for Digital Music, Queen Mary University of London under the AI and Music Centre for Doctoral Training Program. My research focuses on the design of user-centric context-aware AI-based tools for music production. As a hobbyist musician and producer myself, I am interested in developing tools that can support creativity and collaboration resulting in emergence and novelty. I am also interested... Read More →
Tuesday October 8, 2024 4:30pm - 5:30pm EDT
1E08
 
Wednesday, October 9
 

9:00am EDT

Sound Design for Multi-Participant Extended Reality Experiences
Wednesday October 9, 2024 9:00am - 10:00am EDT
This workshop explores the unique challenges and opportunities in sound design and technical implementation for multi-participant extended reality (XR) experiences. Participants will learn techniques for creating immersive and interactive spatial audio to enhance presence and facilitate communication and collaboration in multi-participant XR environments. Through demonstrations and an expert-led panel discussion, attendees will gain insights into creating auditory experiences in XR.
Speakers

Agnieszka Roginska

Professor, New York University
Agnieszka Roginska is a Professor of Music Technology at New York University. She conducts research in the simulation and applications of immersive and 3D audio including the capture, analysis and synthesis of auditory environments, auditory displays and applications in augmented... Read More →

Yi Wu

PhD Student, New York University

Parichat Songmuang

Studio Manager/PhD Student, New York University
Parichat Songmuang graduated from New York University with her Master of Music degree in Music Technology and an Advanced Certificate in Tonmeister Studies. As an undergraduate, she studied for her Bachelor of Science in Electronics Media and Film with a concentration... Read More →
Wednesday October 9, 2024 9:00am - 10:00am EDT
1E08

10:15am EDT

The Soundsystem as Survival and Resistance: A Black, Afro-Asian, and Indigenous Audio History
Wednesday October 9, 2024 10:15am - 11:00am EDT
Short Abstract:
This presentation explores Black, Indigenous, Afro-Asian, and Afro-Indigenous cross-cultural collaboration leading to influential innovations in sonic creative technology. Centering on the soundsystem as a form of self-determination and as a contemporary art form rooted in survival and resistance, the presentation aims to illuminate meaningful connections between ancestral and state-of-the-art sound technologies through the lens of critical decolonization theory. Highlighting innovations ranging from parametric EQ, radio, and the sound system as both cultural and analog technologies, the presentation shows how these could only exist through mutual influence and the building of nations and kinship across cultures bound by mutual experiences of oppression. These artistic traditions in music, gathering, and engineering emerged and continue to act as dynamic tools of survivance within Black and globally Indigenous communities. By skillfully using these tools, practitioners of these electronic and sonic art forms destabilize racialized boundaries and empower the collective ownership of cultural narratives—a fearless pursuit reflecting a shared vision of Black and Indigenous self-determination, liberation, and futurity for kin past, present, and future.

Extension:
The presentation’s audiovisual materials will comprise two aspects: archival materials from music and resistance movements in the featured locations and contemporary artworks dealing with identity politics and protest in connection with music. Featured locations will include, but are not limited to: Indian Country, USA & Canada; Kingston, Jamaica; Notting Hill, UK; and Detroit, MI. The timeline of technological innovation will center on audio advancements in amplification and transmission within the post-transistor era (1947 - present), with a historical focus primarily from 1947 to 1996. Featured BIPOC artists and innovators in electrical engineering in the sonic arts will include, but are not limited to: Hedley Jones, Tom Wong, and Don Lewis. Featured entrepreneurial artists in audio engineering, radio, recording, music and storytelling will include, but are not limited to: Vy Higgenson, Kemistry & Storm, Jeff Mills, K-Hand, and Patricia Chin. Featured Black, Asian and Indigenous owned radio stations and music labels that function as both mutual aid and community spaces will include, but are not limited to: Bvlbancha Liberation Radio, Cool Runnings Music, KTNN, VP Records, and the Sound System Futures Programme.

The objective of this session is to provide a brief overview mapping sound histories to the BIPOC innovators of origin with the aim of sparking generative discussion on Black and Indigenous creative technologies as both a cultural and computational history valuable and critical to the healing of community and dismantling of hegemonic societal structures. The session is structured largely in response to a critical lack of representation regarding BIPOC engineering history and legacy, and approaches this topic as a celebration of cross-cultural collaboration in a historical paradigm that often centers cross-cultural interactions as conflict-oriented. At the presentation’s conclusion participants will be encouraged to reflect on their own cross-cultural legacies as sources of innovation, as well as reflect on the unactivated dreams they may hold from their ancestors and how they might activate these dreams through sound, collective action, and community gathering.
Speakers
Wednesday October 9, 2024 10:15am - 11:00am EDT
1E08

11:15am EDT

Immersive Voice and Audio Services (IVAS) Codec – A deeper look into the new 3GPP Standard for Immersive Communication
Wednesday October 9, 2024 11:15am - 12:45pm EDT
3GPP, the worldwide partnership project among seven major regional telecommunications standard development organizations responsible for mobile communication standards, has recently completed the standardization of a new codec intended for Immersive Voice and Audio Services (IVAS) in 5G systems. The new IVAS codec is a feature of 3GPP Release 18 (5G-Advanced) and enables completely new service scenarios by providing capabilities for interactive stereo and immersive audio communications, content sharing and distribution. The addressed service scenarios include conversational voice with stereo and immersive telephony/conferencing, XR (VR/AR/MR) communications, i.e., XR telephony and conferencing, and live and non-live streaming of user-generated immersive and XR content.
The IVAS Codec is an extension of the 3GPP EVS codec. Beyond support for mono voice and audio content, IVAS also brings support for coding and rendering of stereo, multichannel-based audio, scene-based audio (Ambisonics), object-based audio and metadata-assisted spatial audio – a new parametric audio format designed for direct spatial audio pick-up from smartphones. The performance of the codec was evaluated in the course of the 3GPP standardization in 23 experiments, each carried out by two independent laboratories.
The workshop will introduce the IVAS Codec starting with a brief overview of standardization process and state of the art. The codec architecture, supported audio formats, and main technology blocks behind the supported coding and rendering options will be presented and discussed in detail together with test results and capabilities of the new codec in terms of compression efficiency and audio quality. In addition, the panelists will discuss new service scenarios for immersive audio communication and other mobile use-cases. If possible, immersive audio demonstrations will be carried out during the workshop. The panelists will collect and address questions and other input from the audience.
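As a point of reference for one of the formats named above, the sketch below shows what scene-based audio means at first order: a mono source encoded into four Ambisonic B-format channels. This is a textbook illustration, not anything from the IVAS specification; ACN channel ordering and SN3D normalisation are assumed, and the function name is hypothetical.

```python
# Not from the IVAS specification: a small reminder of what scene-based
# audio (first-order Ambisonics) looks like. A mono source at a given
# azimuth/elevation is encoded into four B-format channels (ACN order,
# SN3D normalisation assumed here).
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono                                  # ACN 0: omnidirectional
    y = mono * np.sin(az) * np.cos(el)        # ACN 1: left/right
    z = mono * np.sin(el)                     # ACN 2: up/down
    x = mono * np.cos(az) * np.cos(el)        # ACN 3: front/back
    return np.stack([w, y, z, x])

t = np.linspace(0, 1, 48000, endpoint=False)
voice = 0.25 * np.sin(2 * np.pi * 200 * t)
bformat = encode_foa(voice, azimuth_deg=45.0, elevation_deg=0.0)  # front-left
print(bformat.shape)   # (4, 48000)
```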
Speakers

Adriana Vasilache

Nokia Technologies

Andrea Felice Genovese

Andrea Genovese is a Senior Research Engineer at Qualcomm.
Wednesday October 9, 2024 11:15am - 12:45pm EDT
1E08

2:00pm EDT

Acoustical Simulation and Calculation Techniques for Small Spaces
Wednesday October 9, 2024 2:00pm - 3:00pm EDT
This presentation provides an insightful overview of statistical acoustic calculation techniques and geometrical acoustic modeling. It addresses the unique challenges associated with small to medium-sized spaces and delves into the intricacies of contemporary ray tracing software tools. Attendees will also learn about the latest advancements in using these tools in immersive audio environments. Additionally, the presentation will showcase comparative low-frequency analysis calculations obtained using a variety of tools, including boundary element methods (BEM), finite element methods (FEM) and others. The session includes a series of real-world case studies to illustrate these concepts in practical applications.
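For a feel of the low-frequency problem in small rooms before the session, the sketch below computes rectangular-room mode frequencies from the textbook rigid-wall formula f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). It is only a starting point; the BEM/FEM tools discussed in the presentation go far beyond it, and the room dimensions and speed of sound here are assumed example values.

```python
# Illustrative only: the textbook starting point for low-frequency analysis
# of a small rectangular room with rigid walls. Real BEM/FEM analysis goes
# far beyond this, but the modal formula shows why small rooms are hard.
from itertools import product

def room_modes(lx, ly, lz, fmax=200.0, c=343.0):
    """List rectangular-room mode frequencies (Hz) up to fmax."""
    modes = []
    for nx, ny, nz in product(range(0, 11), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = (c / 2.0) * ((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2) ** 0.5
        if f <= fmax:
            modes.append((round(f, 1), (nx, ny, nz)))
    return sorted(modes)

# A typical small control room, roughly 5 m x 4 m x 2.7 m (assumed values).
for freq, (nx, ny, nz) in room_modes(5.0, 4.0, 2.7, fmax=120.0):
    print(f"{freq:6.1f} Hz  mode ({nx},{ny},{nz})")
```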
Speakers

Peter D'Antonio

Director of Research, REDIacoustics
Dr. Peter D’Antonio is a pioneering sound diffusion expert. He received his B.S. from St. John’s University in 1963 and his Ph.D. from the Polytechnic Institute of Brooklyn, in 1967. During his scientific career as a diffraction physicist, Dr. D’Antonio was a member of the Laboratory... Read More →

Dirk Noy

Managing Director Europe, WSDG
Dirk Noy has a Master of Science (MSc) Diploma in Experimental Solid State Physics from the University of Basel, Switzerland and is a graduate from Full Sail University, Orlando, USA, where he was one of John Storyk’s students. Since joining the consulting firm Walters-Storyk... Read More →

Stefan Feistel

Managing Director/Partner/Co-Founder, AFMG
Stefan Feistel is Managing Director/Partner/Co-Founder of AFMG, Berlin Germany. He is an expert in acoustical simulation and calculation techniques and applications.
Wednesday October 9, 2024 2:00pm - 3:00pm EDT
1E08

3:15pm EDT

Remote Live Events for Broadcast-music/spoken word
Wednesday October 9, 2024 3:15pm - 4:15pm EDT
With upwards of fifty remotes each year, NY Public Radio broadcasts a great variety of events, from the very simple to large-scale productions that involve many different challenges: connectivity and internet issues in getting audio back to the studios for live broadcasts; microphone techniques for the greatest orchestras in the world at Carnegie Hall; dealing with the elements during live broadcasts in Central Park and other outdoor locations around NYC; cooperating with house PAs and unions at various sites; and broadcasts at a politician’s residence. Each of these kinds of events might entail unique technical and personnel challenges, but there are also common-sense approaches to working with each of them, as well as new technologies that, as they come along, simplify things and hopefully make these challenges easier to deal with over the years.

We will discuss in detail two recent events: "This Land," a live broadcast of classical, jazz, and Americana from Brooklyn Bridge Park, and World Orchestra Week (WOW!), a youth orchestra festival from Carnegie Hall, contrasting indoor vs. outdoor and amplified vs. unamplified.
Speakers

Edward Haber

For 43 years I was an engineer at WNYC, then New York Public Radio (encompassing WNYC, WQXR, and NJPR); and for the last 36 of those years in charge of WNYC and latterly WQXR's remote recording, including recording the best orchestras in the world as part of WQXR's Carnegie Live radio... Read More →
Wednesday October 9, 2024 3:15pm - 4:15pm EDT
1E08

4:30pm EDT

7 Non-Traditional Production Methods w/ QuestionATL
Wednesday October 9, 2024 4:30pm - 5:30pm EDT
Some engineers and producers take advantage of a few of the customizations and optimizations of their machines (laptops, desktops, mobile recorders, tablets, etc.). However, unlocking the deeper settings of your production machine can lead to greater efficiency and productivity with cutting-edge applications and technologies. QuestionATL will take attendees through a variety of production optimizations, including for mobile devices when working remotely with artists who need to provide high-resolution content without having access to a studio or professional-grade gear.
Speakers

Question Atl

QuestionATL Music
QuestionATL is a blind Rap Artist & Producer. He is self-taught on several instruments, began freestyling at 5 and making beats at 12. He has won over 20 beat battles and produced for several artists. QuestionATL released his debut project, The Dream, on all streaming platforms as... Read More →
Wednesday October 9, 2024 4:30pm - 5:30pm EDT
1E08

6:00pm EDT

Richard Heyser Memorial Lecture: Doohickii Digitali Redux
Wednesday October 9, 2024 6:00pm - 7:30pm EDT
Anthony Agnello and Richard Factor have over 100 combined years as members of the AES. They have been fortunate to witness and, in some cases, perpetrate the evolution of audio engineering from electro-mechanical to digital. In May of 1979, they published an article in db Magazine, “Doohickii Digitali”, that concluded:

"… we have summarized some of the possibilities of digital signal processing (and completely ignored digital recording, a probably more important subject).  While we don’t have the “all digital” studio yet, it can be reasonably foreseen that before too long, all the components will be available to make one, so that the only analog elements remaining will be the microphone and speaker.  Since these are the primary limitations of the recording process at present, we anxiously await the development of a standard digital I/O bus for the human element."


In this lecture, they will pick up not only where they left off, but also before they began.
Speakers

Anthony Agnello

Managing Director, Eventide
Tony Agnello was born in Brooklyn, graduated from Brooklyn Technical High School in 1966, received the BSEE from City College of NY in 1971, the MSEE from the City University of NY in 1974 followed by post graduate studies in Digital Signal Processing at Brooklyn’s Polytechnical... Read More →

Richard Factor

Chairman / Prime Fossil, Eventide
Richard Factor was born in 1945 and missed being a baby boomer by only weeks.  He lived and was schooled in Manhattan and the Bronx, NYC, until he moved to New Jersey at age 40 and then to Sedona, Arizona in his 60s.  He obtained a degree in Economics but preferred to pursue broadcasting... Read More →
Wednesday October 9, 2024 6:00pm - 7:30pm EDT
1E08
 
Thursday, October 10
 

9:00am EDT

DEI work in Audio Engineering Higher Education
Thursday October 10, 2024 9:00am - 10:30am EDT
This panel introduces DEI pedagogical examples in teaching audio engineering and music technology from the University of Colorado, the University of Lethbridge, Georgia Tech, and the University of Michigan, which incorporate topics that address diversity, equity, inclusion (DEI), and accessibility in the broader field of audio engineering and music technology. These topics include 1) how Open Educational Resources (OER) pedagogy can help bridge the Music Technology Educational Gap and increase accessibility; 2) how to create a safe learning space and build community in the classroom where ideas can be shared and valued; 3) how the Project Studio Music Technology course contributes female-identifying, trans, and non-binary musicians and engineers to electronic music and how this process-based learning method impacts students from diverse backgrounds; and 4) how the new course called "Diversity in Music Technology" brings students into new DEI experiences and how the course was structured to include community-building. Through this panel discussion, we strive to improve accessibility, welcome diverse perspectives, and radiate inclusiveness to all races, genders, and gender identities. Meanwhile, we aim to provide some insights for educators to improve students' mental health and well-being during the post-COVID era.
Speakers

Jiayue Cecilia Wu

Assistant Professor, Graduate Program Director (MSRA), University of Colorado Denver
Originally from Beijing, Dr. Jiayue Cecilia Wu (AKA: 武小慈) is a scholar, composer, audio engineer, and multimedia technologist. Her work focuses on how technology can augment the healing power of music. She earned her Bachelor of Science degree in Design and Engineering in 2000... Read More →

Mary Mazurek

Audio Educator/ Recording Engineer, University of Lethbridge
Audio Educator/ Recording Engineer

Alexandria Smith

Assistant Professor, Georgia Institute of Technology
Praised by The New York Times for her “appealingly melancholic sound” and “entertaining array of distortion effects,” Alexandria Smith is a multimedia artist, audio engineer, scholar, trumpeter, and educator who enjoys working at the intersection of all these disciplines... Read More →

Zeynep Özcan

Assistant Professor, University of Michigan
Thursday October 10, 2024 9:00am - 10:30am EDT
1E08

10:45am EDT

Take the Blue Pill - Interactive and Adaptive Music in the 21st Century
Thursday October 10, 2024 10:45am - 11:30am EDT
Games and music go together like macaroni and cheese. Music is a large part of what makes games fun. So, what are some of the hot-button issues that composers need to know about game music design and production? Good questions!

This is a must-attend for anyone in the field interested in trying to navigate the waters and steer a path toward more immersive and creative game music. Attendees will gain a more complete understanding of the issues that composers and game designers face when working together in teams. All who attend will gain a better understanding of how great music makes games more fun and increases playability.

This is for anyone involved with game audio -- calling all music composers, game designers, sound designers, animators, audio producers, programmers and technical implementation specialists! It's a must for industry professionals, students, and teachers alike, all of whom are trying to navigate the waters and steer a path toward deeper creativity and understanding.
Speakers

Steven Horowitz

Executive director of Technology and Applied Composition (TAC) program at SFCM, SFCM
Critically acclaimed as "One of the foremost figures in the field," composer Steven Horowitz is the executive director of the Technology and Applied Composition (TAC) program at SFCM, a position he took over in January 2024. Horowitz comes to SFCM after 23 years as Audio Director at Nickelodeon... Read More →
Thursday October 10, 2024 10:45am - 11:30am EDT
1E08

11:45am EDT

Protect Your Ass(ets): Disaster Planning and Material Recovery for the Audio Engineer
Thursday October 10, 2024 11:45am - 12:30pm EDT
Climate change is real and it’s here to stay. Floods are more severe because of sea level rise and intensifying storms. Fires are more frequent as droughts get worse. Basically, the world is collapsing. Good news! There are some simple steps you can take to protect your audio assets. This panel of audio preservation experts will cover easy archival strategies for mitigating the effects of natural disasters. We will discuss the best practices for safe storage, whether it’s your historic audio collection or newly-created digital files. The panel will also discuss some techniques for recovering your audio assets if they get damaged in a disaster. We hope you never have to use any of this information, but just in case, it’s best to be prepared!

This presentation will also highlight a case study from the “HBCU Radio Preservation Program.”
Speakers

Karl Fleck

Northeast Document Conservation Center
Karl Fleck is an audio preservation engineer at the Northeast Document Conservation Center (NEDCC) in Andover, Massachusetts. He specializes in the preservation/digitization of audio from magnetic and grooved formats. He presented “Speed and Configuration Changes: a Solution to... Read More →

Bryce Roe

Director of Audio Preservation Services, NEDCC
As the Director of Audio Preservation at NEDCC, Bryce supervises a staff of audio preservation engineers and specialists and manages NEDCC's audio preservation program, which uses both traditional playback and optical-scanning technologies (a.k.a., IRENE) to digitize at-risk audio... Read More →

Kelly Pribble

Director of Media Recovery Technology, Iron Mountain Entertainment Services (IMES)
Kelly Pribble, Director of Media Recovery Technology at Iron Mountain Entertainment Services (IMES), is a veteran studio engineer, studio builder, archivist and inventor. In March 2022, Kelly was issued a Patent for Media Recovery Technology. Before joining Iron Mountain, Kelly... Read More →
Thursday October 10, 2024 11:45am - 12:30pm EDT
1E08

1:30pm EDT

Enhancing and Optimizing Spatial Audio in Game Development: Best Practices and Innovations
Thursday October 10, 2024 1:30pm - 2:30pm EDT
This panel brings together leading industry experts to explore the critical role of spatial audio in enhancing the gaming experience. As games become more immersive, the demand for high-quality spatial audio has surged, making it an essential component for game developers. The discussion will delve into the intricacies of implementing spatial audio in both middleware and custom sound engines, addressing the challenges and solutions involved in achieving superior audio quality. Panelists will share insights on how to optimize spatial audio for various elements within a game, including sound effects, dialogue, and ambient sounds. Emphasis will be placed on customization techniques that ensure each audio component contributes to a cohesive and realistic auditory environment. A key focus of the panel will be on making spatial audio sound good across different types of games and game engines, highlighting the unique considerations and strategies for each scenario. Attendees will gain valuable knowledge on the latest advancements in spatial audio technology and practical tips for integrating these innovations into their game development workflows. Whether you are an audio engineer, game developer, or sound designer, this panel will provide you with the tools and strategies needed to elevate the audio experience in your games.
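By way of background (not material from the panel), the sketch below shows two of the most basic building blocks a custom game sound engine implements before reaching for HRTF-based spatialization or middleware: inverse-distance attenuation and constant-power stereo panning from a listener-relative source position. The coordinate convention, function name, and rolloff model are illustrative assumptions.

```python
# Not from the panel: a minimal sketch of two basics any custom game sound
# engine implements before moving on to full HRTF spatialization --
# inverse-distance attenuation and constant-power stereo panning from a
# listener-relative source position.
import math

def spatialize_gains(source_xy, listener_xy, listener_facing_rad,
                     ref_distance=1.0, min_distance=0.25):
    """Return (left_gain, right_gain) for a source in the horizontal plane."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    distance = max(math.hypot(dx, dy), min_distance)
    attenuation = ref_distance / distance            # simple 1/r rolloff

    # Azimuth relative to the listener's facing direction
    # (counterclockwise positive, i.e. toward the listener's left).
    azimuth = math.atan2(dy, dx) - listener_facing_rad
    # Map azimuth to a pan position in [-1, 1] (+1 = hard right), then use
    # constant-power (equal-energy) panning so centred sources don't dip.
    pan = max(-1.0, min(1.0, -math.sin(azimuth)))
    angle = (pan + 1.0) * math.pi / 4.0              # 0 .. pi/2
    left, right = math.cos(angle), math.sin(angle)
    return attenuation * left, attenuation * right

# A footstep about 3 m away, slightly to the listener's right.
print(spatialize_gains((3.0, -1.0), (0.0, 0.0), listener_facing_rad=0.0))
```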
Speakers

Kaushik Sunder

VP of Engineering, Embody

Kevin Boettger

Spatial Audio Mastering Engineer, Embody

Gordon Durity

Head of Audio Content, Electronic Arts
Thursday October 10, 2024 1:30pm - 2:30pm EDT
1E08

2:45pm EDT

Get Smart! - Everything you wanted to know about game audio education but were afraid to ask!
Thursday October 10, 2024 2:45pm - 4:15pm EDT
Game Audio education programs have taken root and sprouted up all over the world. Game audio education is a hot topic. What are some of the latest training programs out there? What are the pros and cons of a degree program versus just surfing YouTube? I am already a teacher, how can I start a game audio program at my current school? Good questions! This panel brings together sound artists and entrepreneurs from some of the top private instructional institutions and teachers from some growing programs to discuss the latest and greatest educational models in audio for interactive media. Attendees will get a fantastic overview of what is being offered inside and outside of the traditional education system. This is a must for students and teachers alike, who are trying to navigate the waters and steer a path toward programs that are right for them in the shifting tides of audio for games and interactive media.
Speakers

Steven Horowitz

Executive director of Technology and Applied Composition (TAC) program at SFCM, SFCM
Critically acclaimed as "One of the foremost figures in the field," composer Steven Horowitz is the executive director of the Technology and Applied Composition (TAC) program at SFCM, a position he took over in January 2024. Horowitz comes to SFCM after 23 years as Audio Director at Nickelodeon... Read More →

Dafna Naphtali

Sound Artist / Composer
Dafna Naphtali is a sound-artist, vocalist, electronic musician and guitarist. As performer and composer of experimental, contemporary classical and improvised music since the mid-90’s, she creates custom Max/MSP programming for sound-processing of voice and other instruments. “luminary... Read More →

Alistair Hirst

Sr. Game Dev Relations Mgr, Dolby
Alistair Hirst has shipped over 44 games across 11 platforms, doing sound design, music composition, audio programming and integration. He launched the Need for Speed franchise as Audio Director during his 10 years at Electronic Arts.  He co-founded OMNI Audio, a game audio company... Read More →

Michele Darling

Chair, Electronic Production and Design, Berklee College of Music
Thursday October 10, 2024 2:45pm - 4:15pm EDT
1E08

4:30pm EDT

Cross-talk-cancellation for next-generation spatial audio content
Thursday October 10, 2024 4:30pm - 5:30pm EDT
Spatial audio reproduction is predominantly achieved through surround speaker setups or headphones. Surround setups offer high levels of immersion but can be impractical to set up and are unable to reproduce near-field binaural cues, such as whispers.
Conversely, while headphones are convenient, they struggle with sound externalization.

Cross-talk cancellation is a technique for reproducing binaural sounds via speakers, capable of delivering precise 3D audio, including both distant sounds (externalization) and near-field sounds (whispers). Despite its development over five decades ago, its commercial adoption has been slow. However, recent market innovations are enhancing user experiences significantly.
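For readers unfamiliar with the technique, here is a minimal frequency-domain sketch of the core crosstalk-cancellation idea: per frequency bin, invert the 2x2 matrix of loudspeaker-to-ear transfer functions so each binaural channel arrives (ideally) only at its intended ear. The transfer functions below are placeholder random data rather than measured HRTFs, the regularization value is assumed, and real systems (including listener-tracked ones) add much more.

```python
# Minimal sketch of the core crosstalk-cancellation idea: per frequency bin,
# invert the 2x2 matrix of loudspeaker-to-ear transfer functions so that the
# left binaural signal arrives (ideally) only at the left ear and vice versa.
# Placeholder random data stands in for measured HRTFs.
import numpy as np

def crosstalk_canceller(H, beta=0.005):
    """H: (n_bins, 2, 2) speaker-to-ear transfer matrix per frequency bin.
    Returns the regularized inverse filter matrix C with H @ C ~= I."""
    C = np.empty_like(H)
    identity = np.eye(2)
    for k in range(H.shape[0]):
        Hk = H[k]
        # Tikhonov-regularized inverse: (H^H H + beta I)^-1 H^H
        C[k] = np.linalg.solve(Hk.conj().T @ Hk + beta * identity,
                               Hk.conj().T)
    return C

# Placeholder transfer functions for 257 frequency bins (assumed data).
rng = np.random.default_rng(1)
H = rng.normal(size=(257, 2, 2)) + 1j * rng.normal(size=(257, 2, 2))
C = crosstalk_canceller(H)
print(np.round(H[0] @ C[0], 3))   # close to the identity matrix
```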

This panel aims to investigate the viability of cross-talk cancellation as an alternative to headphones and surround setups for high-quality binaural 3D audio, particularly in video games. We will engage experts from the gaming industry, spatial audio reproduction sector, and academia to gather insights on the method and its potential applications.
Speakers

Marcos Simon

CTO, Audioscenic

Kaushik Sunder

VP of Engineering, Embody

Edgar Choueiri

Princeton University

Jago Reed-Jones

R&D Engineer, Audioscenic
I am a Research & Development Engineer at Audioscenic, where we are developing the next generation of 3D Audio Technology using binaural audio over loudspeakers. In addition, I am finishing a PhD at Liverpool John Moores University looking at the use of neural networks to achieve binaural... Read More →
Thursday October 10, 2024 4:30pm - 5:30pm EDT
1E08
 