AES Show 2024 NY has ended
Exhibits+ badges provide access to the ADAM Audio Immersive Room, the Genelec Immersive Room, Tech Tours, and the presentations on the Main Stage.

All Access badges provide access to all content in the Program (Tech Tours still require registration).

1E16
Tuesday, October 8
 

9:00am EDT

Personalized Spatial Audio for Accessibility in Xbox Gaming
Tuesday October 8, 2024 9:00am - 10:00am EDT
In this enlightening panel, technologists from the Xbox platform and creatives from Xbox studios will join us to discuss how they are driving audio innovation and game sound design towards the vision of gaming for everyone. The discussion will focus on how personalized spatial audio can foster inclusivity by accommodating the unique auditory profiles of different ethnicities, genders, and age groups of gamers. By integrating these personalized Head-Related Transfer Functions (HRTFs) into audio middleware, we aim to enhance the Xbox gaming experience for all gamers. This approach not only enriches the auditory landscape but also breaks down barriers, making immersive gaming a truly inclusive experience. Join us as we explore the future of spatial audio on Xbox, where every gamer is heard and can fully engage with the immersive worlds we create.
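The core rendering step the abstract alludes to, applying a listener's personalized HRTFs to a sound source, can be sketched in plain Python. This is an illustrative toy, not Xbox's or any middleware's implementation, and the filter values are invented:

```python
# Minimal sketch (illustrative, not an actual game-audio pipeline): binaural
# rendering by convolving a mono signal with personalized head-related impulse
# responses (HRIRs), the time-domain counterpart of HRTFs. Pure Python for
# clarity; a real renderer would use partitioned FFT convolution per source.

def convolve(signal, hrir):
    """Direct-form FIR convolution: output length len(signal) + len(hrir) - 1."""
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(hrir):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Return (left, right) ear signals for one virtual source position."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs for a source off to the listener's left: the right ear hears a
# slightly delayed and attenuated copy (interaural time/level differences).
hrir_l = [1.0, 0.4]
hrir_r = [0.0, 0.6, 0.24]
left, right = render_binaural([1.0, 0.0, 0.5], hrir_l, hrir_r)
```

Personalization, in this framing, amounts to swapping in HRIR pairs measured or predicted for the individual listener rather than a generic head model.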
Speakers

Kaushik Sunder

VP of Engineering, Embody
Kapil Jain

CEO, Embody
We are Embody, and we believe in the power of AI to push the boundaries of immersive sound experience in gaming and entertainment. We are a team of data scientists, audio engineers, musicians and gamers who are building an ecosystem of tools and technologies for immersive entertainment...

Robert Ridihalgh

Technical Audio Specialist, Microsoft
A 33-year veteran of the games industry, Robert is an audio designer, composer, integrator, voice expert, and programmer with a passion for future audio technologies and audio accessibility.
Tuesday October 8, 2024 9:00am - 10:00am EDT
1E16

10:15am EDT

The Anatomy of a Recording Session: Where Audio Technology and Musical Creativity Intersect (Part III)
Tuesday October 8, 2024 10:15am - 11:45am EDT
Abstract: Humans and machines interact to create new and interesting music content. Here we look at video excerpts from a particular recording session where the behind-the-scenes action comes to the forefront. Artists working with each other, artists working with the producer and engineer, and the influence (good or bad) of the technology with which they work will all be discussed during the workshop.

Summary:
The workshop will center on a discussion of using recording studio sessions to study creativity as a collaborative, but often complex and subtle practice. Our interdisciplinary team of recording engineers/producers and musicologists aims to understand how the interactions of musicians, engineers, recording technology, and musical instruments shape a recording’s outcome. Statements by participant-observers and the analysis of video footage from recording sessions will provide the starting point for discussions. In addition to first-hand recollections by members of our team, we are interviewing musicians who participated in the sessions. The workshop will focus on both musical interactions and on the interpersonal dynamics that affect the flow of various contributions and ideas during the recording process. The technology used also plays a role and will be analyzed as part of the workshop.

The first workshop of this kind was a huge success in Helsinki at the 154th AES Convention: the room was packed, and we had an engaging discussion between panelists and audience members. Part II took place in Madrid (156th Convention), with many attendees saying it was “one of the highlights of the convention”. The workshop room was quite full, even though it was in the last timeslot of the last day.

For this third workshop we plan on looking under the hood of a recording session involving a funk band with a full horn section and three lead singers. We plan to dig deeply into the underlying events that percolate over time; the “quiet voices” that subtly influence the outcome of a recording session.
Along with our regular team of experts we are very excited to invite a guest panelist this time around – a venerable expert in music production, collaboration, and perception, Dr. Susan Rogers.
Speakers
Richard King

Professor, McGill University
Richard King is an educator, researcher, and Grammy Award-winning recording engineer. Richard has garnered Grammy Awards in various fields including Best Engineered Album in both the Classical and Non-Classical categories. Richard is an Associate Professor at the Schulich School...
Lisa Barg

Associate Professor, McGill University
Lisa Barg is Associate Professor of Music History and Musicology at the Schulich School of Music at McGill University and Associate Dean of Graduate Studies. She has published articles on race and modernist opera, Duke Ellington, Billy Strayhorn, Melba Liston and Paul Robeson. She...
David Brackett

Professor, McGill University
David Brackett is Professor of Music History and Musicology at the Schulich School of Music of McGill University, and Canada Research Chair in Popular Music Studies. His publications include Interpreting Popular Music (2000), The Pop, Rock, and Soul Reader: Histories and Debates...
Susan Rogers

Professor, Berklee Online
Susan Rogers holds a doctoral degree in experimental psychology from McGill University (2010). Prior to her science career, Susan was a multiplatinum-earning record producer, engineer, mixer and audio technician. She is best known for her work with Prince during his peak creative...
George Massenburg

Associate Professor of Sound Recording, Massenburg Design Works
George Y. Massenburg is a Grammy award-winning recording engineer and inventor. Working principally in Baltimore, Los Angeles, Nashville, and Macon, Georgia, Massenburg is widely known for submitting a paper to the Audio Engineering Society in 1972 regarding the parametric equali...
Tuesday October 8, 2024 10:15am - 11:45am EDT
1E16

2:00pm EDT

Bridging the Gap: Lessons for Live Media Networking from IT
Tuesday October 8, 2024 2:00pm - 3:00pm EDT
The rapid evolution of live media networking has brought it closer to converged networking, where robust and efficient communication is paramount. While protocols such as MILAN/AVB, Dante and AES67 are staples, significant opportunities exist to enhance live media networking by adopting architectural blueprints, tools, and widely used protocols from the Information Technology (IT) sector. This workshop explores the specific requirements of live media networking, identifies potential learnings from IT workflows, and examines how other industries, particularly broadcast and video markets, have successfully integrated IT principles to propose technical recommendations.
Live media networking, encompassing audio, video, and control signals, demands high precision, low latency, and synchronization. Unlike traditional IT networks, which prioritize data integrity and security, live media networks must ensure seamless real-time transmission without compromising quality. The workshop will delve into these specificities, highlighting the challenges unique to live media, how they differ from typical IT networking scenarios, and the use of Time Sensitive Networking (TSN).
A significant challenge in this transition is the learning curve faced by sound technicians. Traditionally focused on audio-specific knowledge, these professionals now need to acquire IT networking skills to manage complex media networks effectively. This gap in expertise necessitates a new role emerging in the industry: the "Live Media Network Manager," a specialist who bridges the knowledge gap between traditional sound engineering and advanced IT networking.
A key focus area will be examining IT architectural blueprints and their applicability to live media networking. IT networks often leverage scalable, redundant, and resilient architectures to ensure uninterrupted service delivery. By adopting similar principles, live media networks can achieve greater reliability and scalability. The workshop will discuss how concepts such as network segmentation, redundancy, and failover mechanisms from IT can be tailored to meet the stringent requirements of live media.
Additionally, we will explore the tools and protocols widely used in IT that can benefit live media networking. Network monitoring and management tools, such as SNMP and Syslog, offer comprehensive insights into network performance and can aid in proactive maintenance and troubleshooting. Furthermore, protocols like QoS can be adapted to prioritize media traffic, ensuring that critical audio and video streams are delivered with minimal delay and jitter.
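As a rough illustration of the kind of IT-style QoS adaptation described above, the sketch below marks a UDP socket's traffic with the DSCP Expedited Forwarding class commonly recommended for real-time media such as AES67 streams. The values and approach are illustrative assumptions, not a deployment recipe:

```python
import socket

# Hypothetical sketch: marking an outgoing UDP media stream with a DSCP class
# so QoS-aware switches and routers can prioritize it over best-effort traffic.
# DSCP 46 (Expedited Forwarding) is commonly used for real-time media; treat
# the specific value as an assumption to be checked against your network policy.
DSCP_EF = 46
TOS = DSCP_EF << 2  # DSCP occupies the upper 6 bits of the legacy IPv4 ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)
# Packets sent from this socket now carry the EF marking, which the network
# can map to a priority queue ahead of ordinary data traffic.
```

The marking only has effect if the switches and routers along the path are configured to honor it, which is exactly the kind of network-wide policy work the workshop attributes to the emerging "Live Media Network Manager" role.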
The workshop will also draw parallels from the broadcast and video markets, which have already embraced IT-based solutions to enhance their networking capabilities. These industries have developed technical recommendations and standards, such as SMPTE ST 2110 for professional media over managed IP networks, which can serve as valuable references for the live media domain. By examining these examples, participants will gain a broader perspective on how cross-industry learnings can drive innovation in live media networking.
This workshop will provide a comprehensive overview of the specific needs of live media networking and present actionable insights from IT workflows and other industries. Participants will leave with a deeper understanding of how to leverage IT principles to enhance the efficiency, reliability, and scalability of live media networks, paving the way for a more integrated and future-proof approach.
Speakers

Nicolas Sturmel

Directout GmbH
Tuesday October 8, 2024 2:00pm - 3:00pm EDT
1E16

3:15pm EDT

The Devil in the Details
Tuesday October 8, 2024 3:15pm - 4:15pm EDT
What happens when you realize in the middle of a mass digitization project that most of your video assets have multi-track production audio instead of finished mixed audio, and your vendor doesn't offer a service to address the issue? Digitizing the Carlton Pearson Collection for the Harvard Divinity School produced just such a conundrum. This workshop will walk through a case study of the project: identifying problems in vendor work, and the QC and production workflows that had to be put in place to correct the issues that surfaced as the project progressed. It includes a look at the technology stack that was developed internally in response, including a full GUI video editor built for QC of audio and video and for mass top/tail editing of assets with individual edit decision points. From problem identification to audio mixing, video trimming, closed captioning using AI solutions, and deposit to preservation repositories, the project team had only 3 months to complete the work on just shy of 4,000 assets.
Speakers

Kaylie Ackerman

Head of Media Preservation, Harvard Library
Tuesday October 8, 2024 3:15pm - 4:15pm EDT
1E16

4:30pm EDT

Psychoacoustics for Immersive Productions
Tuesday October 8, 2024 4:30pm - 5:30pm EDT
3D audio has enormous potential to touch the audience emotionally: the effect is most potent when the auditory system is given the illusion of being in a natural environment. When this is the case with impressive music, everyone gets goosebumps. Psychoacoustics forms the basis for remarkable results in music productions.
In the first part, Konrad Strauss explains the basics of psychoacoustics in the context of music production:
• How immersive differs from stereo
• Sound localization and perception
• The eye/ear/brain link
• Implications for recording and mixing in immersive
• Transitioning from stereo to immersive: center speaker, LFE, working with the diffuse surround field and height channels
In the second part, Lasse Nipkow introduces the quasi-binaural spot miking technique he uses to capture the beautiful sound of acoustic instruments during his recordings. He explains the strategy for microphone placement and shows, using stereo and 3D audio sound examples, the potential of these signals for immersive productions.
This first contribution is linked to a second, subsequent presentation by Lasse Nipkow and Ronald Prent: ‘Tools for Impressive Immersive Productions’.
Speakers
Lasse Nipkow

CEO, Silent Work LLC
Since 2010, Lasse Nipkow has been a renowned keynote speaker in the field of 3D audio music production. His expertise spans from seminars to conferences, both online and offline, and has gained significant popularity. As one of the leading experts in Europe, he provides comprehensive...
Konrad Strauss

Professor, Indiana University Jacobs School of Music
Konrad Strauss is a Professor of Music in the Department of Audio Engineering and Sound Production at Indiana University’s Jacobs School of Music. He served as department chair and director of Recording Arts from 2001 to 2022. Prior to joining the faculty of IU, he worked as an...
Tuesday October 8, 2024 4:30pm - 5:30pm EDT
1E16
 
Wednesday, October 9
 

9:00am EDT

Applications of Artificial Intelligence and Machine Learning in Audio Quality Models, Part II
Wednesday October 9, 2024 9:00am - 10:30am EDT
The aim of this workshop is to provide participants with hands-on expertise in utilizing machine learning for audio quality modeling.

This workshop is a follow-up to the initial edition presented at the 156th AES Convention. The workshop expands on its first edition by including the participation of new panelists working on cutting-edge technology with multimedia industry leaders. In particular, very recent advancements in deep learning for auditory perception will be presented, as well as a new open audio quality dataset for informing auditory models in audio quality metrics.

Machine learning techniques have been instrumental in understanding audio quality perception, revealing hidden relationships in subject response data to enhance device and algorithm development. Moreover, accurate quality models can be used in AI systems to predict audio quality -- a critical part of customer experience -- in situations where using human subjects is costly (e.g., in day-to-day product development) or impractical, such as in audio transmission network monitoring or in informing deep learning audio algorithms.

The complex nature of quality perception requires users and developers of these models to possess specific domain knowledge that extends beyond the general machine learning set of skills. This expertise includes experimental design in the domain of subjective audio quality assessment, data collection, data augmentation and filtering, in addition to model design and cross-validation.

The workshop aims to shed light on historical approaches to addressing these challenges by marrying aspects of machine learning and auditory perception knowledge. Furthermore, it will provide insights into state-of-the-art techniques and current related research topics in data-driven quality perception modelling.

The main goal of the workshop is to offer its attendees, whether users or developers, the necessary tools to assess the suitability of existing ML-based quality models for their use case.
Speakers
Pablo Delgado

Fraunhofer IIS
Pablo Delgado is part of the scientific staff of the Advanced Audio Research Group at the Fraunhofer Institute for Integrated Circuits (IIS) in Erlangen, Germany. He specializes in psychoacoustics applied to audio and speech coding, as well as machine learning applications in audio...
Jan Skoglund

Google
Jan Skoglund leads a team at Google in San Francisco, CA, developing speech and audio signal processing components for capture, real-time communication, storage, and rendering. These components have been deployed in Google software products such as Meet and hardware products such...
Phill Williams

Audio Algorithms, Netflix
Phill is a member of Netflix's Audio Algorithms team, working on all aspects of the audio delivery toolchain to get the best possible sound to everybody who watches Netflix, all over the world, all of the time. Prior to working at Netflix, Phill worked at Dolby Laboratories, as a contributor...
Sascha Dick

Sascha Dick received his Dipl.-Ing. degree in Information and Communication Technologies from the Friedrich Alexander University (FAU) of Erlangen-Nuremberg, Germany in 2011 with a thesis on an improved psychoacoustic model for spatial audio coding, and joined the Fraunhofer Institute...
Wednesday October 9, 2024 9:00am - 10:30am EDT
1E16

10:45am EDT

BBC Pop Hub - Moving Europe's largest radio station
Wednesday October 9, 2024 10:45am - 11:30am EDT
BBC Radio 2 is Europe’s largest radio station. In this session Jamie Laundon, Solution Lead at BBC Technology & Media Operations, will guide us through the move into the new, state-of-the-art "Pop Hub" studios on the 8th floor of BBC Broadcasting House in London.

The project was delivered to extremely tight timescales and with no impact on our listeners. These studios have been designed as dynamic, collaborative spaces, with the aim to inspire our teams to produce their best work, and to reflect an energy and buzz that will be transmitted to our audiences.

The studios have been designed to be simple to use, yet they also feature complex workflows that permit studios to be used in pairs and give additional assistance to presenters and musicians as required.

In this session you’ll learn about managing design decisions when working on an existing site, and the use of real-time media networks, AES67 and Dante to join infrastructure together. Alongside audio equipment selection, we will cover furniture design and some interesting challenges around accessibility, sight lines and camera angles. We will also cover the tools used to visualise radio and to create well-lit branded spaces that suit any of the BBC’s pop music networks, and the collaboration and skill sharing between our audio, technology and operations teams that helped deliver against very tight timelines.
Speakers
Wednesday October 9, 2024 10:45am - 11:30am EDT
1E16

11:45am EDT

Fix The Mix: Building a Thriving Career Behind the Board
Wednesday October 9, 2024 11:45am - 12:45pm EDT
Presented by We Are Moving The Needle, this panel brings together industry leaders from A&R, management, and artist relations to discuss building an impactful career behind the board. These experienced leaders will share the inside scoop on strategies to build your profile as a producer and engineer from navigating rates to defining your brand to attracting a manager. Whether you're an aspiring engineer or a seasoned pro, this conversation offers valuable insights, practical tips, and a chance to connect with a vibrant community of music industry leaders.
Speakers

Lysee Webb

Founder, Van Pelt Management
Lee Dannay

VP of A&R, Thirty Tigers
Lee Dannay's 20-plus-year career includes VP/A&R positions at Epic and Columbia Records, VP A&R in music publishing at Warner/Chappell Music, and Senior Music Producer at America’s Got Talent. She has signed and developed artists ranging from John Mayer, Shawn Mullins, Five For...
Samantha Sichel

Head of Social Product & Digital Innovation, Live Nation
Samantha Sichel leads Digital Marketing Solutions for Live Nation Entertainment. This includes working with the Live Nation Sales and Account Management teams to conceptualize and budget...
Wednesday October 9, 2024 11:45am - 12:45pm EDT
1E16

2:00pm EDT

You're a Gear Geek - Just Like Us! Staging Gear in Trade Shows, Print Media and Online
Wednesday October 9, 2024 2:00pm - 3:00pm EDT
Drawing on research to be published in our forthcoming book, Gear: Cultures of Audio and Music Technologies (The MIT Press), this workshop focuses on gear in trade shows, print media, and online fora. Samantha Bennett and Eliot Bates first introduce 'gear' (hardware professional audio recording technologies) and 'gear cultures', those being 'milieux where sociability is centred around audio technology objects’ (Bates and Bennett, 2022). With special focus on Part III of our book, this workshop examines the myriad ways we see gear 'staged': in events including AES and NAMM, in diverse print media, including the Sweetwater catalogue and Sound on Sound, and online in fora including Gearspace.

So many of the fetishistic and technostalgic qualities associated with gear today exceed the materials and design; and it is precisely these fetish ideologies that become central to gear sociability. So how does gear become social, and where do gear cultures gather? Since the mid-1990s, we have seen technological objects transformed into gear when staged within these trade show / media / online milieux. In these spaces, gear is gassed to the point where it attains fetish status. When staged, gear is sometimes framed in sex and war metaphor (Lakoff and Johnson, 1980), and heavily draws on heritage, canon, and iconicity to amplify and maintain gear discourses. In all these trade show / print media / online fora, we see the erasure of women’s labour in order to maintain hegemonic masculinities that gear cultures rely upon. In this workshop, Samantha and Eliot use a range of examples - from online gear lovers gassing over gear components, to Foxy Stardust selling us a DAW - to show how gear is called upon to do extensive work in structuring social relations, and how these gear-centric social formations produce gear cultures.
Speakers

Samantha Bennett

The Australian National University
Wednesday October 9, 2024 2:00pm - 3:00pm EDT
1E16

3:15pm EDT

Implementing WHO safe listening standards: Insights from the Serendipity Arts Festival
Wednesday October 9, 2024 3:15pm - 4:15pm EDT
This tutorial presents a detailed examination of the first known large-scale implementation of the WHO Global Standard for Safe Listening Venues and Events at the 2023 Serendipity Arts Festival, India's largest multidisciplinary arts event. The case study highlights the methods used to monitor and manage sound levels, the design and deployment of sound systems, and the provision of personal hearing protection, training and information. The session will delve into the practical challenges encountered, the strategies employed to adhere to the WHO Global Standard, and the outcomes of these efforts. Attendees will gain an understanding of the complexities involved in applying safe listening principles in a live event context and the implications for future large-scale events.
Speakers
Adam Hill

Associate Professor of Electroacoustics, University of Derby
Adam Hill is an Associate Professor of Electroacoustics at the University of Derby where he leads the Electro-Acoustics Research Lab (EARLab) and runs the MSc Audio Engineering program. He received a Ph.D. from the University of Essex, an M.Sc. in Acoustics and Music Technology from...
Wednesday October 9, 2024 3:15pm - 4:15pm EDT
1E16

4:30pm EDT

Ask Me Anything! (About Starting Your Career)
Wednesday October 9, 2024 4:30pm - 5:30pm EDT
Aimed at students, novices, and anyone thinking of transitioning into working in the audio industry as either a professional or hobbyist/amateur. What best prepared our panel of professionals for entry into their career? We will discuss skills, knowledge, work habits, obstacles, working conditions, goals and realities, and whatever else YOU want us to discuss. Join us for a relaxed and informal discussion featuring professionals from different sectors of the industry (including music production, live sound, broadcasting, and research/development, and maybe more!) – and bring your questions – a good portion of the discussion will be guided by audience interests.
Speakers
Ian Corbett

Coordinator & Professor, Audio Engineering & Music Technology, Kansas City Kansas Community College
Dr. Ian Corbett is the Coordinator and Professor of Audio Engineering and Music Technology at Kansas City Kansas Community College. He also owns and operates off-beat-open-hats LLC, providing live sound, recording, and audio production services to clients in the Kansas City area...
Wednesday October 9, 2024 4:30pm - 5:30pm EDT
1E16
 
Thursday, October 10
 

9:00am EDT

Towards AI-augmented Live Music Performances
Thursday October 10, 2024 9:00am - 10:00am EDT
There is a renewed need to study the development of AI systems that the artistic community can embrace as empowering tools rather than replacements for human creativity. Previous research suggests that novel AI technologies can greatly help artists expand the range of their musical expressivity and display new kinds of virtuosity during live music performances [1,2]. However, the advent of powerful AI systems, in particular generative models, has also attracted criticism and sparked concern among artists who fear for their artistic integrity and financial viability [3,4]. This is further exacerbated by a widening gap in technological innovation between private companies and research-focused academic institutions.

In this context, we need to pay specific attention to topics such as artistic consent, data collection [5], and audience approval. Furthermore, we deeply believe in the importance of integrating lighting and visual arts to effectively convey the role and impact of AI-generated content in human-AI performances to broader audiences.

This workshop will bring together researchers and professionals from music, lighting, visual arts, and artificial intelligence to explore the latest advancements in AI technologies and their transformative potential for live music performances. In particular, discussions will touch on the controllability requirements of AI-augmented instruments, the associated visualization methods, and the sociocultural impact of these technologies [6]. We will focus on the limitations of such technologies, as well as ethical considerations. As an example, we will also discuss the outcomes of the innovative human-AI co-created concert that we produced with Jordan Rudess on September 21st, 2024.

References:
[1] Blanchard, Lancelot, Naseck, Perry, Egozy, Eran, and Paradiso, Joe. “Developing Symbiotic Virtuosity: AI-augmented Musical Instruments and Their Use in Live Music Performances.” MIT Press, 2024.
[2] Martelloni, Andrea, McPherson, Andrew P, and Barthet, Mathieu. “Real-time Percussive Technique Recognition and Embedding Learning for the Acoustic Guitar.” arXiv, 2023.
[3] Morreale, Fabio. “Where does the buck stop? Ethical and political issues with AI in music creation.” Transactions of the International Society for Music Information Retrieval, 2021.
[4] Rys, Dan. “Billie Eilish, Pearl Jam, Nicki Minaj Among 200 Artists Calling for Responsible AI Music Practices.” Billboard, April 2, 2024.
[5] Morreale, Fabio, Sharma, Megha, and Wei, I-Chieh. “Data Collection in Music Generation Training Sets: A Critical Analysis.” International Society for Music Information Retrieval, 2023.
[6] Born, Georgina, Morris, Jeremy, Diaz, Fernando, and Anderson, Ashton. “Artificial intelligence, music recommendation, and the curation of culture.” White paper: University of Toronto, Schwartz Reisman Institute for Technology and Society, CIFAR AI & Society program, 2021.
Speakers

Lancelot Blanchard

Research Assistant, MIT Media Lab
Musician, Engineer, and AI Researcher. Working at the intersection of Generative AI and musical instruments for live music performances.

Perry Naseck

Research Assistant, MIT Media Lab
Artist and engineer working in interactive, kinetic, light- and time-based media. Specialization in interaction, orchestration, and animation of systems of sensors and actuators.
Jordan Rudess

Keyboardist, Dream Theater
Voted “Best Keyboardist of All Time” by Music Radar Magazine, Jordan Rudess is best known as the keyboardist/multi-instrumentalist extraordinaire for platinum-selling Grammy Award–winning rock band, Dream Theater. A classical prodigy who began his studies at the Juilliard School...

Pedro Sarmento

PhD candidate, Queen Mary University of London

Eran Egozy

Professor of the Practice, Music Technology, Massachusetts Institute of Technology
Thursday October 10, 2024 9:00am - 10:00am EDT
1E16

10:15am EDT

Putting a 1970s Recording Studio on Stage: Sound Design for Stereophonic
Thursday October 10, 2024 10:15am - 11:15am EDT
Come join the team from the Tony Award-winning stage play Stereophonic at AES NY 2024 to learn what it takes to put a 1970s recording studio on stage!
Speakers
John McKenna

sndwrks LLC
Professionally, John (he/him) practices sound design for Broadway plays and musicals in addition to software engineering. When not busy with work, John designs and builds innovative furniture, creates purpose-designed products using 3D printing, and enjoys playing with his cat, Nori. He...
Ryan Rumery
Thursday October 10, 2024 10:15am - 11:15am EDT
1E16

11:30am EDT

Expressive Control in Electronic Instruments
Thursday October 10, 2024 11:30am - 1:00pm EDT
One of the considerations in electronic instrument design has always been expressive control: how to give the player a deeper, more intuitive connection with an instrument's sound engine. MIDI Polyphonic Expression (MPE) was adopted by the MIDI Manufacturers Association in 2018 as an enhancement to the original specification. Since then, manufacturer support has grown and several new instruments incorporate this type of gestural control. This panel will examine the creative possibilities of MPE from a sound design perspective, exploring strategies for three-dimensional control of sound parameters, their practical ranges, and opportunities for reinventing existing instrument categories with continuous pitch control as well as more complex timbral effects.
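At the protocol level, the core idea MPE adds is simple: each sounding note gets its own MIDI channel, so pitch bend and pressure apply per note rather than per keyboard. A minimal hedged sketch in plain Python (channel assignments and values are illustrative; see the MIDI Association's MPE specification for proper zone configuration):

```python
# Simplified sketch of per-note expression under MPE. In a full implementation
# the master channel carries zone-wide messages and member channels carry
# per-note data; here we just show two notes on separate member channels.

def note_on(channel, note, velocity):
    """Status 0x9n: Note On for MIDI channel n (0-indexed)."""
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, value):
    """Status 0xEn: 14-bit pitch bend, 8192 = no bend, LSB sent first."""
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

# Two notes on member channels 1 and 2; bending channel 1 slightly upward
# detunes only the first note, leaving the second untouched.
msgs = [note_on(1, 60, 100), note_on(2, 64, 100), pitch_bend(1, 8192 + 512)]
```

Under standard (non-MPE) MIDI both notes would share one channel, so the same bend message would sweep every held note at once, which is precisely the limitation MPE removes.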
Speakers
Michael Bierylo

Chair Emeritus, Electronic Production and Design, Berklee College of Music
Jesse Terry

Head of Hardware, Ableton Inc
Jesse Terry is the Head of Hardware at Ableton, and leads the team designing and manufacturing Ableton Push. He joined Ableton in 2005 working in artist and partner relations, which led him to help design the first dedicated Ableton controller, the APC40 (made in collaboration with...
Richard Graham

Principal, Delta Sound Labs
Richard Graham, Ph.D., is a musician, technologist, educator, and entrepreneur. His academic and practical pursuits encompass computer-assisted music composition and instrumental performance. In 2017, he co-founded Delta Sound Labs, an audio technology venture that has developed and...
Pat Scandalis

CTO/CEO, moForte Inc
Pat Scandalis is the CTO/CEO of moForte Inc, the creator of GeoShred. He is also the Chairman of the MPE Committee in the MIDI Association and a Visiting Scholar at Stanford/CCRMA. He holds a BSc in Physics from Cal Poly San Luis Obispo.
1E16

2:00pm EDT

AI in Electronic Instrument Design
Thursday October 10, 2024 2:00pm - 3:00pm EDT
As applications of artificial intelligence and machine learning become prevalent across the music technology industry, this panel will examine the ways these technologies are influencing the design of new electronic instruments. The discussion will look at current instruments as well as the potential for new designs.
Speakers
Michael Bierylo

Chair Emeritus, Electronic Production and Design, Berklee College of Music
Akito van Troyer

Associate Professor, Berklee College of Music
Dan Gonzalez

Principal Product Manager, iZotope & Native Instruments
Victor Zappi

Northeastern University
1E16

3:15pm EDT

DEI Town Hall
Thursday October 10, 2024 3:15pm - 4:15pm EDT
Reports from the DEI committee and subcommittees will be given. The floor will then be opened for questions and discussion.
Speakers
Mary Mazurek

Audio Educator / Recording Engineer, University of Lethbridge
Jiayue Cecilia Wu

Assistant Professor, Graduate Program Director (MSRA), University of Colorado Denver
Originally from Beijing, Dr. Jiayue Cecilia Wu (AKA: 武小慈) is a scholar, composer, audio engineer, and multimedia technologist. Her work focuses on how technology can augment the healing power of music. She earned her Bachelor of Science degree in Design and Engineering in 2000...
1E16

4:30pm EDT

The medium is the message: Using embedded metadata to integrate systems
Thursday October 10, 2024 4:30pm - 5:30pm EDT
Metadata is essential in audio files for archiving, discovering, and accessing content. But in current multiplatform environments, metadata management can become difficult and complex. One solution is to implement a monolithic, centralised system, but this is often impractical and overly rigid in dynamic production workflows, since different clients or stakeholders may prefer specific delivery systems. As a result, media managers often have to deal with disparate distribution systems such as file-sharing services, e-mail attachments, download URLs, proprietary APIs, etc. The result of many of these distribution processes is that the audio content and its descriptive metadata often become separated, making the content potentially far less usable.

Embedding metadata directly in your audio files allows for simpler, "one prong" delivery of content that remains robust as the audio travels downstream through your workflows and into your clients' systems. Furthermore, embedding metadata can simplify system integration from a two-way process to a one-way process, where the receiving system does not need to be aware of specific requirements of the transmitting system.

In this workshop we will explore how the New York Public Radio Archives is using existing metadata fields in archival WAVE files to describe, authenticate, and augment their metadata. Using free or very low-cost tools, alongside well-established standards, we will use some of the principles behind the W3C's Resource Description Framework (RDF) to choose embedded metadata that is robust, consistent, and surprisingly flexible.
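As a rough sketch of the "one prong" idea (the file name, tag choices, and helper names below are illustrative, not NYPR's actual workflow), a standard RIFF LIST/INFO chunk can be appended to a WAVE file with nothing but the Python standard library; BWF bext fields follow the same chunk pattern:

```python
import struct
import wave

def write_test_tone(path):
    """Write a short silent mono WAVE file to embed metadata into."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(44100)
        w.writeframes(b"\x00\x00" * 441)  # 10 ms of silence

def embed_info(path, tags):
    """Append a RIFF LIST/INFO chunk (e.g. IART=artist, INAM=title)
    and patch the top-level RIFF size so the file stays well-formed."""
    body = b"INFO"
    for fourcc, text in tags.items():
        raw = text.encode("ascii") + b"\x00"  # values are null-terminated
        body += fourcc.encode("ascii") + struct.pack("<I", len(raw)) + raw
        if len(raw) % 2:
            body += b"\x00"                   # chunks are padded to even length
    with open(path, "r+b") as f:
        f.seek(0, 2)                          # append after the data chunk
        f.write(b"LIST" + struct.pack("<I", len(body)) + body)
        riff_size = f.tell() - 8
        f.seek(4)
        f.write(struct.pack("<I", riff_size)) # fix the RIFF size field
```

Because the descriptive fields ride inside the file itself, they survive any delivery channel (file-sharing services, e-mail attachments, download URLs) without a sidecar file that could become separated from the audio.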
Speakers
Marcos Sueiro Bal

Archives Manager, New York Public Radio
Marcos Sueiro Bal is the Archives Manager at New York Public Radio. He is a member of the IASA and ARSC Technical Committees. He specializes in audio reformatting and in digital metadata.
1E16
 