AES Show 2024 NY has ended
Thursday, October 10
 

9:00am EDT

Towards AI-augmented Live Music Performances
Thursday October 10, 2024 9:00am - 10:00am EDT
There is a renewed need to study the development of AI systems that the artistic community can embrace as empowering tools rather than replacements for human creativity. Previous research suggests that novel AI technologies can greatly help artists expand the range of their musical expressivity and display new kinds of virtuosity during live music performances[1,2]. However, the advent of powerful AI systems, in particular generative models, has also attracted criticism and sparked concern among artists who fear for their artistic integrity and financial viability[3,4]. This is further exacerbated by a widening gap in technological innovation between private companies and research-focused academic institutions.

In this context, we need to pay specific attention to topics such as artistic consent, data collection[5], as well as audience approval. Furthermore, we deeply believe in the importance of integrating lighting and visual arts to effectively convey the role and impact of AI-generated content in human-AI performances to broader audiences.

This workshop will bring together researchers and professionals from music, lighting, visual arts, and artificial intelligence to explore the latest advancements in AI technologies and their transformative potential for live music performances. In particular, discussions will touch on the controllability requirements of AI-augmented instruments, the associated visualization methods, and the sociocultural impact of these technologies[6]. We will focus on the limitations of such technologies, as well as ethical considerations. As an example, we will also discuss the outcomes of the innovative Human-AI co-created concert that we produced with Jordan Rudess on September 21, 2024.

References:
[1] Blanchard, Lancelot, Naseck, Perry, Egozy, Eran, and Paradiso, Joe. “Developing Symbiotic Virtuosity: AI-augmented Musical Instruments and Their Use in Live Music Performances.” MIT Press, 2024.
[2] Martelloni, Andrea, McPherson, Andrew P, and Barthet, Mathieu. “Real-time Percussive Technique Recognition and Embedding Learning for the Acoustic Guitar.” arXiv, 2023.
[3] Morreale, Fabio. “Where does the buck stop? Ethical and political issues with AI in music creation.” Transactions of the International Society for Music Information Retrieval, 2021.
[4] Rys, Dan. “Billie Eilish, Pearl Jam, Nicki Minaj Among 200 Artists Calling for Responsible AI Music Practices.” Billboard, April 2, 2024.
[5] Morreale, Fabio, Sharma, Megha, and Wei, I-Chieh. “Data Collection in Music Generation Training Sets: A Critical Analysis.” International Society for Music Information Retrieval, 2023.
[6] Born, Georgina, Morris, Jeremy, Diaz, Fernando, and Anderson, Ashton. “Artificial intelligence, music recommendation, and the curation of culture.” White paper, University of Toronto, Schwartz Reisman Institute for Technology and Society, CIFAR AI & Society program, 2021.
Speakers

Lancelot Blanchard

Research Assistant, MIT Media Lab
Musician, Engineer, and AI Researcher. Working at the intersection of Generative AI and musical instruments for live music performances.

Perry Naseck

Research Assistant, MIT Media Lab
Artist and engineer working in interactive, kinetic, light- and time-based media. Specialization in interaction, orchestration, and animation of systems of sensors and actuators.

Jordan Rudess

Keyboardist, Dream Theater
Voted “Best Keyboardist of All Time” by Music Radar Magazine, Jordan Rudess is best known as the keyboardist/multi-instrumentalist extraordinaire for platinum-selling Grammy Award–winning rock band, Dream Theater. A classical prodigy who began his studies at the Juilliard School...

Pedro Sarmento

PhD candidate, Queen Mary University of London

Eran Egozy

Professor of the Practice, Music Technology, Massachusetts Institute of Technology
Room 1E16

10:15am EDT

Putting a 1970s Recording Studio on Stage: Sound Design for Stereophonic
Thursday October 10, 2024 10:15am - 11:15am EDT
Come join the team from the Tony Award-winning stage play Stereophonic at AES NY 2024 to learn what it takes to put a 1970s recording studio on stage!
Speakers

John McKenna

sndwrks LLC
Professionally, John (he/him) practices sound design for Broadway plays and musicals in addition to software engineering. When not busy with work, John designs and builds innovative furniture, creates purpose-designed products using 3D printing, and enjoys playing with his cat, Nori. He...

Ryan Rumery

Room 1E16

11:30am EDT

Expressive Control in Electronic Instruments
Thursday October 10, 2024 11:30am - 1:00pm EDT
One of the enduring considerations in electronic instrument design is expressive control: how to give the player a deeper, more intuitive connection with an instrument's sound engine. MIDI Polyphonic Expression (MPE) was adopted by the MIDI Manufacturers Association in 2018 as an enhancement to the original specification. Since then, manufacturer support has grown, and several new instruments incorporate this type of gestural control. This panel will examine the creative possibilities of MPE from a sound design perspective, exploring strategies for three-dimensional control of sound parameters, their practical ranges, and opportunities for reinventing existing instrument categories with continuous pitch control as well as more complex timbral effects.
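For readers unfamiliar with the mechanics behind the panel topic: MPE's core idea is that each sounding note gets its own MIDI channel, so channel-wide messages like pitch bend and pressure become per-note controls. The following stdlib-only sketch builds the raw message bytes by hand; the helper names are our own for illustration, not part of any instrument or library discussed in the session.

```python
# Sketch of MPE-style per-note messages as raw MIDI bytes.
# In MPE's lower zone, channel 1 (index 0) is the master channel and
# channels 2-16 (indices 1-15) are member channels, one active note each.

def note_on(channel, note, velocity):
    """Note On: status 0x90 | channel, then note number and velocity."""
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, value):
    """Pitch bend: 14-bit value 0..16383, centre at 8192, sent LSB first."""
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

def channel_pressure(channel, pressure):
    """Channel (mono) aftertouch, which MPE repurposes as per-note pressure."""
    return bytes([0xD0 | channel, pressure])

# Two simultaneous notes, each on its own member channel, so each can be
# bent and pressed independently -- the per-note expressivity MPE enables.
msgs = [
    note_on(1, 60, 100),          # middle C on member channel 2
    note_on(2, 64, 100),          # E on member channel 3
    pitch_bend(1, 8192 + 1024),   # bend only the C upward
    channel_pressure(2, 90),      # press harder only on the E
]
```

On a conventional single-channel MIDI stream, that pitch bend would have affected both notes at once; putting each note on its own member channel is what makes the bend note-specific.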
Speakers

Michael Bierylo

Chair Emeritus, Electronic Production and Design, Berklee College of Music

Jesse Terry

Head of Hardware, Ableton Inc
Jesse Terry is the Head of Hardware at Ableton, and leads the team designing and manufacturing Ableton Push. He joined Ableton in 2005 working in artist and partner relations, which led him to help design the first dedicated Ableton controller, the APC40 (made in collaboration with...

Richard Graham

Principal, Delta Sound Labs
Richard Graham, Ph.D., is a musician, technologist, educator, and entrepreneur. His academic and practical pursuits encompass computer-assisted music composition and instrumental performance. In 2017, he co-founded Delta Sound Labs, an audio technology venture that has developed and...

Pat Scandalis

CTO/CEO, moForte Inc
Pat Scandalis is the CTO/CEO of moForte Inc, the creator of GeoShred. He is also the Chairman of the MPE Committee in the MIDI Association and a Visiting Scholar at Stanford/CCRMA. He holds a BSc in Physics from Cal Poly San Luis Obispo.
Room 1E16

2:00pm EDT

AI in Electronic Instrument Design
Thursday October 10, 2024 2:00pm - 3:00pm EDT
As applications of artificial intelligence and machine learning become prevalent across the music technology industry, this panel will examine the ways these technologies are influencing the design of new electronic instruments. As part of the discussion, we'll look at current instruments as well as the potential for new designs.
Speakers

Michael Bierylo

Chair Emeritus, Electronic Production and Design, Berklee College of Music

Akito van Troyer

Associate Professor, Berklee College of Music

Dan Gonzalez

Principal Product Manager, iZotope & Native Instruments

Victor Zappi

Northeastern University
Room 1E16

3:15pm EDT

DEI Town Hall
Thursday October 10, 2024 3:15pm - 4:15pm EDT
Reports from the DEI committee and subcommittees will be given. The floor will then be opened for questions and discussion.
Speakers

Mary Mazurek

Audio Educator/ Recording Engineer, University of Lethbridge

Jiayue Cecilia Wu

Assistant Professor, Graduate Program Director (MSRA), University of Colorado Denver
Originally from Beijing, Dr. Jiayue Cecilia Wu (AKA: 武小慈) is a scholar, composer, audio engineer, and multimedia technologist. Her work focuses on how technology can augment the healing power of music. She earned her Bachelor of Science degree in Design and Engineering in 2000...
Room 1E16

4:30pm EDT

The medium is the message: Using embedded metadata to integrate systems
Thursday October 10, 2024 4:30pm - 5:30pm EDT
Metadata is essential in audio files for archiving, discovering, and accessing content. But in current multiplatform environments, metadata management can become difficult and complex. One solution is to implement a monolithic, centralised system, but this is often impractical and overly rigid in dynamic production workflows, since different clients or stakeholders may prefer specific delivery systems. As a result, media managers often have to deal with disparate distribution systems such as file-sharing services, e-mail attachments, download URLs, proprietary APIs, etc. The result of many of these distribution processes is that the audio content and its descriptive metadata often become separated, making the content potentially far less usable.

Embedding metadata in your audio files allows for easier, "one prong" delivery of content, in which you can obtain more robustness as the audio travels downstream along your workflows and into your client's systems. Furthermore, embedding metadata can also simplify system integration from a two-way process to a one-way process, where the receiving system does not need to be aware of specific requirements from the transmitting system.

In this workshop we will explore how the New York Public Radio Archives is using existing metadata fields in archival WAVE files to describe, authenticate, and augment their metadata. Using free or very low-cost tools, alongside well established standards, we will use some of the principles behind W3C's Resource Description Framework (RDF) to choose embedded metadata that is robust, consistent, and surprisingly flexible.
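To make the "embedding metadata in your audio files" idea concrete, here is a minimal stdlib-only sketch that appends a standard LIST/INFO chunk (tags such as INAM for title and IART for artist) to a WAVE file and patches the RIFF size so the file stays well-formed. This is our own illustration of the general mechanism, not the New York Public Radio Archives' actual tooling or field choices, and the title/artist values are made up.

```python
import struct
import wave

def write_test_wav(path):
    # A short silent mono WAV, just so we have a file to annotate.
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(44100)
        w.writeframes(b"\x00\x00" * 4410)

def embed_info(path, fields):
    """Append a LIST/INFO chunk and patch the RIFF size in the header."""
    sub = b""
    for tag, text in fields.items():
        data = text.encode("ascii") + b"\x00"  # INFO values are NUL-terminated
        size = len(data)
        if size % 2:
            data += b"\x00"                    # pad byte, not counted in size
        sub += tag.encode("ascii") + struct.pack("<I", size) + data
    chunk = b"LIST" + struct.pack("<I", 4 + len(sub)) + b"INFO" + sub
    with open(path, "r+b") as f:
        f.seek(0, 2)                           # append after existing chunks
        f.write(chunk)
        riff_size = f.tell() - 8               # RIFF size excludes 8-byte header
        f.seek(4)
        f.write(struct.pack("<I", riff_size))

write_test_wav("annotated.wav")
embed_info("annotated.wav", {"INAM": "Sample Program Title",
                             "IART": "New York Public Radio"})
```

Because the metadata now lives inside the same file as the audio, it survives any delivery route (file share, e-mail, download URL) without a sidecar record, which is the robustness argument the workshop abstract makes.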
Speakers

Marcos Sueiro Bal

Archives Manager, New York Public Radio
Marcos Sueiro Bal is the Archives Manager at New York Public Radio. He is a member of the IASA and ARSC Technical Committees. He specializes in audio reformatting and in digital metadata.
Room 1E16
 