Computational deconstruction of sounds for music composition and performance
This presentation will focus on machine learning methods for discovering and extracting elementary but perceptually meaningful sonic elements from complex sounds. The techniques presented showcase a relatively little-explored class of computational tools: tools that analyze sound data and produce a palette of structured sound components (impulses, motifs, rhythmic patterns and so on) that the user can then freely combine, mix and manipulate. Such methods keep the composer, performer or sound designer fully engaged in the creative process, in contrast to fully generative machine learning systems. The presentation will be accompanied by several musical examples.
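To make the idea concrete, here is a minimal sketch of this family of techniques, assuming Python with NumPy and librosa: non-negative matrix factorization (NMF) of a magnitude spectrogram, which yields a small palette of spectral templates and their activations over time. NMF is only one representative decomposition method, not necessarily one presented in the talk, and the file name and component count below are illustrative assumptions.

```python
import numpy as np
import librosa

# Load an audio file at its native sample rate (the path is a placeholder).
y, sr = librosa.load("complex_sound.wav", sr=None)

# Complex STFT; keep the phase for later resynthesis.
D = librosa.stft(y)
S, phase = np.abs(D), np.angle(D)

# Factor the magnitude spectrogram into 8 components (an arbitrary choice):
# each column of `components` is a spectral template, and each row of
# `activations` records when that template is active over time.
components, activations = librosa.decompose.decompose(S, n_components=8)

# Rebuild and resynthesize a single component (here, the first) so it can
# be auditioned, mixed or manipulated on its own, reusing the original
# phase for a rough reconstruction.
S_k = np.outer(components[:, 0], activations[0])
y_k = librosa.istft(S_k * np.exp(1j * phase))
```

Each resynthesized component can then serve as raw material to be layered, transposed or recombined with others, which is the kind of user-driven manipulation the abstract describes.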
J.J. Burred is a researcher and developer specializing in music technology. He holds a PhD in Engineering from the Technical University of Berlin and has worked as a researcher at IRCAM-Centre Pompidou (Paris) and at Audionamix on topics such as source separation, automatic music analysis, sound synthesis and musical applications of machine learning. He has collaborated with artists and composers such as Marco Stroppa, Holly Herndon, Mat Dryhurst and Ralph Killhertz, and is the founder of the Paris-based music software studio Anemond. On the musical side, he is a classically trained pianist and has played with jazz and electronic groups.