Since the advent of electronic music, composers have sought ways to bring the sounds inside their heads out into the world and onto tape. As simple oscillators evolved into modular synthesizers and physical models, we came closer to realizing that goal. Now, those hand-crafted signal chains are themselves being superseded by deep neural networks that learn to model and manipulate natural sounds by processing countless hours of recordings.
This talk traces the history of this pursuit, connecting the dots between vastly different approaches to it: from early efforts like Pauline Oliveros' Deep Listening exercises and musique concrète to the most recent research showing how to imitate the voice and likeness of any person. As audio technology approaches the ability to reproduce and modify arbitrary sound, the composer's toolkit – and perhaps even what we consider composition to be – will change drastically. As part of this talk, Gene Kogan will present some of his personal projects and practical resources in the field of machine learning, along with speculations about the possibilities ahead.