Adrian Freed was Research Director of UC Berkeley's Center for New Music and Audio Technologies (CNMAT) until his retirement in 2016.
He has pioneered many new applications of mathematics, electronics, and computer science to audio, music, and media production tools, including the earliest graphical user interfaces for digital sound editing, mixing, and processing. His recent work centers on whole-body interaction, gesture signal processing, and terascale integration of data from wearable and built-environment sensor/actuator systems employing electrotextiles and other emerging materials.
Adrian Freed's first publication, in 1975, was a design for an electronic doorbell with an unusually compact implementation. It was not a particularly powerful musical tool, but at the time access to computational machinery of any kind was exceedingly difficult, and dedicated electronic circuits were the only route open to hobbyists. He continued (unpublished) work in his teens on sound synthesizers, exploring a wide gamut of available hardware technologies, including a polyphonic touch-keyboard organ and a pioneering "drum machine" based on a switched-resistor VCA/VCF, a shift-register noise generator, and static-RAM sequence storage.
While at the University of New South Wales he developed control software for the Groupatron, a digital additive synthesizer built into the chassis of a Fairlight synthesizer.
After designing a complete digital synthesizer with Vito Asta at Axis Digital in France, he was invited to IRCAM by David Wessel in 1982, where he served as director of computing systems and was invited by Pierre Boulez to be secretary of the Scientific Committee.
He was early to recognize the importance of temporal constraints in music systems, a theme throughout his work at CNMAT.
He is the author of MacMix, a pioneering interactive sound editing, processing, and mixing system commercialized by Studer-Editec. He developed hard-disk audio recording technology and audio post-production user interfaces at WaveFrame.
He designed the Reson8, a multiprocessor digital signal processing system for music and audio applications.
He is the architect of CNMAT's Additive Synthesis Tools (CAST) and is responsible for its UNIX implementation. He developed the real-time scheduler used in CAST as well as novel signal processing algorithms for efficient sinusoidal signal synthesis, for which he holds a patent.
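The core idea behind additive synthesis systems such as CAST is to build a sound as a sum of sinusoidal partials, each with its own frequency and amplitude. The following is a minimal, illustrative sketch of that principle in Python with NumPy; it is not CAST's actual (patented, highly optimized) algorithm, and the function name and envelope handling are illustrative assumptions.

```python
import numpy as np

def additive_synth(partials, duration, sr=44100):
    """Illustrative additive synthesis: sum a bank of sinusoidal partials.

    partials: list of (frequency_hz, amplitude) pairs.
    duration: length of the output in seconds.
    sr:       sample rate in Hz.
    """
    n = int(duration * sr)
    t = np.arange(n) / sr          # sample times in seconds
    out = np.zeros(n)
    for freq, amp in partials:
        # each partial is an independent sinusoid; summing them
        # produces the composite timbre
        out += amp * np.sin(2 * np.pi * freq * t)
    # short linear fade-in/out to avoid clicks at the boundaries
    fade = min(n // 20, 512)
    env = np.ones(n)
    env[:fade] = np.linspace(0.0, 1.0, fade)
    env[-fade:] = np.linspace(1.0, 0.0, fade)
    return out * env

# Example: a harmonic tone at 220 Hz with amplitudes rolling off as 1/k
tone = additive_synth([(220.0 * k, 1.0 / k) for k in range(1, 9)],
                      duration=1.0)
```

Real-time systems like CAST differ from this batch sketch mainly in using time-varying amplitude/frequency envelopes per partial and heavily optimized oscillator banks so that hundreds of partials can be synthesized within a strict deadline.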
He has made numerous contributions to the Max programming language, including the design of the lcd and multislider objects and the resonances~ and sinusoids~ signal processing modules for Max/MSP and OSW.
He has written and lectured on efficient use of the C and C++ programming languages for signal processing applications and was a featured speaker on the subject of integrating sound and computer graphics at SIGGRAPH 1996.
During his sabbatical leave in 2000 he developed new applications of computational linguistics and data mining to Internet applications, including an early content management system with semantic tagging and inferencing.
As leader of CNMAT's Guitar and Chordophone Innovation Group (GIG), he has focused on hardware and signal processing software that take advantage of processing each string separately. His lifelong efforts to improve the guitar are documented in SF Weekly. He has worked on harps and augmented cellos and developed a stringless cello. In 2009 he was invited to speak on the future of the guitar at a Paris conference on the Identity of the Electric Guitar.
He is recognized for his inspiring, accessible teaching, which in recent years has focused on integrating sensors into new controllers and art installations. Recent workshops have centered on sharing new techniques that employ electrotextiles and other emerging materials.