Matthew Wright's Publications

Updated 1/7/2008


This is an annotated list of my publications, organized by topic in roughly
reverse chronological order. For papers available online, the paper's title
links to the online version. (If both HTML and PDF versions are available, the
title links to the HTML version and a separate link to the PDF follows the
reference.)


Topics

Rhythm

In my experience, almost all music made with computers falls into one of two
categories:

  • "No rhythm," music that has either no rhythmic events at all
    (e.g., slowly-transforming granular synthesis clouds) or event onset times
    that are so irregular that there is no sensation of meter or pulse
  • "Too much rhythm," music whose rhythm is completely regular and
    metronomically perfect, e.g., most electronic dance music.

I'm interested in the middle ground: music that uses rhythm while allowing
for expressive deviations from strict isochrony.

Wright, Matthew. 2008.
“The Shape of an Instant: Measuring and Modeling Perceptual Attack Time with Probability Density Functions.” PhD Dissertation,
Stanford, CA: Center for Computer Research in Music and Acoustics (CCRMA).

Perceptual Attack Time, a musical sound's perceived moment of rhythmic placement,
is not always the same as the moment of the sound's onset. My dissertation research involved a downloadable
listening experiment to measure the PAT of a variety of sounds, and the novel theoretical approach of representing
a sound's PAT as a probability distribution over time rather than as a single instant.
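The core representational idea is easy to sketch. The toy example below is illustrative only: the means and spreads are invented, and the dissertation's actual models, sounds, and methodology differ. It treats each sound's PAT as a probability density over time offsets from the acoustic onset and computes the probability that one sound is perceived as attacking before another.

```python
# Illustrative sketch only: model a sound's perceptual attack time (PAT) as a
# probability density over time offsets (ms) after the acoustic onset.
# The means/spreads below are made-up numbers, not measured values.
import numpy as np
from scipy.stats import norm

t = np.linspace(-20, 80, 2001)             # time offsets in ms
click = norm(loc=2.0, scale=1.5)           # sharp attack: narrow PAT density
bowed = norm(loc=35.0, scale=12.0)         # slow attack: broad, later PAT density

# Probability that the click's PAT falls before the bowed note's PAT,
# (hypothetically) treating the two densities as independent:
p_click_first = np.trapz(click.pdf(t) * (1.0 - bowed.cdf(t)), t)
print(f"P(click heard before bowed onset) ~= {p_click_first:.3f}")
```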

Wright, Matthew. 2007.
“Survey of Models of Musical Rhythm.”
(invited talk, abstract only, full paper in print)
2nd Symposium on Music, Rhythm and the Brain,
Stanford, CA: Center for Computer Research in Music and Acoustics (CCRMA).

Wright, Matthew and Edgar Berdahl. 2006. “Towards Machine Learning of Expressive Microtiming in Brazilian Drumming.” Proceedings of the 2006 International Computer Music Conference, New Orleans, LA, pp. 572-575.

We applied a variety of supervised machine learning algorithms to the problem of predicting a note's displacement from exact quantized metronomic time based on metric position, timbre, and local rhythmic context. The resulting trained models can then apply microtiming derived from the training data to never-before-seen rhythms. You can download our code and sound examples.
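As a rough illustration of this kind of setup (not our actual code, which is linked above), the sketch below uses invented feature names and synthetic data to show a regressor being trained to map per-note features to timing deviations and then applied to new notes.

```python
# Sketch of the supervised-learning setup described above, with invented feature
# names and synthetic data; the paper's own features, data, and algorithms differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-note features: position within the bar (0-15 sixteenths),
# an integer timbre class, and the quantized inter-onset interval before the note.
X = np.column_stack([
    rng.integers(0, 16, n),        # metric position
    rng.integers(0, 3, n),         # timbre class (e.g., 3 drum sounds)
    rng.choice([1, 2, 4], n),      # preceding inter-onset interval (sixteenths)
])
# Target: the note's deviation from exact metronomic time, in milliseconds.
y = rng.normal(0, 10, n)           # placeholder; real targets come from recordings

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Predict microtiming for the notes of a never-before-seen rhythm:
new_notes = np.array([[0, 1, 4], [3, 0, 1], [8, 2, 2]])
print(model.predict(new_notes))    # predicted deviations in ms
```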

Wright, Matthew. 2006. “Shifty Looping: meter-aware, non-repeating rhythmic loops.” Proceedings of the 2006 International Computer Music Conference, New Orleans, LA, p. 44.

A simple new synthesis technique that has all the advantages of traditional looping (producing arbitrary durations of metric material drawn from any source recording while retaining the original expressive timing, timbre, and other features) but avoids monotony by "shifting" among multiple overlapping loops in real time, without changing the current playback position. You can download demo movies and additional information.
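A minimal sketch of the core idea as I describe it above (illustrative only, not the actual implementation): several loops of identical metric length stay aligned to one shared playback position, and "shifting" changes which loop is read without ever resetting that position.

```python
# Illustrative sketch of "shifty looping": switching loops changes *which*
# buffer is read, never the shared playback position, so the meter is preserved.
import numpy as np

class ShiftyLooper:
    def __init__(self, loops):                  # loops: equal-length sample arrays
        assert len({len(x) for x in loops}) == 1, "loops must share one length"
        self.loops = [np.asarray(x, dtype=float) for x in loops]
        self.pos = 0                            # shared playback position (samples)
        self.current = 0                        # index of the loop being played

    def shift(self, new_index):
        self.current = new_index                # position untouched: stays in meter

    def render(self, n):
        loop = self.loops[self.current]
        idx = (self.pos + np.arange(n)) % len(loop)
        self.pos = (self.pos + n) % len(loop)
        return loop[idx]

# Usage: two one-bar loops; shift between them mid-bar without losing the beat.
bar = 44100 * 2
looper = ShiftyLooper([np.sin(np.linspace(0, 200, bar)), 0.1 * np.random.randn(bar)])
out = [looper.render(1024)]
looper.shift(1)                                 # switch loops; position is preserved
out.append(looper.render(1024))
```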

Wright, M. and D. Wessel. 1998. “An Improvisation Environment for Generating Rhythmic Structures Based on North Indian "Tal" Patterns.” Proceedings of the 1998 International Computer Music Conference, Ann Arbor, Michigan, pp. 125-128.

A description of an environment for constructing and modifying repeating
rhythmic sequences in real-time.

Iyer, V., J. Bilmes, D. Wessel, and M. Wright. 1997. “A Novel Representation for Rhythmic Structure.” Proceedings of the 1997 International Computer Music Conference, Thessaloniki, Hellas (Greece), pp. 97-100.

A description of a representation for rhythmic musical material. In it, music
is made up of events that are located on subdivisions of beats; each event
also has parameters that represent its rhythmic deviation from strict isochrony.
Beats are grouped into structures called cells that may be layered, concatenated,
and repeated.
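A sketch of what such a representation might look like in code, with invented names and details (the paper's actual data structures differ):

```python
# Illustrative only: events sit on beat subdivisions and carry a deviation from
# strict isochrony; beats are grouped into "cells" that can be concatenated,
# repeated, or layered (layered cells simply share the same starting beat).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    subdivision: int          # which subdivision of the beat (0 = the beat itself)
    deviation: float          # timing deviation from the exact grid, in beats
    velocity: float = 1.0

@dataclass
class Beat:
    subdivisions: int         # how many grid points this beat is divided into
    events: List[Event] = field(default_factory=list)

@dataclass
class Cell:
    beats: List[Beat]

    def concat(self, other: "Cell") -> "Cell":
        return Cell(self.beats + other.beats)

    def repeat(self, times: int) -> "Cell":
        return Cell(self.beats * times)

    def onset_times(self, start_beat: float = 0.0) -> List[float]:
        """Flatten to onset times in beats, including each event's deviation."""
        times = []
        for i, beat in enumerate(self.beats):
            for ev in beat.events:
                grid = start_beat + i + ev.subdivision / beat.subdivisions
                times.append(grid + ev.deviation)
        return times

cell = Cell([Beat(4, [Event(0, 0.0), Event(3, 0.02)]), Beat(4, [Event(2, -0.01)])])
print(cell.repeat(2).onset_times())
```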

Computational Ethnomusicology

My current research (at the University of Victoria) involves the application of
signal processing, machine learning, data visualization, and Music Information Retrieval
to the analysis of recordings of music from oral traditions not based on a score (i.e., the
vast majority of the world's music). We've written a position/review paper on the topic:

Tzanetakis, George, Ajay Kapur, W. Andrew Schloss, and Matthew Wright. 2006. “Computational Ethnomusicology.” Journal of Interdisciplinary Music Studies, 1(2), pp. 1-24.

Sharing and Organizing Software and Other Ideas

CNMAT has made a recent push to share more of the software and other materials we've developed;
this requires structures to organize information so it will be useful.

Schmeder, Andrew, Matthew Wright, Adrian Freed, Edmund Campion, and David Wessel. 2007.
“CNMAT Information Architecture.” Proceedings of the 2007 International Computer Music Conference.

Zbyszynski, Michael, Matthew Wright, and Edmund Campion. 2007.
“Design and Implementation of CNMAT's Pedagogical Software.”
Proceedings of the 2007 International Computer Music Conference.

Intimate Musical Control of Computers

What does it take for a computer-based musical instrument to allow satisfying
control intimacy? This collection of papers lists desired features of such instruments (e.g., low latency, meaningful mappings between gesture
and resulting sound, potential to develop virtuosity) as well as specific techniques
and implementations that my colleagues and I have used to accomplish some of
these goals (e.g., meaningful metaphors for musical control, accurate and precise
sensors, representation of control gestures as high bandwidth sampled signals as well as discrete events).

Wessel, David, Rimas Avizienis, Adrian Freed, and Matthew Wright. 2007.
“A Force Sensitive Multi-touch Array Supporting Multiple 2-D Musical Control Structures.”
Proceedings of the International Conference on New Interfaces for Musical Expression, 41-45.

Zbyszynski, Michael, Matthew Wright, Ali Momeni, and Daniel Cullen. 2007.
“Ten Years of Tablet Musical Interfaces at CNMAT.”
Proceedings of the International Conference on New Interfaces for Musical Expression, 100-105.

Freed, Adrian, Rimas Avizienis, and Matthew Wright. 2006. “Beyond 0-5V: Expanding Sensor Integration Architectures.” Proceedings of the International Conference on New Interfaces for Musical Expression, Paris, 97-100.

Some forward thinking about systems architectures to improve bandwidth, latency, and ease of systems integration when building musical instruments with multiple sensors, and a description of how we have implemented these with the CNMAT Connectivity Processor (and /dev/osc). This paper also describes our implementations of many specific sensor-based instruments.



Freed, Adrian, Ahm Lee, John Schott, Frances-Marie Uitti, Matthew Wright, and Michael Zbyszynski. 2006. “Comparing Musical Control Structures and Signal Processing Strategies for the Augmented Cello and Guitar.” Proceedings of the International Computer Music Conference, New Orleans, 636-642.

Descriptions of hardware and software implementations from two of CNMAT's augmented instrument projects: John Schott's guitar work circa 1999-2001, and Frances-Marie Uitti's work in 2006 adapting much of the same technology (and a lot of new sensors) to her six-string solid-body electric cello.

Wright, Matthew, Ryan J. Cassidy, and Michael F. Zbyszynski. 2004. “Audio and Gesture Latency Measurements on Linux and OSX.” Proceedings of the International Computer Music Conference, Miami, FL, pp. 423-429. (pdf)

Since low latency is so necessary for control intimacy, we systematically measured the full latency (from stimulus in to resulting sound out) with a variety of audio hardware interfaces, audio drivers, buffering and related configuration settings, and scheduling modes, discovering some troubling extra latency beyond what one would expect from audio buffer sizes.
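For readers who want to try a crude version of this kind of measurement themselves, the sketch below is not the paper's methodology or tooling; it assumes the sounddevice Python package and a physical loopback cable from audio output to input, and estimates round-trip latency by cross-correlating a played click with the recording.

```python
# Rough software sketch of the general measurement idea: play a click while
# recording through a loopback cable, then estimate total output-to-input
# latency from the position of the recorded click.
import numpy as np
import sounddevice as sd      # assumes a full-duplex device with a loopback cable

fs = 48000
stimulus = np.zeros(fs)       # one second of silence...
stimulus[0] = 1.0             # ...with a single-sample click at the start

recorded = sd.playrec(stimulus, samplerate=fs, channels=1, blocking=True)[:, 0]

# Peak of the full cross-correlation gives the delay of the recorded click:
lag = np.argmax(np.correlate(recorded, stimulus, mode="full")) - (len(stimulus) - 1)
print(f"round-trip latency ~= {lag} samples = {1000 * lag / fs:.1f} ms")
```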

Wessel, D. and M. Wright. 2002. “Problems and Prospects for Intimate Musical Control of Computers.” Computer Music Journal, 26(3): 11-22.

This is a revised and expanded version of the original NIME01 paper and is
the most complete exposition of these ideas.

Wright, M. 2002. “Problems and Prospects for intimate and satisfying sensor-based control of computer sound.” Proceedings of the 2002 Symposium on Sensing and Input for Media-Centric Systems (SIMS), Santa Barbara, CA, pp. 1-6.

This was an invited position paper "about the challenges for future
development" and focuses on what I see as the factors that determine
whether a sensor-controlled, computer-based musical instrument will be intimate,
satisfying, and expressive.

Wessel, D., M. Wright, and J. Schott. 2002. “Intimate Musical Control of Computers with a Variety of Controllers and Gesture Mapping Metaphors.” Proceedings of the 2002 International Conference on New Interfaces for Musical Expression (NIME), Dublin, Ireland, pp. 171-173. (pdf)

A brief description of the computer-based instruments that the authors played
during one of the conference's evening concerts.

Wessel, D. and M. Wright. 2001. “Problems and Prospects for Intimate Musical Control of Computers.” Proceedings of the 2001 ACM Computer-Human Interaction (CHI) Workshop on New Interfaces for Musical Expression (NIME'01), Seattle, WA.

Our original paper on the topic, including the special features of sensor/computer-based
instruments, the goal of a "low entry fee with no ceiling on virtuosity",
some specific technologies such as OpenSound Control and CNMAT's Connectivity
Processor, and some of our favorite metaphors for musical control.

Wright, M., D. Wessel, and A. Freed. 1997. “New Musical Control Structures from Standard Gestural Controllers.” Proceedings of the 1997 International Computer Music Conference, Thessaloniki, Hellas (Greece), pp. 387-390.

I've been using a Wacom digitizing tablet as the main gestural interface for
my interactive computer instruments since 1996. This paper was the first to
suggest adapting the Wacom tablet for musical uses. It also describes some of the earliest mappings and metaphors we used with the tablet.

Performances

Live musical performance is the ultimate goal of most of my work. These papers
are descriptions of particular musical performances, including the context of
each performance, the musical goals, the design and implementation of the interactive
sensor/computer-based musical instruments used for each performance, and critiques
of each result.

Wessel, D., M. Wright, and J. Schott. 2002. “Situated Trio: An Interactive Live Performance for a Hexaphonic Guitarist and Two Computer Musicians with Expressive Controllers.” Proceedings of the 2002 International Conference on New Interfaces for Musical Expression (NIME), Dublin, Ireland.

A short description of the co-design of instruments, musical material, and
modes of interaction for an improvisational setting involving an electric
guitarist and two performers of sensor/computer-based instruments, including
our musical goals, our view of the need for improvisation, and a description
of the instruments.

Madden, T., R. B. Smith, M. Wright, and D. Wessel. 2001. “Preparation for Interactive Live Computer Performance in Collaboration with a Symphony Orchestra.” Proceedings of the 2001 International Computer Music Conference, La Habana, Cuba.

A description of the design, implementation, and use of an interactive computer-based
instrument in the context of a composition for orchestra and live electronics,
including the composer's musical goals for the electronics and the special
demands of using computer-based instruments in the context of a symphony orchestra.

Wessel, D., M. Wright, and S. A. Khan. 1998. “Preparation for Improvised Performance in Collaboration with a Khyal Singer.” Proceedings of the 1998 International Computer Music Conference, Ann Arbor, Michigan, pp. 497-503.

A description of the preparation and realization of a real-time interactive
improvised performance carried out by two computer-based musicians and a classical
Khyal singer. (Khyal is a genre of classical music from Pakistan and
North India.) A number of technical and musical problems were confronted,
including the problem of cultural distance between the musical genres, specification
and control of pitch material, real-time additive sound synthesis, expressive
control, rhythmic organization, timbral control, and the ability to perform
for a sustained period of time while maintaining an engaging dialog among
the performers.

OpenSound Control (OSC)

OSC is a protocol for communication among computers, sound
synthesizers, and other multimedia devices that is optimized for modern networking
technology. It was developed at CNMAT by Adrian Freed and me, and is now used
extensively throughout the world for both local-area and wide-area networking. (See also my paper(s) under Standards In General.)

Wright, Matthew. 2005. "Open Sound Control: an enabling technology for musical networking." Organised Sound 10, no. 3: 193-200. (Reprint from Cambridge.org)

This paper attempts to be both a polemical position paper and a general introduction to OSC and its use for musical networking for the Organised Sound audience. The position I take is based on an influential paper in the HCI field called ‘Beyond being there’ (Hollan and Stornetta 1992): communication mediated by network technology will never be as good as face-to-face communication on its own terms, so the task for technology developers should be to leverage the specific unique advantages of new communications media.

Wright, M., A. Freed, and A. Momeni. 2003. “OpenSound Control: State of the Art 2003.” Proceedings of the 2003 International Conference on New Interfaces for Musical Expression (NIME), Montreal, Quebec, Canada.

A high-level overview of OSC's features, a detailed analysis of all known OSC implementations (at the time), a list of known artistic projects using OSC, an argument in favor of OSC as an organizational tool for real-time music software, and many references. I'd recommend this as the best introduction to OSC.

Wright, M. 2002. “OpenSound Control Specification.” Published electronically by CNMAT: http://opensoundcontrol.org/specification

The formal specification for OSC.
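As a concrete illustration of what the specification defines, here is a minimal Python sketch of the OSC 1.0 message encoding: a null-terminated, 4-byte-padded address pattern, a type-tag string beginning with ',', and big-endian arguments. Only float32 arguments are handled here; the address is invented for illustration.

```python
# Minimal sketch of an OSC 1.0 message's wire format: padded address pattern,
# padded type-tag string, then big-endian binary arguments.
import struct

def osc_string(s: str) -> bytes:
    b = s.encode("ascii") + b"\x00"             # null-terminate...
    return b + b"\x00" * (-len(b) % 4)          # ...and pad to a multiple of 4 bytes

def osc_message(address: str, *floats: float) -> bytes:
    type_tags = "," + "f" * len(floats)          # e.g. ",ff" for two float32 args
    args = b"".join(struct.pack(">f", x) for x in floats)   # big-endian float32
    return osc_string(address) + osc_string(type_tags) + args

packet = osc_message("/voice/3/freq", 440.0)     # hypothetical address
print(packet.hex(" "))
```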

Schmeder, Andrew W. and Matthew Wright. 2004. "A Query System for Open Sound Control." Proceedings of the 2004 OpenSoundControl Conference, Berkeley, CA.

A review of the need for queries in OSC and the approaches taken so far, and a proposal for a new system.

Wright, Matthew. 2004. "Brief overview of OSC and its application areas." Proceedings of the 2004 OpenSoundControl Conference, Berkeley, CA.

A very brief introduction to OSC, and a list of the kinds of uses of OSC that I knew about at the time. I'd recommend the NIME03 paper (above) as a better introduction to OSC, and that you look at the current list of OSC's application areas on opensoundcontrol.org.

Wright, M., A. Freed, A. Lee, T. Madden, and A. Momeni. 2001. “Managing Complexity with Explicit Mapping of Gestures to Sound Control with OSC.” Proceedings of the 2001 International Computer Music Conference, La Habana, Cuba, pp. 314-317.

A description of the software architecture advantages of using the OSC addressing
scheme, even within applications running on a single machine. In particular,
we advocate representing gestural input as well as sound control with OSC
messages, so that the all-important mapping from gestures to sound control
can take place within the framework of the OSC addressing scheme.
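A minimal sketch of that pattern using the modern python-osc package (not the software described in the paper; the OSC address names are invented for illustration): gestural input arrives as OSC messages, and the mapping layer re-emits sound-control OSC messages.

```python
# Sketch of gesture-to-sound mapping expressed entirely in OSC address space,
# using python-osc. Addresses and scalings are hypothetical.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 9001)      # wherever the synth listens

def map_tablet_pressure(address, pressure):
    # The mapping from gesture to sound control lives here, as OSC -> OSC:
    synth.send_message("/voice/1/gain", float(pressure) ** 2)

def map_tablet_xy(address, x, y):
    synth.send_message("/voice/1/pitch", 200.0 + 800.0 * float(x))
    synth.send_message("/voice/1/brightness", float(y))

dispatcher = Dispatcher()
dispatcher.map("/tablet/pressure", map_tablet_pressure)
dispatcher.map("/tablet/xy", map_tablet_xy)

# Blocks and routes incoming gesture messages to the handlers above:
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```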

Wright, M. 1998. “Implementation and Performance Issues with OpenSound Control.” Proceedings of the 1998 International Computer Music Conference, Ann Arbor, Michigan, pp. 224-227. (pdf)

A description of the OSC Kit, a C or C++ library that provides OSC addressability
to an application without degrading reactive real-time performance. This paper
provides a high-level overview of the Kit's interfaces to the rest of an
application and how performance issues are addressed.

Wright, M. and A. Freed. 1997. “Open Sound Control: A New Protocol for Communicating with Sound Synthesizers.” Proceedings of the 1997 International Computer Music Conference, Thessaloniki, Hellas (Greece), pp. 101-104.

The original description of OSC, including the data representation and semantics,
the URL-style addressing scheme, requirements for a network transport layer
to be able to carry OSC, and the querying mechanism.

Analysis/Synthesis and the Sound Description Interchange Format (SDIF)

I have long been interested in analysis/synthesis of musical sound, especially
with sinusoidal models. In a nutshell, "analysis" converts recorded
sound into a "sound description" of some type, typically with interesting
properties such as mutability, compactness, or congruence with human perception;
"synthesis" is the production of sound from a (possibly modified)
sound description. Frustrated by the incompatibility of each analysis/synthesis
system's individual file formats for representing analysis results, I helped
Xavier Rodet and Adrian Freed design, implement, and promote the Sound
Description Interchange Format
(SDIF) standard, which is an open-ended format
framework for representing different kinds of sound descriptions including sinusoidal
track models, resonance models, time-domain samples, STFT results, etc. (See also my paper(s) under Standards In General.)

 

Wright, Matthew and Julius O. Smith III. 2005. “Open-Source Matlab Tools for Interpolation of SDIF Sinusoidal Synthesis Parameters.” Proceedings of the International Computer Music Conference, Barcelona, pp. 632-635.

One major difference among analysis/synthesis systems using sinusoidal models is the way the synthesizer interpolates each partial's parameters (amplitude, frequency, and phase) between the time points where they are specified (by the SDIF file produced by the analysis). So I wrote this Matlab software to make it easy to use different interpolation methods to resynthesize sinusoidal models stored in SDIF files. The software is in two parts: the general-purpose part handles the specifics of SDIF and all management of partial births and deaths, etc., while a collection of interpolation-method-specific plugin procedures handles the actual synthesis.
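The underlying problem is easy to illustrate without the Matlab tools themselves: given one partial's breakpoints at analysis-frame times, an interpolation method fills in sample-by-sample parameter values, which then drive an oscillator. The sketch below (with made-up breakpoint values) uses linear interpolation; other methods, such as cubic phase interpolation, plug in at the same step.

```python
# One resynthesized partial from (time, frequency, amplitude) breakpoints.
# Breakpoint values are invented; a real SDIF file supplies them per partial.
import numpy as np

fs = 44100
frame_times = np.array([0.00, 0.25, 0.50, 0.75])        # seconds (from analysis)
freqs = np.array([440.0, 442.0, 439.0, 441.0])          # Hz at each frame
amps  = np.array([0.00, 0.30, 0.25, 0.00])              # linear amplitude

t = np.arange(0, frame_times[-1], 1 / fs)
freq_t = np.interp(t, frame_times, freqs)               # the interpolation step
amp_t  = np.interp(t, frame_times, amps)

phase = 2 * np.pi * np.cumsum(freq_t) / fs              # integrate frequency
partial = amp_t * np.cos(phase)                         # one partial's samples
# A full resynthesis sums many partials like this, handling births and deaths.
```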

Wright, M., J. Beauchamp, K. Fitz, X. Rodet, A. Röbel, X. Serra, and G. Wakefield. 2000. “Analysis/Synthesis Comparison.” Organised Sound, 5(3): 173-189. (Full text pdf reprint from cambridge.org)

I chaired a panel session on Analysis/Synthesis at the 2000 International Computer Music
Conference in Berlin. Each participant represented one of six analysis/synthesis
systems. The participants contributed a common suite of 27 input sounds, and
then each participant analyzed each sound with their system and provided the
result as an SDIF file. The panel began with a short description of each system
and then went into a group discussion on topics such as choosing sound models
and analysis parameters, interpolation of sinusoidal parameters, mutability
of sound models, morphing, and modeling of noise. This paper was written after
the panel session and incorporates the results of the discussion.

Wright, M., A. Chaudhary, A. Freed, S. Khoury, and D. Wessel. 1999. “Audio Applications of the Sound Description Interchange Format Standard.” Audio Engineering Society 107th Convention, New York, preprint #5032.

The best and most complete SDIF paper, including SDIF's history and goals,
the specification of the SDIF format and some standard sound description types,
and applications of SDIF.

Wright, M., A. Chaudhary, A. Freed, S. Khoury, and D. Wessel. 2000. “An XML-based SDIF Stream Relationships Language.” Proceedings of the 2000 International Computer Music Conference, Berlin, Germany, pp. 186-189.

A description of an XML-based language for describing the relationships among
the various sound descriptions ("streams") in SDIF data, e.g., "this
stream came from an analysis of that stream", "these streams belong
together in a timbre space", etc.

Schwarz, D. and M. Wright. 2000. “Extensions and Applications of the SDIF Sound Description Interchange Format.” Proceedings of the 2000 International Computer Music Conference, Berlin, Germany, pp. 481-484.

A review of new extensions to the SDIF standard, new SDIF-processing applications,
new implementations of SDIF libraries, new interfaces to SDIF libraries from
Matlab and Common Lisp, etc.

Wright, M., S. Khoury, R. Wang, and D. Zicarelli. 1999. “Supporting the Sound Description Interchange Format in the Max/MSP Environment.” Proceedings of the 1999 International Computer Music Conference, Beijing, China, pp. 182-185. (pdf)

I wrote and maintain a set of freely-available objects that support SDIF within
the Max/MSP environment. This paper describes the design and use of these objects
and some simple Max/MSP applications that use them.

Wright, M. and E. Scheirer. 1999. “Cross-Coding SDIF into MPEG-4 Structured Audio.” Proceedings of the 1999 International Computer Music Conference, Beijing, China, pp. 589-596. (pdf)

This describes the design and implementation of a freely-available set of tools
that convert SDIF data into MPEG-4 Structured Audio bitstreams. These allow the
synthesis of SDIF data with any fully-compliant MPEG-4 decoder.

Wright, M. 1999. “SDIF Specification.” Published electronically by CNMAT: http://cnmat.berkeley.edu/SDIF/Spec.html

The formal specification for SDIF.

Wright, M., A. Chaudhary, A. Freed, D. Wessel, X. Rodet, D. Virolle, R. Woehrmann, and X. Serra. 1998. “New Applications of the Sound Description Interchange Format.” Proceedings of the 1998 International Computer Music Conference, Ann Arbor, Michigan, pp. 276-279.

The original paper on SDIF.

Standards In General

Based on my experiences with OSC and SDIF, I have been involved in
two panel sessions related to standards in general and potential future standards.

Jensenius, A. R., A. Camurri, N. Castagne, E. Maestre, J. Malloch, D. McGilvray, D. Schwarz, and M. Wright. 2007. "Panel: The Need of Formats for Streaming and Storing Music-Related Movement and Gesture Data." Proceedings of the 2007 International Computer Music Conference, Copenhagen, Denmark, Vol. 2, pp. 711-714.

This was a discussion about representations for physical movements of musical performers, such as
those acquired by motion-capture systems.

Wright, Matthew, Roger Dannenberg, Stephen Pope, Xavier Rodet, Xavier Serra, and David Wessel. 2004. "Panel: Standards from the Computer Music Community". Proceedings of the 2004 International Computer Music Conference, Miami, FL, pp. 711-714.

I organized and moderated this panel session at the 2004 ICMC on Standards from the Computer Music Community, reviewing existing standards created by or of interest to our community, as well as provoking an interactive group discussion on possible directions for future standards-related work.

Open Sound World (OSW)

Open Sound World (OSW) is a scalable, extensible programming
environment that allows sound designers and musicians to process sound in response
to expressive real-time control. OSW combines the familiar visual patching paradigm
with solid programming-language features such as a strong type system and hierarchical
name spaces, and a new, more intuitive model for specifying new components. OSW
was implemented almost entirely by Amar Chaudhary as his Ph.D. dissertation
project; I was involved to a much lesser degree, mainly in the integration with
SDIF and OSC and in helping with some of the design.

Chaudhary, A., A. Freed, and M. Wright. 2000. “An Open Architecture for Real-time Music Software.” Proceedings of the 2000 International Computer Music Conference, Berlin, Germany, pp. 492-495.

A brief description of OSW as a programming language and environment, including
the dataflow programming model, scripting, creation of new primitives in the
"C" language, control of time, and scheduling.

Chaudhary, A., A. Freed, and M. Wright. 1999. “An Open Architecture for Real-Time Audio Processing Software.” Audio Engineering Society 107th Convention, New York, preprint #5031.

The best and most complete paper on OSW.

Neural Network Control of Additive Synthesis

Additive synthesis is an extremely powerful technique, but the classic problem
is the large number of parameters that must be controlled to produce satisfying
results. One approach to this problem is to use neural networks to control the
low-level additive synthesis parameters; a small number of input units correspond
to the parameters that the performer will control in real-time.

Wessel, D., C. Drame, and M. Wright. 1998. “Removing the Time Axis from Spectral Model Analysis-Based Additive Synthesis: Neural Networks versus Memory-Based Machine Learning.” Proceedings of the 1998 International Computer Music Conference, Ann Arbor, Michigan, pp. 62-65.

The typical use of additive synthesis is resynthesis of (modified) sound
models that come from the analysis of prerecorded notes or phrases. This always
involves some kind of control of the progression through the time axis of
the sound model. This paper explores an alternate approach, in which a neural
network or k-nearest-neighbors model takes time-varying control inputs (e.g.,
pitch, loudness, timbre) and produces the partial frequencies and amplitudes
based on the analyzed sound model(s).
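A toy sketch of that kind of mapping, with synthetic data and an invented network shape (the papers' actual networks, features, and training data differ): a small network maps (pitch, loudness) control inputs directly to the amplitudes of N partials, with no time axis to step through.

```python
# Illustrative only: train a small regressor to map control inputs straight to
# additive-synthesis partial amplitudes. The "training data" here is fabricated.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_examples, n_partials = 2000, 20

pitch = rng.uniform(40, 80, n_examples)                 # MIDI note numbers
loud  = rng.uniform(0, 1, n_examples)                   # normalized loudness
X = np.column_stack([pitch, loud])

# Fake target partial amplitudes (a real system would use analysis data):
k = np.arange(1, n_partials + 1)
Y = loud[:, None] / k * np.exp(-k * (1 - loud)[:, None])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, Y)

# At performance time, each new (pitch, loudness) control frame yields a full
# set of partial amplitudes for the additive synthesizer:
print(net.predict([[60.0, 0.7]]).shape)                 # -> (1, 20)
```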

Freed, A., M. Goldstein, M. Goodwin, M. Lee, K. McMillen, X.
Rodet, D. Wessel, and M. Wright. 1994. “Real-Time Additive Synthesis Controlled
by a Mixture of Neural-Networks and Direct Manipulation of Physical and Perceptual
Attributes.” Proceedings of the 1994 International Computer Music Conference,
Aarhus, Denmark, pp. 362-363.

This paper describes an additive synthesis system that uses neural networks
trained on analysis data to control pitch, loudness, articulation, and position
in timbre space. (Sorry; this paper is not available online.)

Studio Report

Wright, Matthew, Jonathan Berger, Christopher Burns, Chris Chafe, Fernando Lopez-Lezcano, and Julius O. Smith, III. 2004. "CCRMA Studio Report". Proceedings of the 2004 International Computer Music Conference, Miami, FL, pp. 268-271.

An overview of CCRMA (people, courses, facilities, research, music, and mechanisms for sharing work) during my first year as a graduate student there.

The Zeta Instrument Processor Interface (ZIPI)

ZIPI was designed to be the replacement for MIDI. It had lots of great and
well-thought-out features, but somehow failed to take over the world. Most of
these papers appeared in the Winter 1994 issue of the Computer Music Journal
and are available online (for electronic subscribers) via the Computer Music
Journal home page.

McMillen, K. A., D. Simon, and M. Wright. 1996. “Communications network interface, and adapter and method therefor.” United States Patent #5,483,535, Zeta Music Partners.

In a token ring network, this invention allows communications chips that
are designed to be slaves in a master/slave architecture to function in a
peer-to-peer manner.

Wright, M. 1995. “A Hierarchy of Perceptually Meaningful Controls in ZIPI's Music Parameter Description Language (MPDL).” Proceedings of the 1995 Society for Music Perception and Cognition Conference, Berkeley, CA, p. 14. (Abstract only)

This was a proposal to organize every kind of sound synthesis control (e.g.,
gain, filter parameters, timbre space position, simulated air pressure, etc.)
into an enormous perceptually-meaningful tree structure. Parameters with comparable
perceptual results would be siblings and more specific parameters would be
children of more general ones.

McMillen, K., M. Wright, D. Simon, and D. Wessel. 1994. “ZIPI—An Inexpensive, Deterministic, Moderate-Speed Computer Network for Music and Other Media.” Proceedings of the 1994 Audio Engineering Society 13th International Conference: Computer-Controlled Sound Systems, Dallas, TX, pp. 145-151.

A description of ZIPI targeted to the sound system control audience, particularly
emphasizing the ways that ZIPI satisfied the requirements for a local area
network for sound system control as detailed by AESSC Subgroup WG-10-1.

McMillen, K., D. Wessel, and M. Wright. 1994. “The ZIPI music parameter description language.” Computer Music Journal, 18(4): 52-73.

A description of ZIPI's application-layer protocol for music parameters,
including the addressing scheme, the distinction between synthesis control
and gestural measurements, a large collection of music control parameters
with their fixed-point numerical encodings and units, and rules for combining
parameter values sent to different levels of the three-level address tree.

McMillen, K., D. Simon, and M. Wright. 1994. “A Summary
of the ZIPI Network.” Computer Music Journal, 18(4): 74-80.

In 1994, there was no low-cost off-the-shelf networking technology capable
of satisfying the latency, bandwidth, and reliability needs of ZIPI, so ZIPI
included a token-ring network.

Wright, M. 1994. “Examples of ZIPI applications.”
Computer Music Journal, 18(4): 81-85.

Wright, M. 1994. “A comparison of MIDI and ZIPI.”
Computer Music Journal, 18(4): 86-91.

Wright, M. 1994. “Answers to frequently asked questions
about ZIPI.” Computer Music Journal, 18(4): 92-96.

McMillen, Keith, David Simon, David Wessel, and Matthew Wright. 1994. “A New Network and Communications Protocol for Electronic Musical Devices.” Proceedings of the International Computer Music Conference, Aarhus, Denmark, pp. 443-446.

Computer Science Textbook

Harvey, B. and M. Wright. 1994. Simply Scheme: Introducing Computer Science. Cambridge, MA: MIT Press.

This is an introductory computer science textbook using the Scheme dialect
of Lisp and intended as a "prequel" to the popular Structure and Interpretation
of Computer Programs by Abelson and Sussman with Sussman. The table of contents
is available online.

Harvey, B. and M. Wright. 1999. Simply Scheme: Introducing Computer Science, 2nd ed. Cambridge, MA: MIT Press.

The second edition is very similar to the first; the only major change was
removing dependencies so that instructors could choose to teach recursion
before or after higher-order functions.

Discography

Wright, M. 1993. "The Complete Bill Frisell discography." Originally published as a post to the newsgroup rec.music.bluenote.

My first publication was this enormous, then-comprehensive, annotated discography
of the recorded works of improvising guitarist and composer Bill Frisell.