I invented /dev/OSC to see if we could avoid the OS X operating-system latencies in delivering OSC packets into Core Audio signal processing. The two halves were implemented by Rimas (the driver) and Matt (the Max external). Although we confirmed early on that it was functional, it wasn't until some recent latency measurements with Andy's tools that we confirmed it is the fastest of the many paths we tried from gesture sources to audio output. We will write a paper on this at some point; meanwhile, hopefully Andy can attach some of the graphs to this post to illustrate the results.

I believe we can do even better than the current implementation by eliminating the read(2) system call on /dev/OSC: instead, the kernel would write directly (at primary interrupt time) into a lock-free ring buffer owned by the user process. This is interesting to explore both for our various applications and as a general tool for the parlab work. Apple has given us access to a high-priority thread (Core Audio) and a way to get audio and (possibly) MIDI in and out of it. What is missing is a low-latency way to get other I/O in and out. We have faked this by embedding our data in the audio stream; we can do better with a streamlined /dev/OSC. This would let us emulate a "bare metal" view of the machine without throwing out the OS and, more importantly, without throwing away or rebuilding Max/MSP.