The history of music and sound synthesis languages can be traced back to the Music N languages, starting in the 1950s. You can trace the threads from there to a variety of languages developed in the 90s, including CSound, ChucK and SuperCollider. CSound could be seen as the last of the "traditional" Music N languages, which focus mainly on sound synthesis, whereas ChucK and SuperCollider add flexible tools for composition as well. (There are also a variety of graphical point-and-click languages such as Max MSP and PureData that descend from the Music N paradigm, but I am interested only in text-based languages here.)

sndflo also implements the remote runtime part of the FBP protocol, which allows seamless interconnection between runtimes. One can export ports in one runtime and then use it as a component in another runtime, communicating over one of the supported transports (typically JSON over WebSocket). In one example setup, sndflo runs on a Raspberry Pi and is then used as a component in a NoFlo browser runtime that provides a web interface, both programmed with Flowhub. Because a web browser cannot talk OSC (UDP/TCP) and SuperCollider does not talk WebSocket, a node.js wrapper converts FBP protocol messages between JSON over WebSocket and JSON over OSC. We could in the same way wire up another FBP runtime, for instance using MicroFlo on an Arduino to integrate some physical sensors into the system. Pretty handy for embedded systems, interactive art installations, internet-of-things or other heterogeneous systems.

One could, for instance, set up an audio pipeline visually using Flowhub+sndflo, and then use the Event/Pattern/Stream system in SuperCollider to create an algorithmic composition that drives this pipeline.
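To make that last point concrete, here is a minimal sketch (not part of sndflo) of driving such a pipeline from the Event/Pattern/Stream system. The SynthDef name \sawfilter and its arguments are assumptions standing in for a component that the visual graph would expose.

// A minimal sketch (not part of sndflo): a SynthDef standing in for one
// component of the pipeline, with the arguments the patterns will control.
(
SynthDef(\sawfilter, { |out = 0, freq = 220, cutoff = 1200, amp = 0.2, gate = 1|
    var env = EnvGen.kr(Env.asr(0.01, 1, 0.3), gate, doneAction: 2);
    Out.ar(out, LPF.ar(Saw.ar(freq), cutoff) * env * amp ! 2);
}).add;
)

// An algorithmic composition driving it: step through a minor arpeggio,
// alternating filter cutoffs, one event every quarter of a beat.
(
Pbind(
    \instrument, \sawfilter,
    \scale, Scale.minor,
    \degree, Pseq([0, 2, 4, 7], inf),
    \cutoff, Pseq([400, 800, 1600, 3200], inf),
    \dur, 0.25
).play;
)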
A lot of time was spent wrestling with SuperCollider, due to the number of new concepts, the myriad of ways to do things, and the lack of (well documented) best practices. There is also a tendency to favor very short, expressive constructs (often opaque). As an extreme example, here is an album of SuperCollider pieces composed in under 140 characters each (plus an analysis of some of them).

sndflo, by contrast, is very focused and opinionated. It exposes Synths as components, which are wired together using Busses (the edges in the graph), allowing one to build audio effect pipelines. There are several known issues and limitations, but it has now reached a minimally useful state. Creating the Synth components themselves (the individual effects) as a visual graph of UGen components (primitives like Sin, Cos, Min, Max, LowPass) is also within scope and planned for the next release.

Figure: simple subtractive audio synthesis using a saw wave and a low-pass filter.

The sndflo runtime is itself written in SuperCollider, as an extension. This is to make it easier for those familiar with SuperCollider to understand the code, and to facilitate integration with existing SuperCollider code and tools.
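As a hand-written sketch (not code generated by sndflo) of what such a graph boils down to in plain SuperCollider: two Synths acting as components, connected by an audio Bus acting as the edge. The names here are illustrative, and the server (s) is assumed to be already booted.

// Two components and one edge, written out by hand for illustration.
(
~edge = Bus.audio(s, 2);                        // the edge in the graph

SynthDef(\sawSource, { |out = 0, freq = 110, amp = 0.3|
    Out.ar(out, Saw.ar(freq, amp) ! 2);         // saw-wave source component
}).add;

SynthDef(\lowPass, { |in, out = 0, cutoff = 800|
    Out.ar(out, LPF.ar(In.ar(in, 2), cutoff));  // low-pass filter component
}).add;
)

// Evaluate after the SynthDefs have reached the server. The source is placed
// at the head and the filter at the tail, so the filter reads the bus after
// the source has written to it.
(
~src = Synth.head(s, \sawSource, [\out, ~edge]);
~lpf = Synth.tail(s, \lowPass, [\in, ~edge, \out, 0]);
)

~lpf.set(\cutoff, 2000);   // tweak the running pipeline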
We used SuperCollider for Piksels & Lines Orchestra, an audio performance system which hooked into graphics applications like GIMP, Inkscape, MyPaint and Scribus, and sonified the user's actions in those applications.

Figure: growing list of runtimes that Flowhub can target.
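The sonification idea can be pictured roughly as follows. This is an assumed sketch, not the actual Piksels & Lines Orchestra code: the OSC address /pencil/stroke and its argument are made up for illustration, with the graphics application sending one OSC message per user action and SuperCollider answering with a short sound.

// Listen for a made-up '/pencil/stroke' message and play a ping whose pitch
// depends on the reported stroke length.
(
SynthDef(\strokePing, { |freq = 440, amp = 0.2|
    var env = EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
    Out.ar(0, SinOsc.ar(freq, 0, amp * env) ! 2);
}).add;

OSCdef(\strokeListener, { |msg|
    var length = msg[1].asFloat;               // e.g. stroke length in pixels
    Synth(\strokePing, [\freq, 220 + (length % 660)]);
}, '/pencil/stroke');
)

// The application side then only needs to send e.g. "/pencil/stroke 137.5"
// to the SuperCollider language port (57120 by default).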