

> General reflection (not to mention runtime reflection) is probably the way to go. Of course the existing algorithms will stay uglier, but they'll also keep working with new environments (and, as far as possible, without a runtime perf. cost). The only thing that one will have to do is map the new "shape" in these proxies (which are, as far as possible, doing their stuff at compile-time) and all the existing backends will keep working with the algos defined in the better way.

> More future-proof? Only if this is the one true form of reflected code that gets adopted.

Ah, actually not :) the library tries to do things in two steps in many places:

1/ map the user's code to a proxy depending on which concepts it conforms to: for instance whether your audio processing function is written per-sample (that's still in progress tho):

`float operator()(float input)`
My end goal for this is that when I make an object for the main software I'm developing, ossia score ( ), then the whole media arts community can benefit :-) Related projects are Faust ( ) and SOUL ( ), but they are both domain-specific languages with their own compilers. I wanted a pure-C++ thing instead, which allows calling native code directly and enables more than just generic audio processing: unlike Faust and SOUL (last time I checked), it's possible to make a message-based object for Pd or Max, not just an audio filter or synthesizer.

If you wrote software that worked with soundflower, it means that at some point you called either the CoreAudio API directly or some abstraction on top of it (RtAudio, PortAudio...). Etc. etc., there are a couple hundred of those, which always depend on some API and are thus not easily portable across environments: if tomorrow you want to make an audio software and want to use one of the, say, VCVRack plug-ins, you're going to have to bring the whole VCV run-time API along.

Here the idea is to write the algorithms in a way that is more future-proof, by not having them depend on any run-time API, just a generic specification (given as a set of C++20 concepts). This way the algorithms will still be useful in 10 years when everyone has moved to API N+1, unlike a metric ton of existing audio software which depends on a specific audio / media-object API for no good reason (today! When they were written, C++ wasn't advanced enough to allow this at all).
