Group instrument instances

Hello. I’m new to Csound and this is my first post on this forum.

I want to develop a JS library with some syntactic sugar on top of Csound. In the library I’d like to make it possible to define an instrument and then create a node corresponding to that instrument, and for that node to be able to connect to other nodes regardless of the order of definition. For example:

// the equivalent of instrument definitions in Csound
ndef InstrumentName(someparam) { ... }
ndef AnotherInstrument() { ... }

// then I can instantiate a node
let instr1 = InstrumentName();
let instr2 = AnotherInstrument();

// then I can connect nodes, forming a graph
instr1.out(instr2);

// and I can set parameters for a node (p4, p5, etc. automatically bound in the ndef)
instr1.set("someparam", 42);

// I can form another graph using the same instruments and setting 
// parameters for them independently
let instr3 = InstrumentName();
let instr4 = AnotherInstrument();
instr4.out(instr3);

// there will also be patterns (i.e. event generators), which you can set independently 
// per node. So you can run the same instrument but use different patterns in 
// parallel
instr1.set("someparam", /*pattern generator, producing events*/);
instr3.set("someparam", /*pattern generator, producing different events in parallel*/);

So far I’ve found subinstr. So I could make a graph, which would be another instrument definition, and use the instruments (i.e. nodes) inside it via subinstr. But I’m not sure whether it’s performant enough. Is there any performance cost to using subinstr? Or maybe there’s a better way to do this? Also, what do you think about using UDOs as node definitions instead of instruments? Graphs could then be instruments built from those UDOs. Is there any performance difference between instruments and UDOs?
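To make it a bit more concrete, here is roughly the kind of nesting I have in mind (a minimal sketch only; the instrument numbers and parameter values are placeholders I made up):

instr 1 ; a "node": a simple sine oscillator
aSig oscili p4, p5
out aSig ; when run via subinstr, this goes to the caller instead of the speakers
endin

instr 2 ; a "graph" instrument that nests instr 1 via subinstr
aNode subinstr 1, 0.3, 440
aL, aR reverbsc aNode, aNode, 0.7, 12000
outs aL, aR
endin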

I don’t know anything about writing opcodes natively for now, but I have experience with Rust, C/C++ and audio, so I’m OK if a solution requires writing a “native” opcode, and I’m open to any suggestions.

I’m pretty sure I’m not the best person to advise on this; @stevenyi will be able to provide better advice, I feel. Personally I’d steer clear of subinstr. I’ve found it to be a little resource-hungry. I would probably make each node an instrument, with its IO defined as uniquely named channels, and then a main mixer node to control all the various outputs. I’m probably oversimplifying things a bit.
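Very roughly something like this (just a sketch, the channel names are made up):

instr 10 ; a source "node"
aSig vco2 0.2, 220
chnmix aSig, "node1_out" ; each node writes/mixes into its own named channel
endin

instr 100 ; a simple mixer/output node
a1 chnget "node1_out"
a2 chnget "node2_out"
out a1 + a2
chnclear "node1_out"
chnclear "node2_out"
endin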

Some further reading:
http://csound.1045644.n5.nabble.com/Csnd-Modular-FX-setup-in-Csound-td5769211.html#a5769217

Thanks! I’ll check it out. What do you think about using opcodes as node definitions and then wrapping them into instruments on instantiation? I mean, automatically defining an instrument whenever a new node is needed.

This seems reasonable, but I don’t think it’s possible to do this at the opcode level without some serious work. Steven did something before on using opcodes like this, but it was seriously low-level stuff as far as I can recall.

There is also the option of modifying the Csound graph in realtime, although I’ve not done it myself. Check the tree methods. I’m not sure those methods have been exposed in web Csound, though.

Oh, I was unclear. This is for the Deno runtime, so I’m going to write the JS bindings in Rust and will be able to make changes at that level. This is for desktop in the first place.

I’m going to translate the JS AST inside opcode/node definitions (which will be represented as a special kind of function) into the Csound AST. Calling opcodes will also have a special syntax, but everything else will be plain JS. What I mean is that you then reference the opcode in a node initializer. Something like:

let node = Node("opcode"); // here will be a special syntax to reference opcode

Then on the Csound side, the JS’s Node("opcode") will be something like:

instr 1 ; the number will be generated automatically
aSig SomeOpcode p4, p5 ; "SomeOpcode" stands for the referenced opcode; arguments will be mapped automatically
outleta "out", aSig ; expose the result as an outlet (sorry if there are mistakes, I'm new to Csound)
endin

The instrument will include inlets and outlets to connect to other nodes. The node variable on the JS side will keep an object with a reference to the instrument (i.e. its number). When we need to move a node, we’ll change the number of the instrument. If we initialize a new node, a new instrument with a new number is created. Is this a normal way to go in Csound, and is it performant enough? Also, is it scalable? I mean, what if there are a lot of nodes? I’m still open to other ideas if there’s something better.

The main idea is to make it simple to process instruments with effects, while being able to define the instruments and effects at runtime, and to control any parameter of any instrument or effect in the chain with a static value or a pattern (event generator). I’d also like to make it possible to control parameters by mapping opcodes onto them, but I need to figure out static values and patterns first :sweat_smile: I’m open to ideas on how to implement the opcode mapping, though.
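My current guess is that on the Csound side a parameter could be exposed as a named control channel, so that a static value (written from the host) and a pattern-generated event would both just write to that channel. A rough sketch, with made-up names:

instr 5 ; a node reading its "freq" parameter from a control channel
kFreq chnget "node1_freq"
aSig oscili 0.2, kFreq
out aSig
endin

instr 6 ; a tiny helper that a pattern/event generator could schedule
chnset p4, "node1_freq"
endin

; the events produced by a pattern would then look like:
; i 6 0.0 0.1 220
; i 6 0.5 0.1 440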

Your project sounds interesting. Hopefully Steven will drop in at some point. I think he could advise you better than I can :wink:

Greetings @alestsurko!

Both of these use cases, free graph formation of instruments or of opcodes, are not well supported in CS6. They are, however, both features targeted for CS7. Of these, freely interconnecting opcodes on their own is semi-complete. I did a project some years ago that allowed reusing Csound opcodes in the Aura system; Jari Kleimola recently built upon that work and was able to make an API to interconnect opcodes in the browser with JS.

That said, in the short term I’m not sure how well this would work, as it would depend on the number of instances, but you could try instantiating an instance of Csound per processing unit you want to add to your graph and using getNode() to interconnect the WebAudio nodes together to form your graph. (Or you might get in touch with Jari to see about using the version of Csound he’s been working on; I’ll contact him to direct him to this thread if he has a free moment to respond.)

We have been talking about CS7 in the community for a long while, but I think we should finally be able to start moving that forward after we get the new version of WebAudio Csound out. At that point, I’m hoping to get Jari’s work integrated into CS7 as the UGen API (Add UGen API · Issue #407 · csound/csound · GitHub).

Thank you very much for your attention, folks! I confused you by saying it’s a JS library. It’s for Deno; there’s no Web Audio API for Deno yet, and I wasn’t going to use it anyway. It’s for desktop. I’m going to run the audio loop on the Rust side, so the major part of the communication with the Csound API will be on the Rust side (i.e. it’s based on the C API). Having said that, is wrapping opcodes into instruments at runtime to connect them to each other still a bad idea? Or is there something about the relation between the C and WebAudio APIs that matters for my issue?

Since I wrote the last answer, I’ve found another possible way to do this: using the mixer or zak opcodes to automatically assign inputs and outputs to the instrument wrappers for the opcodes. That way I could simply switch channels instead of tracking instrument numbers and changing those numbers. Going this way, should I still worry about the order of definition of the instruments? And is it performant enough? Say, what if I had hundreds or thousands of such opcode wrappers (instruments + zak/mixer channels)?
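For the zak variant I’m imagining something like the following, with the channel index passed in as a p-field (just a sketch, all the numbers are placeholders):

zakinit 64, 64 ; in the orchestra header: 64 a-rate and 64 k-rate zak channels

instr 10 ; an opcode wrapper writing to the zak channel chosen by p4
iOutCh = p4
aSig vco2 0.2, 220
zawm aSig, iOutCh ; mix into zak audio channel iOutCh
endin

instr 90 ; a reader, e.g. an effect or the final output
iInCh = p4
aSig zar iInCh
out aSig
zacl 0, 63 ; clear the zak audio channels after the last reader
endin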

Yes, maybe not an issue for the WebAudio API then (though the Csound WASM binary can be run in Node; not sure if that’s of interest for Deno if you can run libcsound via the C API directly; perhaps @hlolli might comment).

For interconnection, there’s always the issue of Csound’s instrument order for performance. Some thoughts:

  • Do you need to dynamically reconfigure the graph at runtime? If so, would it work for your use case to re-gen the Csound code and have Csound recompile it (i.e., what we do with live coding)? If that is the case, you can always represent code as nodes in JS, generate the code in any order as a single instrument, then recompile. It will potentially cause a glitch, since state isn’t preserved when moving from one instance of an instrument to a new instance of an updated instrument, but it gives flexibility in the order of operations. (I think Anton’s Haskell csound-expression library works this way: csound-expression: library to make electronic music.)

  • Rather than zak, you could use the channel system to communicate between instrument instances. This gives you the flexibility to name values, and you can dynamically create channel names so that instances of an instrument can read/write to different channels.

  • Something I’ve done with live coding is to have all of the numbered instruments write their output to a bus (via channels), and have one mixer instrument at the end handle mixing and effects. If that matches your use case, you can compile/recompile instruments with lower numbers and have their outputs write to the bus for the mixer. Any changes in the mixer graph (i.e., inserting new effects, adding sidechaining, etc.) would require a recompile of the mixer, however. Effects instruments with numbers between the sources and the mixer could be an option to modify at runtime. (There’s a rough sketch of this after the list.)
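A rough sketch of the bus idea, with per-instance channel names generated via sprintf (all names and numbers are just examples, and this assumes a reasonably recent Csound 6 for string variables with the chn opcodes):

instr 1 ; a source; p4 selects which bus it writes to, p5 is its frequency
Sbus sprintf "bus%d", p4
aSig vco2 0.2, p5
chnmix aSig, Sbus
endin

instr 50 ; an optional effect sitting between the sources and the mixer
SbusIn sprintf "bus%d", p4
SbusOut sprintf "bus%d", p5
aIn chnget SbusIn
aFx moogladder aIn, 2000, 0.4
chnmix aFx, SbusOut
chnclear SbusIn
endin

instr 100 ; the mixer at the end of the chain
aMix chnget "bus0"
out aMix
chnclear "bus0"
endin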

I think knowing a little more about what your typical graph might look like, and what you’ll need in terms of graph changes while running, might spur some other thoughts too. (Maybe I’m misinterpreting the OP?)


Do you need to dynamically reconfigure the graph at runtime? If so, would it work for your use case to re-gen the Csound code and have Csound recompile it (i.e., what we do with live coding)? If that is the case, you can always represent code as nodes in JS, generate the code in any order as a single instrument, then recompile. It will potentially cause a glitch, since state isn’t preserved when moving from one instance of an instrument to a new instance of an updated instrument, but it gives flexibility in the order of operations. (I think Anton’s Haskell csound-expression library works this way: csound-expression: library to make electronic music.)

Yes, I need to be able to change the node ordering. That would work, and it’s exactly the kind of solution I was asking about. I’d like to find a better way than recompiling the instruments, though. I’ll provide a detailed example of my use case below.

Rather than zak, you could use the channel system to communicate between instrument instances. This gives you the flexibility to name values, and you can dynamically create channel names so that instances of an instrument can read/write to different channels.

Actually, indexing by numbers would be more suitable for my use case, as with strings I’d need to generate UIDs, and then there may be situations where I need to reuse them. I could use string indexes though (“0”, “1”, “2”, etc.); still, converting numbers to/from strings is a little overhead.

Something I’ve done with live coding is to have all of the numbered instruments write their output to a bus (via channels), and have one mixer instrument at the end handle mixing and effects. If that matches your use case, you can compile/recompile instruments with lower numbers and have their outputs write to the bus for the mixer. Any changes in the mixer graph (i.e., inserting new effects, adding sidechaining, etc.) would require a recompile of the mixer, however. Effects instruments with numbers between the sources and the mixer could be an option to modify at runtime.

This partly describes my use case, but if it were possible without recompiling, I’d rather go that way. I was also thinking about changing instrument numbers instead of recompiling, but that would be hard to manage: there could be a lot of nodes, and when you needed to reconnect one part of the graph to another, you’d have to change all the indexes while still keeping their relations to the others in mind…

I’ll try to provide a more detailed design by example. This is very unstable for now, and I’m open to your suggestions. As I’m new to Csound, please treat the code below more like pseudocode.

// you are able to define your nodes (ugens) using JS this way
ndef SomeNode(amp, freq) {
    let sinosc = ~oscili.a(amp.k, freq.k);
    ~xout.a(sinosc);
}

// the AST of the above code is then translated into a Csound AST,
// which is the equivalent of the following:
opcode SomeNode, a, kk
kAmp, kFreq xin
aSig oscili kAmp, kFreq
xout aSig
endop

// the tree is then compiled by Csound
// now I can init a node with something like
let node = Node("SomeNode");

// and suppose we have an opcode (= ndef) named "reverb"
let reverb = Node("reverb")

// now I'm able to connect nodes
node.out(reverb);

// and to connect it to output
let out = Node("out");

reverb.out(out);

// Node(...) is the initializer for a Node object, which will be 
// responsible for compiling a new instrument wrapper for 
// the opcode, assigning it a number and input/output channels,
// managing connections and parameter changes, etc.

// as I see it now, the node instantiation on the Csound side could be 
// something like
instr 42 ; automatically generated number on the JS side
iOutChNum = p4 ; the index of the output channel
SOutBus sprintf "bus%d", iOutChNum ; name of the output bus channel
aSig SomeNode p5, p6
chnmix aSig, SOutBus ; not 100% sure this is the right way to route it for now
endin

// so this way I'll be able to change the output channel index
// without recompiling the instrument. So if, for example, I
// wanted to connect it the other way around
reverb.out(node); // yeah, that's a bad example... 
// suppose `node` is automatically disconnected and all we 
// need to do now is change the output/input channel of the
// instrument. We can keep the state of the node in the 
// Node object on the JS side

// next I want to be able to control any parameter of a node with
// either a static value, a pattern, or another node. But it 
// seems like I already figured out this part (except 
// modulating by another node, but anyway)

node.set("freq", 440);

// or with a pattern (generates an event every 0.5 beats, 
// changing the "freq" parameter)
node.set("freq", pbind(0.5, pseq([220, 440, 880], inf)));

// or with a node/opcode
node.set("freq", Node("SomeNode", 200, 10));
// or node.set("freq", ~SomeNode.k(200, 10)); 
// - it depends on implementation

// and it's important that you can define many nodes of 
// the same type and use them in different or the same
// graphs
let node2 = Node("SomeNode");

node2.out(node); // another bad example...

I’m developing this library for a DAW-like application where you can control anything using code, kind of a live-coding DAW. So there will also be a mixer, as you usually have in a DAW. The mixer may have a lot of channels, and a lot of effects could be inserted on those channels, so I wouldn’t like to recompile all of that when, for example, the effect ordering changes on a channel. And I’d like to make the library usable on its own, so it’s better to view it separately from the app (the mixer may not be part of the library), while keeping scalability in mind. I still don’t feel like I’ve explained it completely, but at least I tried, and I hope you understand my use case better now :smile:

The main question now is node instantiation and the connections between nodes. Keep in mind that I’ll manage the state of instruments/nodes on the JS side. That means I can keep (= have access to) the index of the output/input channels, the index of the instrument, etc.


EDIT:

It looks like both channels and zak are good for this. I tested zak with thousands of channels, and it performed quite well. But I’ll go with channels, because if I used zak, users couldn’t use it for their own nodes/opcodes. Thanks everyone for your help!

Sounds like you’ve worked out a prototype, nice! I think the UGen API will end up being a better fit for realtime node-based graph editing once we can get it going with CS7. Looking forward to seeing how your project turns out! :smile:


hi @alestsurko, your use case sounds very cool. like @stevenyi wrote, i’ve been working on a CS6 fork to support opcode, instrument and udo instantiation as “standalone” components. the library works with arate, krate and frate signals, and so far it has c/c++ as well as js/audioworklet bindings. the lib is based on Steven’s Aura work, and we are starting to collaborate to make sure it fits in with the new CS7 model.

i’m attaching a simple c++ example below, ripped off from a JUCE plugin. the JS API is asynchronous, but luckily i managed to proxy out the awaits and make it look more streamlined.

// -- setup, eg. in JUCE prepareForPlayback
// -- multiple contexts (== csound engine instances) are supported, but usually just one is enough
csound::Context cs;
cs.init(sampleRate, samplesPerBlock, nchannels);
cs.compileScript(udoScript); // optional, extend opcode set with a UDO

// -- instantiate opcodes/instruments/udos
// -- below, vco2 uses a default signature, moog specifies a specific polymorph
// -- udo and instr are defined in csound language
csound::Opcode* vco2 = cs.createOpcode("vco2", vector<MYFLT>({ 0, 220 }));
csound::Opcode* moog = cs.createOpcode("moogladder", vector<MYFLT>({ 0, 15000, 0.5 }), "a.akkp");
csound::Opcode* udo  = cs.createOpcode("udoGain", vector<MYFLT>{ 0,.5 });
csound::Instrument* inst = cs.createInstrument(id, instrScript, vector<MYFLT>({ 0.5, 100, 0 }));
...

// -- render, eg. in JUCE processBlock
// -- alternatively the instances could be added into host framework dsp graph
cs.processMidi(midiBytes);  // tick krate opcodes and process midi inputs
vco2->process(outputs,nsamples);
moog->process(outputs,outputs,nsamples);
udo->process(outputs,outputs,nsamples);
inst->process(outputs,outputs,nsamples);
...

// -- params
moog->setParam(1,f_cutoff);
moog->setParam(2,f_resonance);

Hi! Looks interesting! What’s the state of it? And is it open source? Is it able to handle events?

it will be open source after a code review: my knowledge of csound engine internals is limited, and although i did not modify any existing code in there, the extensions i wrote may well be inefficient. the c/c++ part is now done, whereas the JS bindings still require a cleanup. i’ve not done systematic testing, but all the opcodes i’ve tried so far seem to work fine. bugs are likely though.

if by events you mean score events, the p-fields are currently simply modeled as indexed instrument params. instruments expose setParam(int,MYFLT) and setParams(vector) methods.

i will keep you updated once the codebase is at github.

if by events you mean score events, the p-fields are currently simply modeled as indexed instrument params. instruments expose setParam(int,MYFLT) and setParams(vector) methods.

I actually meant voicing. If I understand it correctly, in Csound when you send an event, a new instance of an instrument is instantiated. But your API looks lower level, so it seems like it’s more for DSP chaining, and things like voice instantiation should be implemented manually, right?

the c/c++ part is now done, whereas JS bindings require still a cleanup.

I’m mostly interested in C/C++, especially C, since for C++ I’d need to write C wrappers anyway. My library is for desktop; for now, I have no plans to make it runnable in the browser. And since this is for the Deno runtime, the native part is in Rust, and Web Audio (and AudioWorklet) isn’t available. I’m just mentioning this for context :slightly_smiling_face:

i will keep you updated once the codebase is at github.

Thanks! Looking forward!

I actually meant voicing. If I understand it correctly, in Csound when you send an event, a new instance of an instrument is instantiated. But your API looks lower level, so it seems like it’s more for DSP chaining, and things like voice instantiation should be implemented manually, right?

yes, you got it exactly. it is lower level, and in a synth use case voice allocation needs to be handled externally. instruments enable hybrid constellations where rendering is done partly in the csound dsp graph and partly in the hosting environment, but that does not automate voice allocation either.

i might still manage to enable voice instantiation inside the csound graph; need to explore more. that would prevent further voice-specific processing in the hosting environment though, since the voices are mixed together inside the csound graph.

It would be great if it were possible to send events to the Csound engine and have it set up voices and parameters in a thread-safe manner. Then you could, for example, just send events to the engine from anywhere, including the GUI. But as far as I understand, this is out of the scope of your library.