Fellow Csounders!
If I want to make a simple WebAssembly Audio Worklet with Csound and use it with other Web Audio Nodes, what would be the right workflow?
Currently I’m doing it as follows:
- create new AudioContext
- using this context create Web Audio Nodes
- Initialize those Web Audio Nodes
- using that same context create CsoundObj
- using this CsoundObj compile some orc string e.g.:
nchnls=1
0dbfs=1
instr 1
  ;ain1 in
  ain1 = poscil:a(0.2, 600)
  out ain1
endin
schedule(1,0,-1)
- start that CsoundObj
- get Csound Web Audio node from that CsoundObj using getNode() method
- disconnect() that node from the default context output
- connect nodes
- start generator nodes like oscillators
Do I understand this correctly?
I’m writing this in Rust and, since I’m still new to wasm-bindgen, it’s a bit messy, but I hope you can follow it…
let ctx = web_sys::AudioContext::new()?;
// Create our web audio nodes.
let primary = ctx.create_oscillator()?;
let gain = ctx.create_gain()?;
// Initialize them
primary.set_type(OscillatorType::Sine);
primary.frequency().set_value(440.0); // A4 note
gain.gain().set_value(0.3); // moderate level so the output is audible
//-------------------------------------
// Csound integration
//-------------------------------------
let csound_params = js_sys::Object::new();
js_sys::Reflect::set(&csound_params, &JsValue::from_str("audioContext"), &ctx.clone().into())?;
let csound_object = get_csound_obj(csound_params).await;
// -------------------------------------
let orc = r#"
nchnls=2
0dbfs=1
instr 1
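; read the node's stereo input and pass it straight through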
ainl, ainr ins
out ainl, ainr
endin
schedule(1,0,10);
"#;
// compile orc
let compile_orc_promise = csound_object.compileOrc(orc);
JsFuture::from(compile_orc_promise).await?;
// start csound
let start_promise = csound_object.start();
JsFuture::from(start_promise).await?;
// get csound node
let csound_node = csound_object.getNode();
let csound_node = JsFuture::from(csound_node).await?;
let csound_node: AudioNode = csound_node.into();
console::log_1(&csound_node);
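// The disconnect() step from my workflow list: as I understand it,
// the node returned by getNode() comes pre-connected to the context
// destination, so I detach it before wiring my own graph (sketch;
// this is the no-argument web_sys::AudioNode::disconnect variant).
csound_node.disconnect()?;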
//-------------------------------------
// Connect the nodes up!
primary.connect_with_audio_node(&csound_node)?;
csound_node.connect_with_audio_node(&gain)?;
// Then connect the gain node to the AudioContext destination (aka
// your speakers).
gain.connect_with_audio_node(&ctx.destination())?;
// Start the oscillators!
primary.start()?;
......
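For reference, here is roughly how I’m binding the Csound API with wasm-bindgen. This is a sketch rather than my exact code: csound_factory and the get_csound_obj wrapper are my own names, and the method names simply mirror the @csound/browser calls I use above.

use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;
use wasm_bindgen_futures::JsFuture;

#[wasm_bindgen]
extern "C" {
    // Opaque handle for the object the Csound() factory resolves to.
    type CsoundObj;

    // The global Csound(params) factory from @csound/browser;
    // it returns a Promise that resolves to the CsoundObj.
    #[wasm_bindgen(js_name = Csound)]
    fn csound_factory(params: &js_sys::Object) -> js_sys::Promise;

    #[allow(non_snake_case)]
    #[wasm_bindgen(method)]
    fn compileOrc(this: &CsoundObj, orc: &str) -> js_sys::Promise;

    #[wasm_bindgen(method)]
    fn start(this: &CsoundObj) -> js_sys::Promise;

    #[allow(non_snake_case)]
    #[wasm_bindgen(method)]
    fn getNode(this: &CsoundObj) -> js_sys::Promise;
}

// Await the factory promise and cast the result to the typed handle.
async fn get_csound_obj(params: js_sys::Object) -> CsoundObj {
    JsFuture::from(csound_factory(&params))
        .await
        .expect("Csound() factory failed")
        .unchecked_into::<CsoundObj>()
}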
When I print my csound_node to the console, I get this:
AudioWorkletNode {parameters: AudioParamMap, port: MessagePort, onprocessorerror: null, context: AudioContext, numberOfInputs: 1, …}
channelCount: 2
channelCountMode: "max"
channelInterpretation: "speakers"
context: AudioContext {baseLatency: 0.01, outputLatency: 0, onerror: null, sinkId: '', onsinkchange: null, …}
numberOfInputs: 1
numberOfOutputs: 1
onprocessorerror: null
parameters: AudioParamMap {size: 0}
port: MessagePort {onmessage: null, onmessageerror: null, onclose: null}
[[Prototype]]: AudioWorkletNode
And the problem is that I don’t get any sound when I use the Csound node.
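One thing I’m wondering: could this just be the browser autoplay policy, i.e. the AudioContext sitting in the “suspended” state until a user gesture? A check like this should rule that out (a sketch; state() and resume() are standard web_sys::BaseAudioContext calls, and AudioContextState needs its web-sys feature enabled):

use wasm_bindgen_futures::JsFuture;
use web_sys::AudioContextState;

// A context created without a user gesture starts suspended and
// renders nothing, which would also produce total silence.
if ctx.state() == AudioContextState::Suspended {
    JsFuture::from(ctx.resume()?).await?;
}

Any ideas?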