Noise when changing deltapi

Hi,

I’m working on a reverb effect, which is based on an FDN reverb by Sean Costello. When making bigger changes to the delay tap time, it creates a “bitcrush”-like sound, which is undesirable (in this case). I’ve tried smoothing this out using tonek or port, but it is still very noticeable.

Does anyone have any tips on how I could improve this? Perhaps there are other delay opcodes that are more suitable for this?

Here is a simplified example of the issue that can be exported from Cabbage and opened in a DAW:

<Cabbage> 
form caption("DelayNoise") size(200, 200), pluginId("DLNS"), guiMode("queue"), presetIgnore(1)
rslider bounds(36, 26, 60, 60), channel("Modulation") range(0, 1, 1, 1, 0.001), text("Modulation") textColour(0, 0, 0, 255)
rslider bounds(114, 28, 60, 60), channel("Mix") range(0, 1, 1, 1, 0.001), text("Mix") textColour(0, 0, 0, 255)
rslider bounds(78, 96, 60, 60), channel("Diffusion") range(0.01, 1, 1, 1, 0.01), text("Diffusion") textColour(0, 0, 0, 255)

</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d
</CsOptions>
<CsInstruments>
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
kmix, kmix_Trig cabbageGetValue "Mix"
kpitchmod, kpitchmod_Trig cabbageGetValue "Modulation"
kdiff, kdiff_Trig cabbageGetValue "Diffusion"
kdiff  port kdiff, 0.01
;kdiff tonek kdiff, 10

a1 inch 1

kdel1 = ((3007/kdiff)/sr)

k1 randi .001, 3.1, .06

adum1   delayr  1
adel1   deltapi kdel1 + k1 * kpitchmod
        delayw  a1

asigXfadeMix = a1 * sqrt(1-kmix) + adel1 * sqrt(kmix)

outs asigXfadeMix, asigXfadeMix

endin
</CsInstruments>
<CsScore>
i1 0 [60*60*24*7] 
</CsScore>
</CsoundSynthesizer>

It’s hard to avoid clicks in this instance because you’re changing the delay time while reading from the delay line. Windowing the delay line would work best, but unless I’m mistaken this can’t be done with the delay opcodes. You would need to write your own delay using a function table and the table read/write opcodes.
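
For reference, here is a minimal sketch of that kind of table-based read/write scaffold (the table size, instrument number and fixed delay time are just placeholders, and the windowing/crossfading logic would still need to be layered on top of it):

giDelTab ftgen 0, 0, 1048576, -2, 0    ; empty power-of-two table used as a circular buffer

instr 2
a1     inch    1
kTime  =       0.5                                ; desired delay time in seconds (placeholder)
aWrite phasor  sr/ftlen(giDelTab)                 ; normalised write pointer, 0 to 1
       tablew  a1, aWrite, giDelTab, 1, 0, 1      ; write the input at the write position (wrap mode)
aRead  wrap    aWrite - (kTime*sr)/ftlen(giDelTab), 0, 1
aTap   tablei  aRead, giDelTab, 1, 0, 1           ; interpolated read, kTime seconds behind the write head
       outs    aTap, aTap
endin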

A less involved, but also less robust, way to do this is to mute the input whenever you change the delay time. So listen out for changes to the delay time slider. If there is a change to the delay time, mute the input by ramping quickly to 0. Then use a simple timer or k-rate delay to check that the user has stopped moving the slider, and if so, ramp back up to 1.
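
A rough sketch of that idea (the channel name and timings here are just placeholders):

kTime    cabbageGetValue "delayTime"    ; hypothetical channel driving the delay time
kChanged changed2 kTime                 ; 1 whenever the slider moves
kHold    init     0
kHold    =        (kChanged == 1 ? 0 : kHold + ksmps/sr)    ; seconds since the last movement
kTarget  =        (kHold < 0.1 ? 0 : 1)                     ; mute while moving, unmute ~100 ms after it stops
kGate    portk    kTarget, 0.02                             ; quick ramp to avoid clicks
aGate    interp   kGate
aIn      inch     1
aIn      =        aIn * aGate                               ; gated input feeding the delay line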

Finally, check some of Iain’s delay instruments. As far as I know they let you change delay times without zipper noise.

Thanks Rory. The zipper noise itself is not a problem, it’s this very noticeable bitcrush noise that occurs, but I’m guessing the cause of both sounds is the same. Going to take a look at Iain’s delay instruments to analyze what he’s doing :+1:

You can upsample the control slider signal to a-rate and then low-pass filter it before it heads to the delay line. This will remove the glitches, but it will introduce a slight detuning effect while the slider is moving.

<Cabbage>
form caption("Untitled") size(400, 300), guiMode("queue") pluginId("def1")
rslider bounds(296, 162, 100, 100), channel("gain"), range(0, 1, 0, 1, .01), text("Gain"), trackerColour("lime"), outlineColour(0, 0, 0, 50), textColour("black")
rslider bounds(16, 12, 97, 83) channel("rslider1") range(0, 1000, 10, 1, 0.001)
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d 
</CsOptions>
<CsInstruments>
; Initialize the global variables. 
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
    kSlider cabbageGetValue "rslider1"
    aDelay upsamp kSlider
    aDelay tone aDelay, 2
    kGain chnget "gain"
    a1, a2 diskin2 "pianoMood.wav", 1, 0, 1
    aDel vdelay (a1+a2)/2, aDelay, 2000
    outs (a1+aDel)*kGain, (a2+aDel)*kGain
endin

</CsInstruments>
<CsScore>
i1 0 [60*60*24*7] 
</CsScore>
</CsoundSynthesizer>

Thanks for the suggestion Rory, but I’m still experiencing the noise if I insert the fix into my first example :disappointed_relieved:. However, I came up with a solution today that is inspired by your muting suggestion:

<Cabbage> 
form caption("DelayNoise") size(200, 200), pluginId("DLNS"), guiMode("queue"), presetIgnore(1)
rslider bounds(36, 26, 60, 60), channel("Modulation") range(0, 1, 1, 1, 0.001), text("Modulation") textColour(0, 0, 0, 255)
rslider bounds(114, 28, 60, 60), channel("Mix") range(0, 1, 1, 1, 0.001), text("Mix") textColour(0, 0, 0, 255)
rslider bounds(78, 96, 60, 60), channel("Diffusion") range(0.01, 1, 1, 1, 0.01), text("Diffusion") textColour(0, 0, 0, 255)

</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d
</CsOptions>
<CsInstruments>
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
kmix, kmix_Trig cabbageGetValue "Mix"
kpitchmod, kpitchmod_Trig cabbageGetValue "Modulation"
kdiff, kdiff_Trig cabbageGetValue "Diffusion"

a1 inch 1

kdiff portk kdiff, 0.01
if changed2:k(kdiff)==0 then
    kdiff_2 = kdiff
endif

kdel1 = ((3007/kdiff_2)/sr)

k1 randi .001, 3.1, .06

adum1   delayr  1
adel1   deltapi kdel1 + k1 * kpitchmod
        delayw  a1

asigXfadeMix = a1 * sqrt(1-kmix) + adel1 * sqrt(kmix)

outs asigXfadeMix, asigXfadeMix

endin
</CsInstruments>
<CsScore>
i1 0 [60*60*24*7] 
</CsScore>
</CsoundSynthesizer>

It produces just one click when adjusting the “Diffusion” knob, which is far less dramatic than the noise I had earlier. Any thoughts on potential improvements?

You’re still changing the delay time with a k-rate signal. Upsample kdiff, and then apply the filtering to smooth the signal. Then use the a-rate signal to control the delay time rather than the k-rate one, which only changes value every 32 samples (once per ksmps block).

Here is an example:

<Cabbage>
form caption("Untitled") size(400, 300), guiMode("queue") pluginId("def1")
rslider bounds(296, 162, 100, 100), channel("gain"), range(0, 1, 0, 1, .01), text("Gain"), trackerColour("lime"), outlineColour(0, 0, 0, 50), textColour("black")
rslider bounds(16, 12, 97, 83) channel("rslider1") range(0, 1, 0.01, 1, 0.001)
</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d 
</CsOptions>
<CsInstruments>
; Initialize the global variables. 
ksmps = 32
nchnls = 2
0dbfs = 1


instr 1
kSlider cabbageGetValue "rslider1"
aDelay upsamp kSlider
aDelay tone aDelay, 2
kGain chnget "gain"

a1, a2 diskin2 "pianoMood.wav", 1, 0, 1

;aDel vdelay (a1+a2)/2, aDelay, 2000

adum1   delayr  1
aDel   deltapi aDelay
        delayw  a1

outs (a1+aDel)*kGain, (a2+aDel)*kGain
endin

</CsInstruments>
<CsScore>
;causes Csound to run for about 7000 years...
f0 z
;starts instrument 1 and runs it for a week
i1 0 [60*60*24*7] 
</CsScore>
</CsoundSynthesizer>

Thanks, now I understand what you meant by “still changing the delay time with a k-rate signal”. This actually works perfectly with the example I first provided, BUT there is an EXTREMELY important change that needs to be made to the example.

In the example, when using the upsampling method and turning kdiff down to lower values, which in turn creates longer delay tap times, something produced insanely loud audio. The dB meter in Reaper measured levels of up to +754.1 dBFS (insane, right?) and automatically muted itself. Presumably this is because at kdiff = 0.01 the tap time becomes (3007/0.01)/sr ≈ 6.8 s at 44.1 kHz, which is longer than the 1-second delay line, so the tap reads outside it. Increasing the length of the delay line from adum1 delayr 1 to adum1 delayr 10 completely solves the issue, but it was still a pretty scary thing.
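
As an extra safeguard (not something I have in the example below, just an idea), the smoothed tap-time signal could also be clamped to the delay-line length with the limit opcode:

adel1 upsamp kdel1
adel1 tone   adel1, 2
adel1 limit  adel1, 1/sr, 10    ; never request a tap longer than the 10-second delay line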

Here’s the final example which works perfectly:

<Cabbage> 
form caption("DelayNoise") size(200, 200), pluginId("DLNS"), guiMode("queue"), presetIgnore(1)
rslider bounds(36, 26, 60, 60), channel("Modulation") range(0, 1, 1, 1, 0.001), text("Modulation") textColour(0, 0, 0, 255)
rslider bounds(114, 28, 60, 60), channel("Mix") range(0, 1, 1, 1, 0.001), text("Mix") textColour(0, 0, 0, 255)
rslider bounds(78, 96, 60, 60), channel("Diffusion") range(0.01, 1, 1, 1, 0.01), text("Diffusion") textColour(0, 0, 0, 255)

</Cabbage>
<CsoundSynthesizer>
<CsOptions>
-n -d -+rtmidi=NULL -M0 -m0d
</CsOptions>
<CsInstruments>
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
kmix, kmix_Trig cabbageGetValue "Mix"
kpitchmod, kpitchmod_Trig cabbageGetValue "Modulation"
kdiff, kdiff_Trig cabbageGetValue "Diffusion"

a1 inch 1

kdel1 = ((3007/kdiff)/sr)

adel1 upsamp kdel1     ; hold the k-rate tap time at a-rate
adel1 tone adel1, 2    ; low-pass to smooth out the steps

k1 randi .001, 3.1, .06

adum1   delayr  10
adel1   deltapi adel1 + k1 * kpitchmod
        delayw  a1

asigXfadeMix = a1 * sqrt(1-kmix) + adel1 * sqrt(kmix)

outs asigXfadeMix, asigXfadeMix

endin
</CsInstruments>
<CsScore>
i1 0 [60*60*24*7] 
</CsScore>
</CsoundSynthesizer>

So I just discovered the interp opcode (linear interpolation), and it seems to be an alternative to upsampling for fixing this:

adel1 interp kdel1

Rory, I sort of understand what upsampling and interpolation are in theory, but could you maybe explain what the difference is between them here? Maybe a small drawing in Paint or something to illustrate?

k-rate signals are only updated every ksmps samples, so they will always appear as a staircase of sorts. The red line below represents what a typical k-rate signal might look like, while the grey line represents the audio-rate signal.

Interpolating the k-rate signal will smooth it out. I imagine that upsampling and low-passing the signal is more or less the same as what the interp opcode does.
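
For comparison, the two approaches side by side (aTimeA and aTimeB are just illustration names):

aTimeA interp kdel1         ; linear interpolation between successive k-rate values
aTimeB upsamp kdel1         ; sample-and-hold (repeats each k value ksmps times)
aTimeB tone   aTimeB, 2     ; one-pole low-pass smooths the staircase afterwards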
