I’m looking for suggestions on how to make a live spectrogram visualisation of my output buffer via the Csound API.
Do you have any ideas?
I think ffmpeg could do it, but then I’d probably have to pass through a routing device (e.g. JACK, Soundflower, BlackHole…), and I want to read my output buffer directly from the Csound API.
Do you know if I could pass a buffer directly to ffmpeg?
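What I have in mind is roughly something like this (untested sketch; assumes a doubles build of Csound, hence f64le, with f32le for a floats build, and uses ffplay’s RDFT display of raw samples read from stdin):
import subprocess
import ctcsound as csound
cs = csound.Csound()
cs.setOption('-odac') # still monitor through the dac; we tap the same buffer
cs.compileOrc("0dbfs = 1\ninstr 1\n out oscili(0.5, 500)\nendin")
cs.readScore("i1 0 50")
cs.start()
ff = subprocess.Popen(
    ['ffplay', '-f', 'f64le',
     '-ar', str(int(cs.sr())), '-ac', str(cs.nchnls()),
     '-showmode', '2', # 2 = RDFT spectrum display
     '-'],             # read the raw stream from stdin
    stdin=subprocess.PIPE)
while cs.performBuffer() == 0:
    ff.stdin.write(cs.outputBuffer().tobytes()) # one output buffer per loop
ff.stdin.close()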
Thank you
Unfortunately I don’t have an example of converting outputBuffer() to an OpenCV matrix type, but that is the way I would do it if I needed to save videos of the data I’m plotting. If you are OK with just plotting (animating) things without saving them to a video file, then, as @rory mentioned, you could draw the output of outputBuffer() directly on e.g. a matplotlib graph.
You can write the FFT data to a table using pvs2tab, then use csoundTableCopyOut to copy the frequency data to your frontend, then draw it. There is no need to mess with output buffers or to implement your own FFT routines.
import ctcsound as csound
from matplotlib import pyplot as plt
from matplotlib import animation
import numpy as np
fftSize = 1024 # this needs to match gifftsiz in the orc
fs = 48000 # sample rate; this needs to match sr in the orc
orc = '''
sr = 48000 ; must match the fs defined above
ksmps = 32
nchnls = 1
0dbfs = 1
;general values for the Fourier transform
gifftsiz = 1024 ; this needs to match the fftSize defined above
gioverlap = 256
giwintyp = 1 ;von Hann window
;an 'empty' function table to hold the FFT frames
giTable ftgen 1, 0, -(gifftsiz+2), 2, 0
instr 1
aout oscili 0.5, 500
aout *= linseg(0, 1, 1, p3-1, 0)
out aout
kArr[] init gifftsiz+2 ;create array for bin data
fsig pvsanal aout, gifftsiz, gioverlap, gifftsiz, giwintyp
kflag pvs2array kArr, fsig ;export data to array
copya2ftab kArr, giTable
endin
'''
sco = "i1 0 50\n"
cs = csound.Csound()
cs.setOption('-odac')
cs.compileOrc(orc)
cs.readScore(sco)
cs.start()
pt = csound.CsoundPerformanceThread(cs.csound())
pt.play()
fig, ax = plt.subplots()
ax.set(xlim=(0,fs/2), ylim=(0,1))
line, = ax.plot([], [], lw=2)
f_axis_delta = fs/fftSize
f_axis = np.arange(0,fs/2 + f_axis_delta,f_axis_delta)
fftArray = np.zeros(fftSize + 2) # array length must be fftSize+2
def animate(i):
    cs.tableCopyOut(1, fftArray) # pull the current FFT frame from table 1
    #f_axis_cs = fftArray[1::2] # theoretically this should be the same as the f_axis defined above, but it's not
    amp_vals = fftArray[0::2] # fft amplitude values (every other element)
    line.set_data(f_axis, amp_vals)
    return line,
anim = animation.FuncAnimation(fig, animate, interval=100)
plt.show()
pt.stop() # stop the performance thread once the plot window is closed
pt.join()
I haven’t worked that much with pvsanal and f-signals in Csound, so I don’t know why e.g. f_axis_cs = fftArray[1::2] doesn’t give the expected frequencies. @Rory, maybe you know? Also, when I try to write the f-signal into giTable directly using pvs2tab, I get this error:
error: Unable to find opcode entry for 'pvs2tab' with matching argument types:
Found: k pvs2tab if
kframe pvs2tab giTable ...
That is why I did a workaround, writing the f-signal first to an array and then to a table.
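(Looking at the manual now, I suspect that’s because pvs2tab writes into a k-rate array, i.e. its signature is along the lines of:)
kframe pvs2tab kArr, fsig ; k-array target, not an i-rate table number
(so passing the i-rate giTable directly can’t match, and the array-plus-copya2ftab route is effectively the intended usage.)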
Sorry, guys, I don’t really know Python, so what exactly does fftArray[1::2] mean? When I do this in C++, I get an array containing the amplitudes of N frequency bins, then I simply draw them.
Hi,
I’m not good at Python, and the example code by Lovre is a very good starting point for me.
I slightly modified the code because the original did not run fast enough for more complex sounds. We can copy the fsig to a table with pvsftw instead, and it works better; roughly, the change looks like the sketch below.
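(A sketch from memory of the relevant change inside the orc; per the manual, pvsftw writes the bin amplitudes straight into a function table whenever a new analysis frame is ready, so the k-array copy is no longer needed:)
giAmps ftgen 2, 0, -gifftsiz, 2, 0 ; generous size; pvsftw needs at least gifftsiz/2 points
instr 1
 aout oscili 0.5, 500
 aout *= linseg(0, 1, 1, p3-1, 0)
 out aout
 fsig pvsanal aout, gifftsiz, gioverlap, gifftsiz, giwintyp
 kflag pvsftw fsig, giAmps ; amplitudes -> table 2 on every new frame
endin
On the Python side, cs.tableCopyOut(2, ampArray) into a numpy array of matching size then gives the amplitudes directly.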
Regarding fftArray[1::2] in the original code: those values are the frequency output of the phase vocoder (its running frequency estimate for each bin), so the data is different from f_axis (the nominal centre frequencies of the bins).
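To answer the Python question above: the pvsanal frames are interleaved [amp0, freq0, amp1, freq1, …] pairs, and a slice like arr[1::2] takes every 2nd element starting at index 1. A toy example:
import numpy as np
frame = np.array([0.9, 0.0, 0.5, 46.9, 0.1, 95.1]) # toy 3-bin amp/freq frame
amps = frame[0::2] # every 2nd element from index 0 -> array([0.9, 0.5, 0.1])
freqs = frame[1::2] # every 2nd element from index 1 -> array([ 0. , 46.9, 95.1])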
Hey,
thanks for sharing.
The only thing is that I feel it’s still slow.
Can’t we accelerate the rendering?
I tried messing with the animation’s “interval” parameter, but nothing changes!
The output of pvsanal is updated every gioverlap samples (i.e. gioverlap/sr = 256/48000 ≈ 5.33 ms). However, drawing with matplotlib is known to be slow.
I have no detailed knowledge, but I guess a library other than matplotlib is necessary for high-performance drawing.
Hi,
I ported the code to PyQtGraph. In the attached code, the graph is updated every 5 ms.
The performance is not very different from the matplotlib version on my PC.
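Since the attachment isn’t included here, a minimal sketch of what the PyQtGraph side might look like (untested; reuses cs, fs, fftSize and table 1 from the code above):
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
app = pg.mkQApp()
win = pg.plot(title='Csound live spectrum') # convenience plot window
win.setXRange(0, fs / 2)
win.setYRange(0, 1)
curve = win.plot(pen='y')
f_axis = np.arange(0, fs / 2 + fs / fftSize, fs / fftSize)
fftArray = np.zeros(fftSize + 2)
def update():
    cs.tableCopyOut(1, fftArray) # latest FFT frame from table 1
    curve.setData(f_axis, fftArray[0::2]) # amplitudes only
timer = QtCore.QTimer()
timer.timeout.connect(update)
timer.start(5) # request a redraw every 5 ms
app.exec_()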