Spectrogram of Csound output buffer

Hello there,

I’m looking for suggestions on how to make a live spectrogram visualisation of the output buffer from the Csound API.
Do you have some ideas?
I think ffmpeg could do it, but I’d probably have to go through a virtual audio device (e.g. JACK, Soundflower, BlackHole…), and I want to read the output buffer directly from the Csound API.
Do you know if I could pass a buffer directly to ffmpeg?
Thank you
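
For reference, ffmpeg and ffplay can read raw samples from a pipe, so passing the buffer directly is possible in principle. A rough sketch of the idea (not from the thread; it assumes a double-precision Csound build, hence the f64le sample format, and uses ffplay’s -showmode 2 spectrum view):

# Hypothetical sketch: pipe Csound's output buffer into ffplay's
# built-in spectrum display. Assumes a double-precision Csound build
# (samples are 64-bit floats, hence f64le).
import subprocess
import ctcsound

cs = ctcsound.Csound()
cs.setOption("-odac")
cs.compileOrc('''
sr = 48000
ksmps = 64
nchnls = 1
0dbfs = 1
instr 1
    out oscili(0.5, 440)
endin
''')
cs.readScore("i1 0 10")
cs.start()

ff = subprocess.Popen(
    ["ffplay", "-f", "f64le", "-ar", "48000", "-ac", "1",
     "-showmode", "2",   # 2 = spectrum (RDFT) display
     "-volume", "0",     # mute ffplay; Csound already plays to the dac
     "-"],               # read the raw stream from stdin
    stdin=subprocess.PIPE)

buf = cs.outputBuffer()  # numpy view over Csound's output buffer
while cs.performBuffer() == 0:
    ff.stdin.write(buf.tobytes())
ff.stdin.close()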

I thought you were already using Python; could you not use that? It seems like the least complex option.

Yup, totally, but it looks like it’s not so easy to draw video in Python… :face_holding_back_tears:

Why do you need videos? Why not simply draw the result of your FFT analysis X frames a second?

You could use OpenCV and its VideoWriter.
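
In case it helps, a rough sketch of that idea (hypothetical; random data stands in for the FFT magnitudes): draw each spectrum frame into an image and append it to a video file with OpenCV’s VideoWriter.

import cv2
import numpy as np

# Hypothetical sketch: one image per analysis frame, one bar per FFT bin.
h, w = 256, 513
writer = cv2.VideoWriter("spectrum.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         30, (w, h))

for _ in range(300):                      # e.g. 10 seconds at 30 fps
    spectrum = np.random.rand(w)          # stand-in for FFT magnitudes (0..1)
    frame = np.zeros((h, w, 3), np.uint8)
    for x, mag in enumerate(spectrum):    # draw a vertical bar per bin
        cv2.line(frame, (x, h - 1), (x, h - 1 - int(mag * (h - 1))),
                 (0, 255, 0))
    writer.write(frame)
writer.release()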

Totally, I just want to find out the most efficient way.

Do you have any example?
The difficulty for me is converting the Csound outputBuffer() to a stream a library can read.

Unfortunately I don’t have an example of converting an outputBuffer() to an OpenCV matrix type, but this is how I would do it if I needed to save videos of the data I’m plotting. If you are OK with just plotting (animating) things without saving them to a video file, then, as @rory mentioned, you could draw the output of outputBuffer() directly on e.g. a matplotlib graph.
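
As a rough illustration of that last option (a sketch under assumptions, not code from the thread: mono output, 0dbfs = 1, and the FFT done on the Python side with numpy):

# Minimal sketch: read Csound's output buffer via ctcsound and plot the
# magnitude spectrum of each buffer with matplotlib.
import ctcsound
import numpy as np
import matplotlib.pyplot as plt

cs = ctcsound.Csound()
cs.setOption("-odac")
cs.compileOrc('''
sr = 48000
ksmps = 64
nchnls = 1
0dbfs = 1
instr 1
    out oscili(0.5, 440)
endin
''')
cs.readScore("i1 0 10")
cs.start()

buf = cs.outputBuffer()                  # numpy view over the output buffer
freqs = np.fft.rfftfreq(buf.size, 1 / 48000)

plt.ion()                                # interactive mode: redraw in the loop
fig, ax = plt.subplots()
line, = ax.plot(freqs, np.zeros(freqs.size))
ax.set(xlim=(0, 24000), ylim=(0, 1), xlabel="Hz")

while cs.performBuffer() == 0:           # render one buffer at a time
    line.set_ydata(np.abs(np.fft.rfft(buf)) * 2 / buf.size)
    plt.pause(0.001)                     # give matplotlib time to redraw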

You can write the FFT data to a table using pvs2tab, then use csoundTableCopyOut to copy the frequency data to your frontend. Then draw it. No need to mess with output buffers, or implement your own FFT routines.

Thank you.
I’m trying something simple like:

import ctcsound
import numpy as np

orc_text = '''
instr 1
    aout oscili 1000, 500
    aout *= linseg(0, 1, 1, p3-1, 0)
    outall aout

    karr[] init 1026

    fsig pvsanal aout, 1024, 256, 1024, 1
    kframe pvs2tab karr, fsig
endin'''

sco_text = "i1 0 5"

cs = ctcsound.Csound()
result = cs.setOption("-d")
result = cs.setOption("-odac")
result = cs.compileOrc(orc_text)
result = cs.readScore(sco_text)
result = cs.start()
pyfft = np.empty(1026)

while True:
    result = cs.performKsmps()
    cs.tableCopyOut('karr', pyfft)
    if result != 0:
        break
result = cs.cleanup()
cs.reset()
del cs

But I get:

ctypes.ArgumentError: argument 2: TypeError: wrong type

How can I get a pointer to an array declared in Csound, if referring to it by variable name doesn’t work?

I’ve never done this with Python, but your approach looks similar to what I’ve done countless times in other languages.

You can do something like this:

import ctcsound as csound
from matplotlib import pyplot as plt
from matplotlib import animation
import numpy as np

fftSize = 1024 # this needs to match gifftsiz in orc
fs = 48000 # sample rate

orc = '''
0dbfs = 1

;general values for fourier transform
gifftsiz  =         1024   ; this needs to match above defined fftSize
gioverlap =         256
giwintyp  =         1 ;von hann window

;an 'empty' function table
giTable ftgen   1, 0, -(gifftsiz+2), 2, 0

instr 1
    aout oscili 0.5, 500
    aout *= linseg(0, 1, 1, p3-1, 0)

    out aout

    kArr[] init gifftsiz+2 ;create array for bin data

    fsig pvsanal aout, gifftsiz, gioverlap, gifftsiz, giwintyp
    kflag pvs2array kArr, fsig ;export data to array
    copya2ftab kArr, giTable
endin
'''
sco = "i1 0 50\n"

cs = csound.Csound()
cs.setOption('-odac')
cs.compileOrc(orc)
cs.readScore(sco)
cs.start()

pt = csound.CsoundPerformanceThread(cs.csound())
pt.play()

fig, ax = plt.subplots()
ax.set(xlim=(0,fs/2), ylim=(0,1))
line, = ax.plot([], [], lw=2)

f_axis_delta = fs/fftSize
f_axis = np.arange(0,fs/2 + f_axis_delta,f_axis_delta)
fftArray = np.zeros(fftSize + 2) # array length must be fftSize+2

def animate(i):
    cs.tableCopyOut(1, fftArray)
    #f_axis_cs = fftArray[1::2] # theoretically this should be the same as f_axis defined above, but it's not
    amp_vals = fftArray[0::2] # fft amplitude values
    line.set_data(f_axis, amp_vals)


anim = animation.FuncAnimation(fig, animate, interval=100)
plt.show()

I haven’t worked that much with pvsanal and f-signals in Csound, so I don’t know why e.g. f_axis_cs = fftArray[1::2] doesn’t give the expected frequencies. @Rory, maybe you know? Also, when I try to write the f-signal into giTable directly using pvs2tab, I get this error:

error:  Unable to find opcode entry for 'pvs2tab' with matching argument types:
Found: k pvs2tab if
       kframe pvs2tab giTable ...

pvs2tab expects a k-rate array rather than a function table, which is why I did the workaround of writing the f-signal to an array first and then copying it to the table with copya2ftab.

Sorry, I don’t really know Python, so what exactly does fftArray[1::2] mean? When I do this in C++, I get an array containing the amplitudes of N frequency bins, then I simply draw them.

Oh, for me it works!!! Are you sure you tested it with Python 3?
Thank you, my friend!! Now I’ve got a basis to work from!!
I’ll post the result!

Hi,
I’m not good at Python, and the example code by Lovre is a very good starting point for me.

I slightly modified the code because the original did not run fast enough for more complex sounds. We can copy the fsig to a table with pvsftw, and it works better.

Regarding the “fftArray[1::2]” in the original code: it is the frequency output of the phase vocoder, so the data is different from f_axis (the bin centre frequencies).

pyplot_fsig.py.zip (1.1 KB)
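
The attached script isn’t reproduced here; as an illustration of the pvsftw idea, a sketch under assumptions (pvsftw writes the current amplitude frame, gifftsiz/2 + 1 values, into a table whenever a new analysis frame is ready):

# Sketch of the pvsftw variant (an assumption about the attached script's
# approach, which isn't reproduced here).
import ctcsound as csound
import numpy as np

fftSize = 1024
fs = 48000

orc = '''
sr = 48000
ksmps = 64
nchnls = 1
0dbfs = 1

gifftsiz = 1024
gioverlap = 256
giwintyp = 1                               ; von Hann window

; amplitude table: pvsftw needs at least gifftsiz/2 + 1 points
giTable ftgen 1, 0, -(gifftsiz/2 + 1), 2, 0

instr 1
    aout oscili 0.5, 500
    out aout
    fsig pvsanal aout, gifftsiz, gioverlap, gifftsiz, giwintyp
    kflag pvsftw fsig, giTable             ; copy amplitude frame to table
endin
'''

cs = csound.Csound()
cs.setOption('-odac')
cs.compileOrc(orc)
cs.readScore("i1 0 50\n")
cs.start()

pt = csound.CsoundPerformanceThread(cs.csound())
pt.play()

amps = np.zeros(fftSize // 2 + 1)
cs.tableCopyOut(1, amps)                   # one amplitude per bin

The Python side then gets one amplitude per bin, which maps directly onto f_axis without the [0::2] de-interleaving.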

Hey,
thanks for sharing.
The only thing is that I feel it’s still slow.
Can’t we speed up the rendering?
I tried messing with the “animate” parameter, but nothing changes!

The output of pvsanal is updated every gioverlap samples (i.e. gioverlap/sr = 256/48000 ≈ 5.3 ms). However, drawing with matplotlib is known to be slow.
I have no detailed knowledge, but I guess a library other than matplotlib is necessary for high-performance drawing.

Do you have any ideas about a library I could look at for this purpose?

This is a nice and performant plotting library: https://www.pyqtgraph.org

Hi,
I ported the code to PyQtGraph. In the attached code, the graph is updated every 5 ms.
The performance is not very different from the matplotlib version on my PC.

PyQtGraph_fsig.py.zip (1.8 KB)
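
The attachment isn’t shown here; a minimal sketch of what such a port might look like (an assumption, not the attached code: it presumes pyqtgraph >= 0.12 and the pvsftw approach from the previous post, with table 1 holding one amplitude per bin):

# Hypothetical PyQtGraph port: poll the Csound table with a QTimer and
# update a curve. Not the attached script, just an illustration.
import ctcsound as csound
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore

fftSize = 1024
fs = 48000

orc = '''
sr = 48000
ksmps = 64
nchnls = 1
0dbfs = 1
gifftsiz = 1024
giTable ftgen 1, 0, -(gifftsiz/2 + 1), 2, 0
instr 1
    aout oscili 0.5, 500
    out aout
    fsig pvsanal aout, gifftsiz, 256, gifftsiz, 1
    kflag pvsftw fsig, giTable     ; copy amplitude frame to table 1
endin
'''

cs = csound.Csound()
cs.setOption('-odac')
cs.compileOrc(orc)
cs.readScore("i1 0 50\n")
cs.start()

pt = csound.CsoundPerformanceThread(cs.csound())
pt.play()

amps = np.zeros(fftSize // 2 + 1)
f_axis = np.linspace(0, fs / 2, amps.size)

win = pg.plot(title="Csound spectrum")   # PlotWidget in its own window
win.setYRange(0, 1)
curve = win.plot(f_axis, amps, pen='y')

def update():
    cs.tableCopyOut(1, amps)             # pull the latest amplitude frame
    curve.setData(f_axis, amps)

timer = QtCore.QTimer()
timer.timeout.connect(update)
timer.start(5)                           # redraw every 5 ms, as in the post

pg.exec()                                # run the Qt event loop
pt.stop()
pt.join()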