That data is interleaved IEEE float, so it alternates channel data as you step through the array, and each sample ranges from -1.0 to 1.0.
For example, a mono signal has only one channel, so there's nothing to interleave; but a stereo signal has two channels of audio, so:
dataInFloat[0]
is the first sample of data from the left channel and
dataInFloat[1]
is the first sample of data from the right channel. Then,
dataInFloat[2]
is the second sample of data from the left channel, and they just keep alternating back and forth. All the other format data you'll end up caring about is in Windows.Media.MediaProperties.AudioEncodingProperties.
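That stepping pattern is easy to see in code. Here's a quick Python sketch (the `data_in_float` name and the sample values are made up for illustration; the same even/odd indexing applies to the float array in the answer):

```python
# De-interleaving a stereo buffer: even indices are left, odd indices are right.
# Hypothetical sample values, laid out as L0, R0, L1, R1, L2, R2.
data_in_float = [0.10, -0.20, 0.30, -0.40, 0.50, -0.60]

left = data_in_float[0::2]   # samples 0, 2, 4, ... -> left channel
right = data_in_float[1::2]  # samples 1, 3, 5, ... -> right channel

print(left)   # [0.1, 0.3, 0.5]
print(right)  # [-0.2, -0.4, -0.6]
```

A mono buffer would just be the whole array with no slicing needed.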
Knowing just this, you can already get the overall volume of the signal directly from this data by looking at the absolute value of each sample. You'll definitely want to average it out over some window of time. You could even attach EQ effects to different nodes and make separate Low, Mids, and Highs analyzer nodes without ever touching FFT stuff. BUT WHAT FUN IS THAT? (it's actually still fun)
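A minimal sketch of that absolute-value-plus-averaging idea, in Python with made-up sample values (the window size of 4 is arbitrary; real code would average over, say, 10-50 ms of samples):

```python
def block_volume(samples, window=4):
    """Crude loudness estimate: mean absolute value over fixed-size windows."""
    return [sum(abs(s) for s in samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

# Hypothetical mono samples in the [-1.0, 1.0] range.
samples = [0.0, 0.5, -0.5, 1.0, 0.25, -0.25, 0.25, -0.25]
print(block_volume(samples))  # [0.5, 0.25]
```

Each output value is one "volume" reading you could feed straight to a meter or a visualizer bar.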
And then, yeah, to get your complex harmonic data and make a truly sweet visualizer, you want to run an FFT on it. People enjoy using AForge for learning scenarios like yours. See Sources/Imaging/ComplexImage.cs for usage and Sources/Math/FourierTransform.cs for the implementation.
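AForge's FourierTransform does the heavy lifting on the .NET side, but the underlying idea fits in a few lines. Here's a naive DFT sketch in Python (not AForge's implementation, and far slower than a real FFT) just to show how a pure tone lands in one frequency bin:

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform: one complex value per frequency bin."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A sine wave completing exactly 2 cycles over 8 samples...
n = 8
tone = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]

# ...concentrates its energy in bin 2 (and its mirror image, bin n - 2).
mags = [abs(c) for c in dft(tone)]
peak_bin = max(range(n // 2), key=lambda k: mags[k])
print(peak_bin)  # 2
```

Those per-bin magnitudes are exactly the "bin data" a visualizer turns into bars: low bins are bass, high bins are treble.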
Then you can easily get your classic bin data and do the classic music visualizer stuff, or get more creative, or whatever! Technology is awesome!
Fight with dragons for too long and you become a dragon yourself; gaze too long into the abyss, and the abyss gazes back into you...