I'm having an issue downsampling audio captured from the microphone. I'm using AVAudioEngine to take the samples, with the following code:
import AVFoundation

// Input node (still optional in the SDK version this was written against)
assert(self.engine.inputNode != nil)
let input = self.engine.inputNode!

// Target format: 8000 Hz, mono, 32-bit float, non-interleaved
let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                sampleRate: 8000,
                                channels: 1,
                                interleaved: false)

let mixer = AVAudioMixerNode()
engine.attach(mixer)
// Feed the mixer with the hardware input format
engine.connect(input, to: mixer, format: input.inputFormat(forBus: 0))

do {
    try engine.start()
    // Tap the mixer, requesting buffers in the 8000 Hz target format
    mixer.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { (buffer, time) in
        // some code here
    }
} catch let error {
    print(error.localizedDescription)
}
This code works great on the iPhone 5s, since its microphone input is 8000 Hz, and the buffer gets filled with data from the microphone.
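To check whether the tap is actually delivering anything, logging the buffer inside the block is enough; purely illustrative, but on the 5s it reports the 8000 Hz format and a non-zero frame count:

mixer.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { (buffer, _) in
    // Log the delivered format and how many frames actually arrived
    print("format: \(buffer.format), frames: \(buffer.frameLength)")
}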
The problem is that I want to be able to record on the iPhone 6s (and upwards), whose microphone records at 16000 Hz. What's weird is that if I connect the mixer node to the engine's mainMixerNode (with the following code):
engine.connect(mixer, to: mainMixer, format: audioFormat)
this actually works: the buffers I get are in the 8000 Hz format and the sound comes out perfectly downsampled. The only problem is that the sound also plays through the speaker, which I don't want (and if I don't make that connection, the buffers are empty).
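One idea would be to keep that connection but mute the main mixer's output so nothing reaches the speaker, along these lines (mainMixer here is just engine.mainMixerNode), though that feels like a workaround rather than a proper fix:

let mainMixer = engine.mainMixerNode

// Keep the render chain pulled by connecting to the main mixer,
// but silence the audible output.
engine.connect(mixer, to: mainMixer, format: audioFormat)
mainMixer.outputVolume = 0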
Does anyone know how to resolve this issue?
Any help, input, or thoughts are very much appreciated.