I am trying to perform continuous speech recognition using AVCapture on iOS 10 (beta). I have set up captureOutput(...) to continuously receive CMSampleBuffers. I put these buffers directly into the SFSpeechAudioBufferRecognitionRequest, which I set up previously like this:
... do some setup
SFSpeechRecognizer.requestAuthorization { authStatus in
    if authStatus == SFSpeechRecognizerAuthorizationStatus.authorized {
        self.m_recognizer = SFSpeechRecognizer()
        self.m_recognRequest = SFSpeechAudioBufferRecognitionRequest()
        self.m_recognRequest?.shouldReportPartialResults = false
        self.m_isRecording = true
    } else {
        print("not authorized")
    }
}
.... do further setup
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if !m_AV_initialized {
        print("captureOutput(...): not initialized !")
        return
    }
    if !m_isRecording {
        return
    }

    let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer)
    let mediaType = CMFormatDescriptionGetMediaType(formatDesc!)
    if mediaType == kCMMediaType_Audio {
        // process audio here
        m_recognRequest?.appendAudioSampleBuffer(sampleBuffer)
    }
    return
}
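(Not shown above: the audio output is attached to the capture session roughly along these lines. This is a simplified sketch; the queue label and the captureSession property name are just illustrative.)

let audioOutput = AVCaptureAudioDataOutput()
// Deliver sample buffers to this class on a background queue
audioOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.capture.queue"))
if captureSession.canAddOutput(audioOutput) {
    captureSession.addOutput(audioOutput)
}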
The whole thing works for a few seconds, then captureOutput is not called anymore. If I comment out the line appendAudioSampleBuffer(sampleBuffer), then captureOutput is called for as long as the app runs (as expected). So putting the sample buffers into the speech recognition engine apparently blocks further execution somehow. My guess is that the available buffers are consumed after some time and the process stops because it can't get any more buffers?
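In case it is relevant: my understanding is that the request has to be handed to a recognition task at some point, roughly like this (a minimal, untested sketch; m_recognTask is just an illustrative property name, and I may well be doing this part wrong):

if let recognizer = self.m_recognizer, let request = self.m_recognRequest {
    // Keep a reference to the task so it is not deallocated
    self.m_recognTask = recognizer.recognitionTask(with: request) { result, error in
        if let result = result {
            print("transcription: \(result.bestTranscription.formattedString)")
        }
        if let error = error {
            print("recognition error: \(error)")
        }
    }
}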
I should mention that everything recorded during the first 2 seconds leads to correct recognitions. I just don't know exactly how the SFSpeech API works, since Apple did not put any text into the beta docs. BTW: how is SFSpeechAudioBufferRecognitionRequest.endAudio() supposed to be used?
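My current (untested) guess is that endAudio() should be called once no more buffers will be appended, e.g. when recording stops, so that the recognizer can finalize its result, something like:

func stopRecording() {
    m_isRecording = false
    // Signal that no more audio buffers will be appended to this request
    m_recognRequest?.endAudio()
}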
Does anybody know something here?
Thanks
Chris