There are two problems. The main one is that Safari on iOS 11 seems to automatically suspend new AudioContexts that aren't created in response to a tap. You can resume() them, but only in response to a tap.

(Update: Chrome mobile also does this, and Chrome desktop will have the same limitation starting in version 70 / December 2018.)

So you have to either create the AudioContext before you get the MediaStream, or else get the user to tap again later.
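If you go the second route, a minimal sketch of resuming in a later tap handler might look like this (here context is the AudioContext created earlier, and resumeButton is a hypothetical button element):

resumeButton.addEventListener('click', function () {
  // resume() only succeeds here because this runs inside a user-gesture (tap) handler.
  if (context.state === 'suspended') {
    context.resume().then(function () {
      console.log('AudioContext state:', context.state);
    });
  }
});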
The other issue with your code is that AudioContext is prefixed as webkitAudioContext in Safari.
Here's a working version:
<html>
<body>
  <button onclick="beginAudioCapture()">Begin Audio Capture</button>
  <script>
    function beginAudioCapture() {
      // Safari only exposes the prefixed constructor.
      var AudioContext = window.AudioContext || window.webkitAudioContext;
      var context = new AudioContext();
      var processor = context.createScriptProcessor(1024, 1, 1);
      processor.connect(context.destination);

      var handleSuccess = function (stream) {
        var input = context.createMediaStreamSource(stream);
        input.connect(processor);

        var receivedAudio = false;
        processor.onaudioprocess = function (e) {
          // This will be called multiple times per second.
          // The audio data will be in e.inputBuffer
          if (!receivedAudio) {
            receivedAudio = true;
            console.log('got audio', e);
          }
        };
      };

      navigator.mediaDevices.getUserMedia({audio: true, video: false})
        .then(handleSuccess);
    }
  </script>
</body>
</html>
(You can set the onaudioprocess callback sooner, but then you get empty buffers until the user approves microphone access.)
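If you do set it up early, one rough way to detect when real audio starts arriving is to check whether the buffer is still all zeros; a heuristic sketch, meant as an alternative body for the onaudioprocess handler above:

processor.onaudioprocess = function (e) {
  // Until the user grants microphone access, every sample is 0.
  var samples = e.inputBuffer.getChannelData(0);
  var hasSignal = samples.some(function (s) { return s !== 0; });
  if (hasSignal) {
    console.log('real audio is arriving now');
  }
};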
Oh, and one other iOS bug to watch out for: Safari on the iPod touch (as of iOS 12.1.1) reports that it does not have a microphone (it does). So getUserMedia will incorrectly reject with an Error: Invalid constraint if you ask for audio there.
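Because of that, it's worth handling the rejection; a minimal addition to the getUserMedia call above:

navigator.mediaDevices.getUserMedia({audio: true, video: false})
  .then(handleSuccess)
  .catch(function (err) {
    // On an affected iPod touch this fires even though a microphone is present.
    console.error('getUserMedia failed:', err);
  });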
FYI: I maintain the microphone-stream package on npm, which does this for you and provides the audio in a Node.js-style ReadableStream. It includes this fix, if you or anyone else would prefer to use that over the raw code.
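Usage looks roughly like this (a sketch only; check the package's README on npm for the exact current API):

var MicrophoneStream = require('microphone-stream');

navigator.mediaDevices.getUserMedia({audio: true, video: false})
  .then(function (stream) {
    var micStream = new MicrophoneStream();
    micStream.setStream(stream);

    // Node.js-style ReadableStream: each 'data' chunk is a Buffer of raw audio.
    micStream.on('data', function (chunk) {
      console.log('got an audio chunk of', chunk.length, 'bytes');
    });
  });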