Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

ios - why is audio coming up garbled when using AVAssetReader with audio queue

Based on my research, people keep saying this is caused by mismatched/wrong formatting. But I'm using LPCM formatting for both input and output, so how can you go wrong with that? The result I'm getting is just noise (like white noise).

I've decided to just paste my entire code.. perhaps that would help:

#import "AppDelegate.h"
#import "ViewController.h"

@implementation AppDelegate

@synthesize window = _window;
@synthesize viewController = _viewController;


- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    // Override point for customization after application launch.
    self.viewController = [[ViewController alloc] initWithNibName:@"ViewController" bundle:nil];
    self.window.rootViewController = self.viewController;
    [self.window makeKeyAndVisible];
    // Insert code here to initialize your application

    player = [[Player alloc] init];


    [self setupReader];
    [self setupQueue];


    // initialize reader in a new thread    
    internalThread =[[NSThread alloc]
                     initWithTarget:self
                     selector:@selector(readPackets)
                     object:nil];

    [internalThread start];


    // start the queue. this function returns immediately and begins
    // invoking the callback, as needed, asynchronously.
    //CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");

    // and wait
    printf("Playing...\n");
    do
    {
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, false);
    } while (!player.isDone /*|| gIsRunning*/);

    // isDone represents the state of the Audio File enqueuing. This does not mean the
    // Audio Queue is actually done playing yet. Since we have 3 half-second buffers in-flight,
    // continue to run for a short additional time so they can be processed
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 2, false);

    // end playback
    player.isDone = true;
    CheckError(AudioQueueStop(queue, TRUE), "AudioQueueStop failed");

cleanup:
    AudioQueueDispose(queue, TRUE);
    AudioFileClose(player.playbackFile);

    return YES;

}


- (void) setupReader 
{
    NSURL *assetURL = [NSURL URLWithString:@"ipod-library://item/item.m4a?id=1053020204400037178"];   // from ilham's ipod
    AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

    // from AVAssetReader Class Reference: 
    // AVAssetReader is not intended for use with real-time sources,
    // and its performance is not guaranteed for real-time operations.
    NSError * error = nil;
    AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];

    AVAssetTrack* track = [songAsset.tracks objectAtIndex:0];       
    readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                              outputSettings:nil];

    //    AVAssetReaderOutput* readerOutput = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:songAsset.tracks audioSettings:nil];

    [reader addOutput:readerOutput];
    [reader startReading];   


}

- (void) setupQueue
{

    // get the audio data format from the file
    // we know that it is PCM.. since it's converted    
    AudioStreamBasicDescription dataFormat;
    dataFormat.mSampleRate = 44100.0;
    dataFormat.mFormatID = kAudioFormatLinearPCM;
    dataFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dataFormat.mBytesPerPacket = 4;
    dataFormat.mFramesPerPacket = 1;
    dataFormat.mBytesPerFrame = 4;
    dataFormat.mChannelsPerFrame = 2;
    dataFormat.mBitsPerChannel = 16;


    // create a output (playback) queue
    CheckError(AudioQueueNewOutput(&dataFormat, // ASBD
                                   MyAQOutputCallback, // Callback
                                   (__bridge void *)self, // user data
                                   NULL, // run loop
                                   NULL, // run loop mode
                                   0, // flags (always 0)
                                   &queue), // output: reference to AudioQueue object
               "AudioQueueNewOutput failed");


    // adjust buffer size to represent about a half second (0.5) of audio based on this format
    CalculateBytesForTime(dataFormat,  0.5, &bufferByteSize, &player->numPacketsToRead);

    // check if we are dealing with a VBR file. ASBDs for VBR files always have 
    // mBytesPerPacket and mFramesPerPacket as 0 since they can fluctuate at any time.
    // If we are dealing with a VBR file, we allocate memory to hold the packet descriptions
    bool isFormatVBR = (dataFormat.mBytesPerPacket == 0 || dataFormat.mFramesPerPacket == 0);
    if (isFormatVBR)
        player.packetDescs = (AudioStreamPacketDescription*)malloc(sizeof(AudioStreamPacketDescription) * player.numPacketsToRead);
    else
        player.packetDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)

    // get magic cookie from file and set on queue
    MyCopyEncoderCookieToQueue(player.playbackFile, queue);

    // allocate the buffers and prime the queue with some data before starting
    player.isDone = false;
    player.packetPosition = 0;
    int i;
    for (i = 0; i < kNumberPlaybackBuffers; ++i)
    {
        CheckError(AudioQueueAllocateBuffer(queue, bufferByteSize, &audioQueueBuffers[i]), "AudioQueueAllocateBuffer failed");    

        // EOF (the entire file's contents fit in the buffers)
        if (player.isDone)
            break;
    }   
}


-(void)readPackets
{

    // initialize a mutex and condition so that we can block on buffers in use.
    pthread_mutex_init(&queueBuffersMutex, NULL);
    pthread_cond_init(&queueBufferReadyCondition, NULL);

    state = AS_BUFFERING;


    while ((sample = [readerOutput copyNextSampleBuffer])) {

        AudioBufferList audioBufferList;
        CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer( sample ); 

        CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                                                                           sample,
                                                                           NULL,
                                                                           &audioBufferList,
                                                                           sizeof(audioBufferList),
                                                                           NULL,
                                                                           NULL,
                                                                           kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                                           &CMBuffer
                                                                           ),
                   "could not read samples");

        AudioBuffer audioBuffer = audioBufferList.mBuffers[0];

        UInt32 inNumberBytes = audioBuffer.mDataByteSize;
        size_t incomingDataOffset = 0;

        while (inNumberBytes) {
            size_t bufSpaceRemaining;
            bufSpaceRemaining = bufferByteSize - bytesFilled;

            @synchronized(self)
            {
                bufSpaceRemaining = bufferByteSize - bytesFilled;
                size_t copySize;    

                if (bufSpaceRemaining < inNumberBytes)
                {
                    copySize = bufSpaceRemaining;             
                }
                else 
                {
                    copySize = inNumberBytes;
                }

                // copy data to the audio queue buffer
                AudioQueueBufferRef fillBuf = audioQueueBuffers[fillBufferIndex];
                memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)audioBuffer.mData + incomingDataOffset, copySize);

                // keep track of bytes filled
                bytesFilled +=copySize;
                incomingDataOffset +=copySize;
                inNumberBytes -=copySize;      
            }

            // if the space remaining in the buffer is not enough for this packet, then enqueue the buffer.
            if (bufSpaceRemaining < inNumberBytes + bytesFilled)
            {
                [self enqueueBuffer];
            }

        }
    }




}

-(void)enqueueBuffer 
{
    @synchronized(self)
    {

        inuse[fillBufferIndex] = true;      // set in use flag
        buffersUsed++;

        // enqueue buffer
        AudioQueueBufferRef fillBuf = audioQueueBuffers[fillBufferIndex];
        NSLog(@"we are now enqueueing buffer %d", fillBufferIndex);
        fillBuf->mAudioDataByteSize = bytesFilled;

        err = AudioQueueEnqueueBuffer(queue, fillBuf, 0, NULL);

        if (err)
        {
            NSLog(@"could not enqueue queue with buffer");
            return;
        }


        if (state == AS_BUFFERING)
        {
            //
            // Fill all the buffers before starting. This ensures that the
            // AudioFileStream stays a small amount ahead of the AudioQueue to
            // avoid an audio glitch playing streaming files on iPhone SDKs < 3.0
            //
            if (buffersUsed == kNumberPlaybackBuffers - 1)
            {

                err = AudioQueueStart(queue, NULL);
                if (err)
                {
                    NSLog(@"couldn't start queue");
                    return;
                }
                state = AS_PLAYING;
            }
        }

        // go to next buffer
        if (++fillBufferIndex >= kNumberPlaybackBuffers) fillBufferIndex = 0;
        bytesFilled = 0;        // reset bytes filled

    }

    // wait until next buffer is not in use
    pthread_mutex_lock(&queueBuffersMutex); 
    while (inuse[fillBufferIndex])
    {
        pthread_cond_wait(&queueBufferReadyCondition, &queueBuffersMutex);
    }
    pthread_mutex_unlock(&queueBuffersMutex);


}


#pragma mark - utility functions -

// generic error handler - if err is nonzero, prints error message and exits program.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;

    char str[20];
    // see if it appears to be a 4-char-code
    *(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error);
    if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
        str[0] = str[5] = '\'';
        str[6] = '\0';
    } else
        // no, format it as an integer
        sprintf(str, "%d", (int)error);

    fprintf(stderr, "Error: %s (%s)\n", operation, str);

    exit(1);
}

// we only use time here as a guideline
// we're really trying to get somewhere between 16K and 64K buffers, but not allocate too much if we don't need it
void CalculateBytesForTime(AudioStreamBasicDescription inDesc, Float64 inSeconds, UInt32 *outBufferSize, UInt32 *outNumPackets)
{

    // we need to calculate how many packets we read at a time, and how big a buffer we need.
    // we base this on the size of the packets in the file and an approximate duration for each buffer.
    //
    // first check to see what the max size of a packet is, if it is bigger than our default
    // allocation size, that needs to become larger

    // we don't have access to file packet size, so we just default it to maxBufferSize
    UInt32 maxPacketSize = 0x10000;

    static const int maxBufferSize = 0x10000;   // limit maximum size to 64K
    static const int minBufferSize = 0x4000;    // limit minimum size to 16K

    if (inDesc.mFramesPerPacket) {
        Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
        *outBufferSize = numPacketsForTime * maxPacketSize;
    } else {
        // no predictable packet duration, so just fall back to the default size
        *outBufferSize = maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize;
    }

    // clamp the result into the [minBufferSize, maxBufferSize] window
    if (*outBufferSize > maxBufferSize && *outBufferSize > maxPacketSize)
        *outBufferSize = maxBufferSize;
    else if (*outBufferSize < minBufferSize)
        *outBufferSize = minBufferSize;

    // calculate the number of packets we can fit into the buffer
    *outNumPackets = *outBufferSize / maxPacketSize;
}

1 Reply

So here's what I think is happening and also how I think you can fix it.

You're pulling a predefined item out of the iPod (music) library on an iOS device. You are then using an asset reader to collect its buffers, and enqueue those buffers, where possible, in an AudioQueue.

The problem you are having, I think, is that you are setting the audio queue buffer's input format to Linear Pulse Code Modulation (LPCM). The output settings you are passing to the asset reader output are nil, which means that you'll get output that is most likely NOT LPCM, but is instead AIFF or AAC or MP3 or whatever format the song is in as it exists in iOS's media library. You can, however, remedy this situation by passing in different output settings.

Try changing

readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];

to:

AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                nil];

readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:outputSettings];

It's my understanding (per Apple's documentation) that passing nil as the output settings parameter gives you samples of the same file type as the original audio track. Even if you have a file that is LPCM, some other settings might be off, which might cause your problems. At the very least, this will normalize all the reader output, which should make things a bit easier to troubleshoot.

Hope that helps!

Edit:

the reason why I provided nil as a parameter for AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:audioReadSettings];

was because, according to the documentation and trial and error, I...

AVAssetReaders do 2 things: read back an audio file as it exists on disk (i.e. MP3, AAC, AIFF), or convert the audio into LPCM.

If you pass nil as the output settings, it will read the file back as it exists, and in this you are correct. I apologize for not mentioning that an asset reader will only allow nil or LPCM. I actually ran into that problem myself (it's in the docs somewhere, but requires a bit of diving), but didn't elect to mention it here as it wasn't on my mind at the time. Sooooo... sorry about that?

If you want to know the AudioStreamBasicDescription (ASBD) of the track you are reading before you read it, you can get it by doing this:

AVURLAsset *uasset = [[AVURLAsset URLAssetWithURL:<#assetURL#> options:nil] retain];
AVAssetTrack *track = [uasset.tracks objectAtIndex:0];
CMFormatDescriptionRef formDesc = (CMFormatDescriptionRef)[[track formatDescriptions] objectAtIndex:0];
const AudioStreamBasicDescription *asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(formDesc);
// because this is a pointer and not a struct, we need to copy the data into a struct so we can use it
AudioStreamBasicDescription asbd = {0};
memcpy(&asbd, asbdPointer, sizeof(asbd));
// asbd now contains a basic description for the track

You can then convert asbd to binary data in whatever format you see fit and transfer it over the network. You should then be able to start sending audio buffer data over the network and successfully play it back with your AudioQueue.

I actually had a system like this working not that long ago, but since I couldn't keep the connection alive when the iOS client device went to the background, I wasn't able to use it for my purpose. Still, if all that work lets me help someone else who can actually use the info, it seems like a win to me.

