
c++ - FFmpeg audio frame from DirectShow SampleCB IMediaSample

I use the ISampleGrabber SampleCB callback to get audio samples. I can get the buffer and buffer length from the IMediaSample, and I use avcodec_fill_audio_frame(frame, ost->enc->channels, ost->enc->sample_fmt, (uint8_t *)buffer, length, 0) to make an AVFrame, but this frame does not produce any audio in my muxed file. I think the length is much smaller than frame_size. Can anyone help me, please, or give me an example if possible? Thank you.

This is my SampleCB code:

    HRESULT AudioSampleGrabberCallBack::SampleCB(double Time, IMediaSample *pSample) {
        BYTE *pBuffer;
        pSample->GetPointer(&pBuffer);
        long BufferLen = pSample->GetActualDataLength();
        muxer->PutAudioFrame(pBuffer, BufferLen);
        return S_OK;  // SampleCB must return an HRESULT
    }

And this is the Sample Grabber pin media type:

    AM_MEDIA_TYPE pmt2;
    ZeroMemory(&pmt2, sizeof(AM_MEDIA_TYPE));
    pmt2.majortype = MEDIATYPE_Audio;
    pmt2.subtype = FOURCCMap(0x1602);
    pmt2.formattype = FORMAT_WaveFormatEx;
    hr = pSampleGrabber_audio->SetMediaType(&pmt2);

After that I use the FFmpeg muxing example to process the frames, and I think I only need to change the signal-generating part of the code:

AVFrame *Muxing::get_audio_frame(OutputStream *ost,BYTE* buffer,long length)
{
    AVFrame *frame = ost->tmp_frame;
    int j, i, v;
    uint16_t *q = (uint16_t*)frame->data[0];

    int buffer_size = av_samples_get_buffer_size(NULL, ost->enc->channels,
                                                 ost->enc->frame_size,
                                                 ost->enc->sample_fmt, 0);
//    uint8_t *sample = (uint8_t *) av_malloc(buffer_size);
    av_samples_alloc(&frame->data[0], frame->linesize, ost->enc->channels, ost->enc->frame_size, ost->enc->sample_fmt, 1);
    avcodec_fill_audio_frame(frame, ost->enc->channels, ost->enc->sample_fmt,frame->data[0], buffer_size, 1);

    frame->pts = ost->next_pts;
    ost->next_pts  += frame->nb_samples;

    return frame;
}


1 Reply

The code snippets suggest that you are getting AAC data using the Sample Grabber and trying to write it into a file using FFmpeg's libavformat. This can work.

You initialize your Sample Grabber to get audio data in WAVE_FORMAT_AAC_LATM format. This format is not widespread, and you should review your filter graph to make sure the upstream connection on the Sample Grabber is what you expect. There is a chance that a weird chain of filters claims to produce AAC-LATM while the data is actually invalid (or never even reaches the grabber callback). So you need to review the filter graph (see Loading a Graph From an External Process and Understanding Your DirectShow Filter Graph), then step through your callback with a debugger to make sure you get the data and that it makes sense.
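
For the graph review part, a minimal sketch of the usual Running Object Table registration helper (the pattern described in the DirectShow documentation on loading a graph from an external process) is below; call it once with your IGraphBuilder, and GraphEdit or GraphStudioNext can then connect to the running graph and show what is actually feeding the Sample Grabber:

    // Sketch: register the running filter graph in the ROT so an external tool
    // ("Connect to Remote Graph" in GraphEdit / GraphStudioNext) can inspect it.
    // Needs <strsafe.h> in addition to the usual DirectShow headers.
    HRESULT AddToRot(IUnknown *pUnkGraph, DWORD *pdwRegister)
    {
        IMoniker *pMoniker = NULL;
        IRunningObjectTable *pROT = NULL;
        if (FAILED(GetRunningObjectTable(0, &pROT)))
            return E_FAIL;

        WCHAR wsz[256];
        StringCchPrintfW(wsz, 256, L"FilterGraph %08x pid %08x",
                         (DWORD)(DWORD_PTR)pUnkGraph, GetCurrentProcessId());

        HRESULT hr = CreateItemMoniker(L"!", wsz, &pMoniker);
        if (SUCCEEDED(hr))
        {
            hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph,
                                pMoniker, pdwRegister);
            pMoniker->Release();
        }
        pROT->Release();
        return hr;
    }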

Next, you are expected to initialize the AVFormatContext and AVStream to indicate that you will be writing data in AAC-LATM format. The provided code does not show that you are doing this correctly; the sample you are referring to uses the default codecs.
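
If the plan is to write the already compressed AAC as-is (no re-encoding), the stream setup would look roughly like the sketch below. This is an assumption-heavy sketch, not your code: OpenAacOutput is a hypothetical helper, it presumes an FFmpeg 3.x-style AVCodecParameters API, error handling is omitted, and the sample rate / channel count must come from the WAVEFORMATEX negotiated on the Sample Grabber pin:

    // Sketch only: declare an audio stream that carries pre-encoded AAC packets.
    extern "C" {
    #include <libavformat/avformat.h>
    }

    AVFormatContext *OpenAacOutput(const char *filename, int sample_rate, int channels)
    {
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, NULL, filename);

        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
        st->codecpar->codec_id    = AV_CODEC_ID_AAC;    // or AV_CODEC_ID_AAC_LATM, depending on the actual payload
        st->codecpar->sample_rate = sample_rate;        // must match the WAVEFORMATEX on the grabber pin
        st->codecpar->channels    = channels;           // same
        st->time_base             = AVRational{ 1, sample_rate };

        if (!(oc->oformat->flags & AVFMT_NOFILE))
            avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        avformat_write_header(oc, NULL);                // MP4 output will also want AudioSpecificConfig in extradata
        return oc;
    }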

Then you need to make sure that the incoming data and your FFmpeg output setup agree about whether the data does or does not have ADTS headers; the provided code does not shed any light on this.
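
As a quick way to tell the two apart in the debugger: an ADTS frame starts with a 12-bit syncword of all ones, so a check along these lines (a sketch, applied to the buffer you already receive in SampleCB) shows whether the grabber hands you ADTS-framed AAC or raw/LATM data. MP4-style outputs generally expect raw AAC plus an AudioSpecificConfig in the stream's extradata, while ADTS framing is what .aac and MPEG-TS outputs expect.

    // Sketch: crude ADTS detection on the grabber buffer.
    // ADTS: 0xFFF syncword in the first 12 bits, header is at least 7 bytes.
    bool LooksLikeADTS(const BYTE *p, long len)
    {
        return len >= 7 && p[0] == 0xFF && (p[1] & 0xF0) == 0xF0;
    }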

Furthermore, I am afraid you might be preparing your audio data incorrectly. The sample in question generates raw audio data and applies an encoder to produce compressed content using avcodec_encode_audio2. Then a packet with compressed audio is sent for writing using av_interleaved_write_frame. The way you attached your code snippets to the question makes me think you are doing it wrong. For starters, you still don't show the relevant code, which makes me think you have trouble identifying which code is relevant exactly. Then, in the get_audio_frame snippet you are dealing with your AAC data as if it were raw PCM audio, whereas you should review the FFmpeg sample code with the thought in mind that you already have compressed AAC data, and the sample reaches this point only after the return from the avcodec_encode_audio2 call. This is where you are supposed to merge your code and the sample.
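
To make the last point concrete, here is a rough sketch of what PutAudioFrame could look like under the assumption that each SampleCB buffer already holds exactly one compressed AAC frame. The names oc, audio_st, next_pts and sample_rate are hypothetical members standing in for whatever your Muxing class keeps around, and the 1024 samples-per-frame value assumes AAC-LC:

    // Sketch: treat the grabber buffer as an already-encoded packet and mux it,
    // skipping the raw-PCM + avcodec_encode_audio2 path of the muxing example.
    void Muxing::PutAudioFrame(BYTE *buffer, long length)
    {
        AVPacket pkt;
        av_new_packet(&pkt, length);               // refcounted payload, default fields
        memcpy(pkt.data, buffer, length);

        pkt.stream_index = audio_st->index;
        pkt.pts = pkt.dts = next_pts;
        next_pts += 1024;                          // AAC-LC frame size; assumption

        av_packet_rescale_ts(&pkt, AVRational{ 1, sample_rate }, audio_st->time_base);
        av_interleaved_write_frame(oc, &pkt);      // takes ownership of the packet data
    }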

