h.264 - How to process raw UDP packets so that they can be decoded by a decoder filter in a DirectShow source filter

Long Story:

  1. There is an H264/MPEG-4 source.
  2. I can connect to this source with the RTSP protocol.
  3. I can get raw UDP packets with the RTP protocol.
  4. I then send those raw UDP packets to a decoder [h264/mpeg-4] [DS Source Filter].
  5. But those "raw" UDP packets cannot be decoded by the decoder [h264/mpeg-4] filter.

In short:

How do I process this raw UDP data so that it can be decoded by an H264/MPEG-4 decoder filter? Can anyone clearly identify the steps I have to perform on an H264/MPEG stream?

Extra Info:

I am able to do this with FFmpeg, but I cannot really figure out how FFmpeg processes the raw data so that it becomes decodable by a decoder.


1 Reply


Piece of cake!

1. Get the data

As I can see, you already know how to do that (start an RTSP session, SETUP an RTP/AVP/UDP;unicast transport, and receive the user datagrams)... but if you are in doubt, ask.
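For reference, a typical SETUP exchange looks roughly like this (the URL, ports and session ID below are made up for illustration):

SETUP rtsp://192.168.0.10/stream/track1 RTSP/1.0
CSeq: 3
Transport: RTP/AVP;unicast;client_port=5000-5001

RTSP/1.0 200 OK
CSeq: 3
Session: 12345678
Transport: RTP/AVP;unicast;client_port=5000-5001;server_port=6000-6001

After PLAY, the RTP video packets arrive on client_port 5000 and the RTCP packets on 5001.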

No matter the transport (UDP or TCP), the data format is essentially the same:

  • RTP data: [RTP Header - 12 bytes][Video data]
  • UDP: [RTP data]
  • TCP: [$ - 1 byte][Transport Channel - 1 byte][RTP data length - 2 bytes][RTP data]

So to get the data from UDP, you only have to strip off the first 12 bytes, which are the RTP header. But beware: you need that header to get the video timing information, and for MPEG4 the packetization information!
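A minimal sketch of that header parse in C++ (field offsets per RFC 3550; it assumes no CSRC entries and no header extension, which is typical for camera streams, and the struct/function names are mine):

#include <cstddef>
#include <cstdint>

struct RtpInfo {
    bool           marker;       // M bit - needed for MPEG4 depacketization (step 2)
    uint16_t       seq;          // sequence number - detects lost/reordered packets
    uint32_t       timestamp;    // presentation timing of the frame
    const uint8_t* payload;      // video data after the fixed header
    size_t         payloadSize;
};

// Parses a fixed 12-byte RTP header (CC == 0, no extension).
bool ParseRtp(const uint8_t* pkt, size_t len, RtpInfo& out)
{
    if (len < 12) return false;
    out.marker      = (pkt[1] & 0x80) != 0;
    out.seq         = uint16_t((pkt[2] << 8) | pkt[3]);
    out.timestamp   = (uint32_t(pkt[4]) << 24) | (uint32_t(pkt[5]) << 16) |
                      (uint32_t(pkt[6]) << 8)  |  uint32_t(pkt[7]);
    out.payload     = pkt + 12;
    out.payloadSize = len - 12;
    return true;
}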

For TCP you read bytes one at a time until you hit the $ byte. The next byte is the transport channel that the following data belongs to (when the server responds to the SETUP request it says: Transport: RTP/AVP/TCP;unicast;interleaved=0-1, meaning that VIDEO DATA will have TRANSPORT_CHANNEL=0 and VIDEO RTCP DATA will have TRANSPORT_CHANNEL=1). You want VIDEO DATA, so expect 0... then read one short (2 bytes), which is the length of the RTP data that follows; read that many bytes, and from there on do the same as for UDP.
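That read loop, sketched under the assumption of a blocking ReadExact(dst, n) helper over your TCP socket (a hypothetical name, not a real API):

#include <cstddef>
#include <cstdint>
#include <vector>

bool ReadExact(uint8_t* dst, size_t n);  // hypothetical: blocks until exactly n bytes are read

// Reads one interleaved packet from the RTSP TCP stream.
// Returns the channel it arrived on, or -1 on error.
int ReadInterleaved(std::vector<uint8_t>& rtp)
{
    uint8_t b = 0;
    do {                                 // scan forward to the '$' framing byte
        if (!ReadExact(&b, 1)) return -1;
    } while (b != '$');

    uint8_t channel;                     // 0 = video RTP, 1 = video RTCP here
    if (!ReadExact(&channel, 1)) return -1;

    uint8_t lenBytes[2];                 // big-endian 16-bit RTP data length
    if (!ReadExact(lenBytes, 2)) return -1;
    uint16_t len = uint16_t((lenBytes[0] << 8) | lenBytes[1]);

    rtp.resize(len);
    if (len > 0 && !ReadExact(rtp.data(), len)) return -1;
    return channel;                      // keep channel 0 and parse it as RTP
}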

2. Depacketize data

H264 and MPEG4 data are usually packetized (the SDP carries a packetization-mode parameter that can have the values 0, 1 and 2; what each of them means, and how to depacketize it, is described in RFC 6184) because there is a limit on how much one endpoint can send in a single packet over TCP or UDP, called the MTU. It is usually 1500 bytes or less. So if a video frame is larger than that (and it usually is), it needs to be fragmented (packetized) into MTU-sized fragments. This can be done by the encoder/streamer for both TCP and UDP transport, or you can rely on IP to fragment and reassemble the video frame on the other side... the first option is much better if you want smooth, error-resilient video over UDP and TCP.

H264: To check whether the RTP data (which arrived over UDP, or interleaved over TCP) holds a fragment of one larger H264 video frame, you must know what a fragment looks like when it is packetized:

H264 FRAGMENT

First byte:  [ 3 NAL UNIT BITS | 5 FRAGMENT TYPE BITS] 
Second byte: [ START BIT | END BIT | RESERVED BIT | 5 NAL UNIT BITS] 
Other bytes: [... VIDEO FRAGMENT DATA...]

Now, take the first VIDEO DATA in a byte array called Data and extract the following info:

int fragment_type = Data[0] & 0x1F; // 28 means FU-A, i.e. a fragmented NAL unit
int nal_type      = Data[1] & 0x1F; // type of the fragmented NAL unit (e.g. 5 = IDR)
int start_bit     = Data[1] & 0x80; // set on the first fragment of a frame
int end_bit       = Data[1] & 0x40; // set on the last fragment of a frame

If fragment_type == 28, the video data following it is a video frame fragment. Next, check whether start_bit is set; if it is, that fragment is the first one in the sequence. Use it to reconstruct the NAL byte of the original unit by taking the first 3 bits of the first payload byte (3 NAL UNIT BITS) and combining them with the last 5 bits of the second payload byte (5 NAL UNIT BITS), giving you a byte of the form [3 NAL UNIT BITS | 5 NAL UNIT BITS]. Then write that NAL byte into a cleared buffer, followed by the VIDEO FRAGMENT DATA from that fragment.

If start_bit and end_bit are both 0, just append the VIDEO FRAGMENT DATA (skipping the first two payload bytes that identify the fragment) to the buffer.

If start_bit is 0 and end_bit is 1, that is the last fragment: append its VIDEO FRAGMENT DATA (again skipping the first two bytes) to the buffer, and you now have your video frame reconstructed!

Bear in mind that the RTP data carries the RTP header in its first 12 bytes, that when the frame is fragmented you never write the first two payload bytes into the defragmentation buffer, and that you must reconstruct the NAL byte and write it first. If you mess something up here, the picture will be partial (half of it will be gray or black, or you will see artifacts).
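Putting those three cases together, here is a minimal reassembly sketch in C++ (buffer handling simplified; a real filter would also check RTP sequence numbers for loss):

#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> frame;  // defragmentation buffer

// Feed one RTP payload (RTP header already stripped off).
// Returns true when `frame` holds a complete NAL unit.
bool FeedH264(const uint8_t* data, size_t size)
{
    if (size < 2) return false;
    int fragment_type = data[0] & 0x1F;
    if (fragment_type != 28) {            // not fragmented: a whole NAL unit
        frame.assign(data, data + size);
        return true;
    }
    bool start = (data[1] & 0x80) != 0;
    bool end   = (data[1] & 0x40) != 0;
    if (start) {
        frame.clear();
        // reconstruct the NAL byte: 3 high bits of byte 0 + 5 type bits of byte 1
        frame.push_back(uint8_t((data[0] & 0xE0) | (data[1] & 0x1F)));
    }
    // append the fragment, skipping the two bytes that identify it
    frame.insert(frame.end(), data + 2, data + size);
    return end;                           // complete once the end bit arrives
}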

MPEG4: This is an easy one. You need to check the MARKER_BIT in the RTP header. That bit is set (1) if the video data represents a whole video frame, and it is 0 if the video data is one video frame fragment. So to depacketize, look at the MARKER_BIT: if it is 1, that's it, just read the video data bytes.

WHOLE FRAME:

   [MARKER = 1]

PACKETIZED FRAME:

   [MARKER = 0], [MARKER = 0], [MARKER = 0], [MARKER = 1]

The first packet that has MARKER_BIT=0 is the first fragment of the video frame; all that follow, up to and including the first one with MARKER_BIT=1, are fragments of the same video frame. So what you need to do is:

  • While MARKER_BIT=0, append the VIDEO DATA to the depacketization buffer
  • Append the next VIDEO DATA, where MARKER_BIT=1, to the same buffer
  • The depacketization buffer now holds one whole MPEG4 frame
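The same idea as a sketch, reusing the marker bit parsed out of the RTP header in step 1 (names are mine):

#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> mpeg4Frame;  // depacketization buffer

// Feed one RTP payload plus the marker bit from its RTP header.
// Returns true when mpeg4Frame holds one whole MPEG4 frame;
// the caller clears the buffer after consuming it.
bool FeedMpeg4(const uint8_t* data, size_t size, bool marker)
{
    mpeg4Frame.insert(mpeg4Frame.end(), data, data + size);
    return marker;  // MARKER_BIT=1 closes the frame
}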

3. Process data for decoder (NAL byte stream)

Once you have depacketized video frames, you need to build a NAL byte stream out of them. It has the following format:

  • H264: 0x000001[SPS], 0x000001[PPS], 0x000001[VIDEO FRAME], 0x000001...
  • MPEG4: 0x000001[Visual Object Sequence Start], 0x000001[VIDEO FRAME]

RULES:

  • Every frame MUST be prepended with the 3-byte code 0x000001, no matter the codec
  • Every stream MUST start with the CONFIGURATION INFO; for H264 those are the SPS and PPS frames, in that order (sprop-parameter-sets in SDP), and for MPEG4 the VOS frame (config parameter in SDP)

So: build a config buffer for H264 or MPEG4 prepended with the 3 bytes 0x000001, send it first, and then prepend each depacketized video frame with those same 3 bytes and send that to the decoder.
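A sketch of that last step (the SPS/PPS buffers come from base64-decoding sprop-parameter-sets in the SDP; DeliverToDecoder is a hypothetical stand-in for however your source filter pushes samples downstream):

#include <cstddef>
#include <cstdint>
#include <vector>

void DeliverToDecoder(const uint8_t* data, size_t size);  // hypothetical sink

static const uint8_t kStartCode[3] = { 0x00, 0x00, 0x01 };

void SendFrame(const std::vector<uint8_t>& sps,
               const std::vector<uint8_t>& pps,
               const std::vector<uint8_t>& frame,
               bool firstFrame)
{
    std::vector<uint8_t> out;
    if (firstFrame) {                     // the stream must begin with the config NALs
        out.insert(out.end(), kStartCode, kStartCode + 3);
        out.insert(out.end(), sps.begin(), sps.end());
        out.insert(out.end(), kStartCode, kStartCode + 3);
        out.insert(out.end(), pps.begin(), pps.end());
    }
    out.insert(out.end(), kStartCode, kStartCode + 3);  // every frame gets a start code
    out.insert(out.end(), frame.begin(), frame.end());
    DeliverToDecoder(out.data(), out.size());
}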

If you need any clarification, just comment... :)

