
iOS - Metal - Resize video buffer before passing to custom kernel filter

Within our iOS app, we use custom filters built with Metal (CIKernel/CIColorKernel wrappers).

Let's assume we have a 4K video and a custom video composition with a 1080p output size that applies an advanced filter to the video buffers.
Obviously, we don't need to filter the video at its original size; doing so would probably terminate the app with a memory warning (true story).

This is the video-filtering pipeline:

Get the buffer in 4K (as a CIImage) -->
Apply the filter to the CIImage -->
The filter runs the CIKernel Metal filter function on the CIImage -->
Return the filtered CIImage to the composition

The only two places I can think of to apply the resize are before we send the image into the filter process, or within the Metal kernel function itself.

import CoreImage

public class VHSFilter: CIFilter {

    // Set by the video composition for each frame
    public var inputImage: CIImage?

    // `kernel` is the CIKernel wrapping the custom Metal filter function,
    // loaded elsewhere from the compiled Metal library

    public override var outputImage: CIImage? {
        // inputImage arrives here at 4K
        guard let inputImage = self.inputImage else { return nil }

        // Option 1: resize the CIImage here, before applying the kernel

        // The kernel may sample anywhere in the input extent
        let roiCallback: CIKernelROICallback = { _, _ in
            return inputImage.extent
        }

        // Option 2: resize inside the Metal kernel function
        let outputImage = self.kernel.apply(extent: inputImage.extent,
                                            roiCallback: roiCallback,
                                            arguments: [inputImage])

        return outputImage
    }
}

I'm sure I'm not the first one to encounter this issue.

What does one do when the incoming video buffers are too large (memory-wise) to filter and need to be resized on the fly, efficiently, and without re-encoding the video beforehand?


1 Reply


As warrenm says, you could use a CILanczosScaleTransform filter to downsample the video frames before processing (a sketch follows). However, this would still cause AVFoundation to allocate buffers at full resolution.
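For reference, a minimal sketch of that downsampling step; the helper name and the target height are illustrative assumptions, not from the question:

import CoreImage

// Downscale a CIImage with CILanczosScaleTransform before filtering.
// `targetHeight` (e.g. 1080) is an assumption for illustration.
func downsampled(_ image: CIImage, toHeight targetHeight: CGFloat) -> CIImage? {
    guard let scaler = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    let scale = targetHeight / image.extent.height
    scaler.setValue(image, forKey: kCIInputImageKey)
    scaler.setValue(scale, forKey: kCIInputScaleKey)      // uniform scale factor
    scaler.setValue(1.0, forKey: kCIInputAspectRatioKey)  // preserve aspect ratio
    return scaler.outputImage
}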

I assume you are using an AVMutableVideoComposition to do the filtering? In that case, you can simply set the renderSize of the composition to the target size. From the docs:

The size at which the video composition should render.

This tells AVFoundation to resample the frames (efficiently and quickly) before handing them to your filter pipeline, as sketched below.
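A minimal sketch of this approach, assuming `asset` is the 4K AVAsset and VHSFilter is the custom filter from the question:

import AVFoundation
import CoreImage

// Assumed setup: `asset` is the 4K AVAsset; VHSFilter is the question's filter.
let filter = VHSFilter()

let composition = AVMutableVideoComposition(asset: asset) { request in
    // Frames arrive here already resampled to `renderSize`, not at 4K.
    filter.inputImage = request.sourceImage
    request.finish(with: filter.outputImage ?? request.sourceImage, context: nil)
}

// Tell AVFoundation to hand the filter 1080p buffers instead of 4K ones.
composition.renderSize = CGSize(width: 1920, height: 1080)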

