Rotating YUV image data for Portrait Mode Using RenderScript

For a video image processing project, I have to rotate the incoming YUV image data so that it is shown vertically rather than horizontally. I used this project, which gave me tremendous insight into how to convert YUV image data to ARGB and process it in real time. The only drawback of that project is that it works only in landscape; there is no option for portrait mode (I do not know why the folks at Google provide a sample that handles only the landscape orientation). I wanted to change that.

So, I decided to use a custom YUV to RGB script which rotates the data so that it appears in portrait mode. The following GIF demonstrates how the app shows the data BEFORE I apply any rotation.

[GIF: the camera preview, still rendered in landscape orientation]

You must know that in Android, the YUV image data is delivered in landscape orientation even if the device is in portrait mode (I did not know this before I started this project; again, I do not understand why there is no method available to rotate the frames with a single call). That means the starting point is at the bottom-left corner even when the device is in portrait mode, whereas in portrait mode the starting point of each frame should be at the top-left corner. I use a matrix notation for the fields (e.g. (0,0), (0,1), etc.). Note: I took the sketch from here:

[Sketch: a single YUV_420 frame in landscape orientation, fields labelled in matrix notation]

To rotate the landscape-oriented frame, we have to reorganize the fields. Here are the mappings I derived from the sketch (see above), which shows a single YUV_420 frame in landscape mode. The mappings should rotate the frame by 90 degrees:

first column starting from the bottom-left corner and going upwards:
(0,0) -> (0,5)       // (0,0) should be at (0,5)
(0,1) -> (1,5)       // (0,1) should be at (1,5)
(0,2) -> (2,5)       // and so on ..
(0,3) -> (3,5)
(0,4) -> (4,5)
(0,5) -> (5,5)

2nd column starting at (1,0) and going upwards:
(1,0) -> (0,4)
(1,1) -> (1,4)
(1,2) -> (2,4)
(1,3) -> (3,4)
(1,4) -> (4,4)
(1,5) -> (5,4)

and so on...

In fact, what happens is that the first column becomes the new first row, the 2nd column becomes the new 2nd row, and so on. As you can see from the mappings, we can make the following observations:

  • The x coordinate of the result is always equal to the y coordinate on the left side, so we can say x = y.
  • For the y coordinate of the result, the following equation must hold: y = width - 1 - x. (I tested this for all coordinates in the sketch, and it was always true.)

So, I wrote the following RenderScript kernel function:

#pragma version(1)
#pragma rs java_package_name(com.jon.condino.testing.renderscript)
#pragma rs_fp_relaxed

rs_allocation gCurrentFrame;
int width;

uchar4 __attribute__((kernel)) yuv2rgbFrames(uint32_t x,uint32_t y)
{

    uint32_t inX = y;             // 1st observation: set x=y
    uint32_t inY = width - 1 - x; // 2nd observation: the equation mentioned above

    // the remaining lines just retrieve the YUV pixel elements, convert them to RGB and output the result

    // Read in pixel values from latest frame - YUV color space
    // The functions rsGetElementAtYuv_uchar_? require API 18
    uchar4 curPixel;
    curPixel.r = rsGetElementAtYuv_uchar_Y(gCurrentFrame, inX, inY);
    curPixel.g = rsGetElementAtYuv_uchar_U(gCurrentFrame, inX, inY);
    curPixel.b = rsGetElementAtYuv_uchar_V(gCurrentFrame, inX, inY);

    // uchar4 rsYuvToRGBA_uchar4(uchar y, uchar u, uchar v);
    // This function uses the NTSC formulae to convert YUV to RGB
    uchar4 out = rsYuvToRGBA_uchar4(curPixel.r, curPixel.g, curPixel.b);

    return out;
}
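
To restate the indexing outside RenderScript: the kernel is a gather, i.e. it is invoked once per output pixel (x, y) and computes which input pixel to read. A minimal plain-Java sketch of the same index mapping (a hypothetical helper for a square frame, not part of the app) looks like this:

// Hypothetical sanity check of the kernel's index mapping, not part of the app.
// For every OUTPUT cell (x, y) we read the INPUT cell (inX, inY) using the
// same two formulas as the kernel: inX = y and inY = width - 1 - x.
static int[][] rotateLikeKernel(int[][] in, int width) {
    int[][] out = new int[width][width];      // square frame, as in the sketch above
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < width; y++) {
            int inX = y;                      // 1st observation: x = y
            int inY = width - 1 - x;          // 2nd observation: y = width - 1 - x
            out[x][y] = in[inX][inY];
        }
    }
    return out;
}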

The approach seems to work, but it has a little bug, as you can see in the following image. The camera preview is in portrait mode, as expected, but there are very weird colored lines at the left side of the preview. Why is this happening? (Note that I use the back-facing camera.)

[Screenshot: portrait preview with colored artifact lines along the left edge]

Any advice for solving the problem would be great. I have been dealing with this problem (rotating the YUV data from landscape to portrait) for two weeks, and this is by far the best solution I could come up with on my own. I hope someone can help improve the code so that the weird colored lines at the left side disappear as well.

UPDATE:

The Allocations I create in the code are the following:

// yuvInAlloc will be the Allocation that will get the YUV image data
// from the camera
yuvInAlloc = createYuvIoInputAlloc(rs, x, y, ImageFormat.YUV_420_888);
yuvInAlloc.setOnBufferAvailableListener(this);

// here the createYuvIoInputAlloc() method
public Allocation createYuvIoInputAlloc(RenderScript rs, int x, int y, int yuvFormat) {
    return Allocation.createTyped(rs, createYuvType(rs, x, y, yuvFormat),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
}

// the custom script will convert the YUV to RGBA and put it to this Allocation
rgbInAlloc = RsUtil.createRgbAlloc(rs, x, y);

// here the createRgbAlloc() method
public Allocation createRgbAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y));
}



// the allocation to which we put all the processed image data
rgbOutAlloc = RsUtil.createRgbIoOutputAlloc(rs, x, y);

// here the createRgbIoOutputAlloc() method
public Allocation createRgbIoOutputAlloc(RenderScript rs, int x, int y) {
    return Allocation.createTyped(rs, createType(rs, Element.RGBA_8888(rs), x, y),
                Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
}
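
Not shown here is how these IO Allocations are wired to the camera and to the display. With the camera2 API this is done through their Surfaces; the following is only a rough sketch (the textureView and captureRequestBuilder names are placeholders, not taken from my code):

// Sketch only (placeholder variable names): wiring the IO Allocations.
// The camera writes YUV frames into yuvInAlloc through its input Surface,
// and rgbOutAlloc renders the processed RGBA frames into an output Surface
// (e.g. the SurfaceTexture of a TextureView).
Surface cameraTarget = yuvInAlloc.getSurface();                        // camera -> yuvInAlloc
rgbOutAlloc.setSurface(new Surface(textureView.getSurfaceTexture()));  // rgbOutAlloc -> screen

// the camera2 capture request then targets the allocation's Surface:
captureRequestBuilder.addTarget(cameraTarget);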

Some other helper functions:

public Type createType(RenderScript rs, Element e, int x, int y) {
    if (Build.VERSION.SDK_INT >= 21) {
        return Type.createXY(rs, e, x, y);
    } else {
        return new Type.Builder(rs, e).setX(x).setY(y).create();
    }
}

@RequiresApi(18)
public Type createYuvType(RenderScript rs, int x, int y, int yuvFormat) {
    boolean supported = yuvFormat == ImageFormat.NV21 || yuvFormat == ImageFormat.YV12;
    if (Build.VERSION.SDK_INT >= 19) {
        supported |= yuvFormat == ImageFormat.YUV_420_888;
    }
    if (!supported) {
        throw new IllegalArgumentException("invalid yuv format: " + yuvFormat);
    }
    return new Type.Builder(rs, createYuvElement(rs)).setX(x).setY(y).setYuvFormat(yuvFormat)
            .create();
}

public Element createYuvElement(RenderScript rs) {
    if (Build.VERSION.SDK_INT >= 19) {
        return Element.YUV(rs);
    } else {
        return Element.createPixel(rs, Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_YUV);
    }
}

Calls on the custom RenderScript and the Allocations:

// see below how the input size is determined
customYUVToRGBAConverter.invoke_setInputImageSize(x, y);
customYUVToRGBAConverter.set_inputAllocation(yuvInAlloc);

// receive some frames
yuvInAlloc.ioReceive();


// performs the conversion from YUV to RGBA
customYUVToRGBAConverter.forEach_convert(rgbInAlloc);

// this just does the frame manipulation, e.g. applying a particular filter
renderer.renderFrame(mRs, rgbInAlloc, rgbOutAlloc);


// send manipulated data to output stream
rgbOutAlloc.ioSend();
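
For context, these calls would typically live in the Allocation.OnBufferAvailableListener callback that was registered on yuvInAlloc above; a rough sketch, reusing the calls already shown:

// Sketch only: the same calls as above, placed in the callback registered
// via yuvInAlloc.setOnBufferAvailableListener(this).
@Override
public void onBufferAvailable(Allocation a) {
    yuvInAlloc.ioReceive();                               // latch the newest YUV frame
    customYUVToRGBAConverter.forEach_convert(rgbInAlloc); // YUV -> rotated RGBA
    renderer.renderFrame(mRs, rgbInAlloc, rgbOutAlloc);   // apply the filter
    rgbOutAlloc.ioSend();                                 // push the result to the output Surface
}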

Last but not least, the size of the input image: the x and y values passed to the methods above are based on the preview size, denoted here as mPreviewSize:

int deviceOrientation = getWindowManager().getDefaultDisplay().getRotation();
int totalRotation = sensorToDeviceRotation(cameraCharacteristics, deviceOrientation);
// determine if we are in portrait mode
boolean swapRotation = totalRotation == 90 || totalRotation == 270;
int rotatedWidth = width;
int rotatedHeight = height;

// are we in portrait mode? If yes, then swap the values
if (swapRotation) {
      rotatedWidth = height;
      rotatedHeight = width;
}

// determine the preview size
mPreviewSize = chooseOptimalSize(
                  map.getOutputSizes(SurfaceTexture.class),
                  rotatedWidth,
                  rotatedHeight);

So, based on that, x would be mPreviewSize.getWidth() and y would be mPreviewSize.getHeight() in my case.
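
As an aside, sensorToDeviceRotation() is not shown above; a common implementation (adapted from the Camera2 samples, shown here only as an assumption of what it does) combines the camera sensor orientation with the display rotation:

// Assumed implementation of sensorToDeviceRotation(), adapted from the
// Camera2 samples; not necessarily identical to the one used above.
private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0,   0);
    ORIENTATIONS.append(Surface.ROTATION_90,  90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

private static int sensorToDeviceRotation(CameraCharacteristics c, int deviceOrientation) {
    int sensorOrientation = c.get(CameraCharacteristics.SENSOR_ORIENTATION);
    // translate the Surface.ROTATION_* constant into degrees
    deviceOrientation = ORIENTATIONS.get(deviceOrientation);
    // total rotation of the sensor image relative to the device orientation
    return (sensorOrientation + deviceOrientation + 360) % 360;
}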


1 Reply


See my YuvConverter. It was inspired by Renderscript to convert NV12 yuv to RGB.

Its rs part is very simple:

#pragma version(1)
#pragma rs java_package_name(whatever)
#pragma rs_fp_relaxed

rs_allocation Yplane;
uint32_t Yline;
uint32_t UVline;
rs_allocation Uplane;
rs_allocation Vplane;
rs_allocation NV21;
uint32_t Width;
uint32_t Height;

uchar4 __attribute__((kernel)) YUV420toRGB(uint32_t x, uint32_t y)
{
    uchar Y = rsGetElementAt_uchar(Yplane, x + y * Yline);
    uchar V = rsGetElementAt_uchar(Vplane, (x & ~1) + y/2 * UVline);
    uchar U = rsGetElementAt_uchar(Uplane, (x & ~1) + y/2 * UVline);
    // https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion
    short R = Y + (512            + 1436 * (V - 128)) / 1024; //             1.402
    short G = Y + (512 -  352 * (U - 128) - 731 * (V - 128)) / 1024; // -0.344136  -0.714136
    short B = Y + (512 + 1815 * (U - 128)            ) / 1024; //  1.772
    if (R < 0) R = 0; else if (R > 255) R = 255;
    if (G < 0) G = 0; else if (G > 255) G = 255;
    if (B < 0) B = 0; else if (B > 255) B = 255;
    return (uchar4){R, G, B, 255};
}

uchar4 __attribute__((kernel)) YUV420toRGB_180(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Width - 1 - x, Height - 1 - y);
}

uchar4 __attribute__((kernel)) YUV420toRGB_90(uint32_t x, uint32_t y)
{
    return YUV420toRGB(y, Width - x - 1);
}

uchar4 __attribute__((kernel)) YUV420toRGB_270(uint32_t x, uint32_t y)
{
    return YUV420toRGB(Height - 1 - y, x);
}

My Java wrapper was used in Flutter, but this does not really matter:

public class YuvConverter implements AutoCloseable {

    private RenderScript rs;
    private ScriptC_yuv2rgb scriptC_yuv2rgb;
    private Bitmap bmp;

    YuvConverter(Context ctx, int ySize, int uvSize, int width, int height) {
        rs = RenderScript.create(ctx);
        scriptC_yuv2rgb = new ScriptC_yuv2rgb(rs);
        init(ySize, uvSize, width, height);
    }

    private Allocation allocY, allocU, allocV, allocOut;

    @Override
    public void close() {
        if (allocY != null) allocY.destroy();
        if (allocU != null) allocU.destroy();
        if (allocV != null) allocV.destroy();
        if (allocOut != null) allocOut.destroy();
        bmp = null;
        allocY = null;
        allocU = null;
        allocV = null;
        allocOut = null;
        scriptC_yuv2rgb.destroy();
        scriptC_yuv2rgb = null;
        rs = null;
    }

    private void init(int ySize, int uvSize, int width, int height) {
        if (bmp == null || bmp.getWidth() != width || bmp.getHeight() != height) {
            bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            if (allocOut != null) allocOut.destroy();
            allocOut = null;
        }
        if (allocY == null || allocY.getBytesSize() != ySize) {
            if (allocY != null) allocY.destroy();
            Type.Builder yBuilder = new Type.Builder(rs, Element.U8(rs)).setX(ySize);
            allocY = Allocation.createTyped(rs, yBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocU == null || allocU.getBytesSize() != uvSize || allocV == null || allocV.getBytesSize() != uvSize ) {
            if (allocU != null) allocU.destroy();
            if (allocV != null) allocV.destroy();
            Type.Builder uvBuilder = new Type.Builder(rs, Element.U8(rs)).setX(uvSize);
            allocU = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
            allocV = Allocation.createTyped(rs, uvBuilder.create(), Allocation.USAGE_SCRIPT);
        }
        if (allocOut == null || allocOut.getBytesSize() != width*height*4) {
            Type rgbType = Type.createXY(rs, Element.RGBA_8888(rs), width, height);
            if (allocOut != null) allocOut.destroy();
            allocOut = Allocation.createTyped(rs, rgbType, Allocation.USAGE_SCRIPT);
        }
    }

    @Retention(RetentionPolicy.SOURCE)
    // Enumerate valid values for this interface
    @IntDef({Surface.ROTATION_0, Surface.ROTATION_90, Surface.ROTATION_180, Surface.ROTATION_270})
    // Create an interface for validating int types
    public @interface Rotation {}

    /**
     * Converts a YUV_420 image into a Bitmap.
     * @param yPlane  byte[] of Y, with pixel stride 1
     * @param uPlane  byte[] of U, with pixel stride 2
     * @param vPlane  byte[] of V, with pixel stride 2
     * @param yLine   line stride of Y
     * @param uvLine  line stride of U and V
     * @param width   width of the output image (note that it is swapped with height for portrait rotation)
     * @param height  height of the output image
     * @param rotation  rotation to apply. ROTATION_90 is for portrait back-facing camera.
     * @return RGBA_8888 Bitmap image.
     */
    public Bitmap YUV420toRGB(byte[] yPlane, byte[] uPlane, byte[] vPlane,
                              int yLine, int uvLine, int width, int height,
                              @Rotation int rotation) {
        init(yPlane.length, uPlane.length, width, height);

        allocY.copyFrom(yPlane);
        allocU.copyFrom(uPlane);
        allocV.copyFrom(vPlane);
        scriptC_yuv2rgb.set_Width(width);
        scriptC_yuv2rgb.set_Height(height);
        scriptC_yuv2rgb.set_Yline(yLine);
        scriptC_yuv2rgb.set_UVline(uvLine);
        scriptC_yuv2rgb.set_Yplane(allocY);
        scriptC_yuv2rgb.set_Uplane(allocU);
        scriptC_yuv2rgb.set_Vplane(allocV);

        switch (rotation) {
            case Surface.ROTATION_0:
                scriptC_yuv2rgb.forEach_YUV420toRGB(allocOut);
                break;
            case Surface.ROTATION_90:
                scriptC_yuv2rgb.forEach_YUV420toRGB_90(allocOut);
                break;
            case Surface.ROTATION_180:
                scriptC_yuv2rgb.forEach_YUV420toRGB_180(allocOut);
                break;
            case Surface.ROTATION_270:
                scriptC_yuv2rgb.forEach_YUV420toRGB_270(allocOut);
                break;
        }

        allocOut.copyTo(bmp);
        return bmp;
    }
}

The key to performance is that RenderScript is initialized once and the allocations are reused across frames (init() only recreates them when the sizes change), so the subsequent conversion calls are very fast.
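
For illustration, here is roughly how the converter can be fed from a camera2 ImageReader delivering YUV_420_888 frames (a sketch only; the reader and converter variables are placeholders, not part of the original code):

// Rough usage sketch (placeholder variable names): feeding YuvConverter from
// a camera2 ImageReader configured for ImageFormat.YUV_420_888.
Image image = reader.acquireLatestImage();
if (image != null) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer yBuf = planes[0].getBuffer();
    ByteBuffer uBuf = planes[1].getBuffer();
    ByteBuffer vBuf = planes[2].getBuffer();
    byte[] y = new byte[yBuf.remaining()];
    byte[] u = new byte[uBuf.remaining()];
    byte[] v = new byte[vBuf.remaining()];
    yBuf.get(y);
    uBuf.get(u);
    vBuf.get(v);

    // Portrait back-facing camera: ROTATION_90, so the output width/height
    // are the swapped image dimensions (see the Javadoc above). This assumes
    // the common case where the U/V planes have pixel stride 2.
    Bitmap bmp = converter.YUV420toRGB(
            y, u, v,
            planes[0].getRowStride(),  // yLine
            planes[1].getRowStride(),  // uvLine (U and V share the row stride)
            image.getHeight(),         // width of the rotated output
            image.getWidth(),          // height of the rotated output
            Surface.ROTATION_90);

    image.close();
}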

