java - Android: how to display camera preview with callback?

What I need to do is quite simple: I want to manually display the preview from the camera using the camera callback, and I want to get at least 15fps on a real device. I don't even need the colors, I just need to preview a grayscale image. Images from the camera are in YUV format and have to be processed somehow, which is the main performance problem. I'm using API 8.

In all cases I'm using camera.setPreviewCallbackWithBuffer(), which is faster than camera.setPreviewCallback(). It seems that I can get about 24 fps this way if I'm not displaying the preview, so the problem is not there.
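
For reference, this is roughly how I set up the buffered callback (mCamera is my open camera instance; everything else here is just illustrative):

// Allocate one reusable buffer sized for the preview format (NV21 here),
// hand it to the camera, and return it at the end of every callback.
Camera.Parameters params = mCamera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufferSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;

mCamera.addCallbackBuffer(new byte[bufferSize]);
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process the NV21 frame in 'data' ...
        camera.addCallbackBuffer(data); // give the buffer back for the next frame
    }
});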

I have tried these solutions:

1. Display camera preview on a SurfaceView as a Bitmap. It works, but the performance is about 6fps.

// Compress the NV21 frame to JPEG and decode it back into a Bitmap
baos = new ByteArrayOutputStream();
yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null);

yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos);
jdata = baos.toByteArray();

bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length); // Decoding the JPEG back into a Bitmap is the main issue, it takes a lot of time

canvas.drawBitmap(bmp, 0, 0, paint);


2. Display camera preview on a GLSurfaceView as a texture. Here I was displaying only the luminance data (grayscale image), which is quite easy: it requires only one System.arraycopy() per frame. I can get about 12 fps, but I need to apply some filters to the preview and it seems that this can't be done fast in OpenGL ES 1.x, so I can't use this solution. Some details of this are in another question.
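
For context, the luminance-only upload is roughly this sketch (texture and buffer setup omitted; previewWidth, previewHeight, yPlane, yBuffer and textureId are illustrative):

// The first previewWidth*previewHeight bytes of an NV21 frame are the Y plane,
// so a grayscale preview only needs that block copied and uploaded as a texture.
System.arraycopy(data, 0, yPlane, 0, previewWidth * previewHeight);
yBuffer.clear();
yBuffer.put(yPlane);
yBuffer.position(0);

// Later, on the GL thread (OpenGL ES 1.x):
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE,
        previewWidth, previewHeight, 0,
        GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, yBuffer);

(In practice the ES 1.x texture usually has to be a power-of-two size, so the frame ends up copied into a slightly larger texture.)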


3. Display camera preview on a (GL)SurfaceView using the NDK to process the YUV data. I found a solution here that uses a C function and the NDK, but I didn't manage to use it; there are some more details here. In any case, that solution is written to return a ByteBuffer for display as a texture in OpenGL, so it won't be faster than the previous attempt. I would have to modify it to return an int[] array that can be drawn with canvas.drawBitmap(), but I don't understand C well enough to do this.
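
To be clear, what I think I'd need on the Java side is something like this (the class, library and method names are hypothetical, not the ones from the linked solution):

// Hypothetical Java-side declaration: a native routine that fills an ARGB int[]
// which could then be drawn with canvas.drawBitmap(int[], ...) or Bitmap.setPixels().
public class NativeYuv {
    static {
        System.loadLibrary("nativeyuv"); // hypothetical .so name
    }

    // Would convert an NV21 frame to ARGB; 'argbOut' must hold width*height ints.
    public static native void nv21ToArgb(byte[] nv21, int width, int height, int[] argbOut);
}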


So, is there any other way that I'm missing, or some improvement to the attempts I've tried?


1 Reply


I'm working on exactly the same issue, but haven't got quite as far as you have.

Have you considered drawing the pixels directly to the canvas without encoding them to JPEG first? Inside the OpenCV kit http://sourceforge.net/projects/opencvlibrary/files/opencv-android/2.3.1/OpenCV-2.3.1-android-bin.tar.bz2/download (which doesn't actually use OpenCV, don't worry), there's a project called tutorial-0-androidcamera that demonstrates converting the YUV pixels to RGB and then writing them directly to a bitmap.

The relevant code is essentially:

public void onPreviewFrame(byte[] data, Camera camera, int width, int height) {
    // 'data' is an NV21 frame: width*height luminance (Y) bytes, followed by
    // half-resolution chroma rows interleaved as V, U. Width and height are the
    // preview dimensions (in the standard callback you'd read them from
    // Camera.Parameters).
    int frameSize = width * height;
    int[] rgba = new int[frameSize];

    // Convert YUV (NV21) to ARGB
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            int y = (0xff & ((int) data[i * width + j]));
            int v = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 0]));
            int u = (0xff & ((int) data[frameSize + (i >> 1) * width + (j & ~1) + 1]));
            y = y < 16 ? 16 : y;

            int r = Math.round(1.164f * (y - 16) + 1.596f * (v - 128));
            int g = Math.round(1.164f * (y - 16) - 0.813f * (v - 128) - 0.391f * (u - 128));
            int b = Math.round(1.164f * (y - 16) + 2.018f * (u - 128));

            // Clamp to [0, 255]
            r = r < 0 ? 0 : (r > 255 ? 255 : r);
            g = g < 0 ? 0 : (g > 255 ? 255 : g);
            b = b < 0 ? 0 : (b > 255 ? 255 : b);

            // Pack as ARGB_8888: alpha in the top byte, then red, green, blue
            rgba[i * width + j] = 0xff000000 | (r << 16) | (g << 8) | b;
        }

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.setPixels(rgba, 0 /* offset */, width /* stride */, 0, 0, width, height);
    Canvas canvas = mHolder.lockCanvas();
    if (canvas != null) {
        // Center the frame in the surface
        canvas.drawBitmap(bmp, (canvas.getWidth() - width) / 2,
                (canvas.getHeight() - height) / 2, null);
        mHolder.unlockCanvasAndPost(canvas);
    } else {
        Log.w(TAG, "Canvas is null!");
    }
    bmp.recycle();
}

Of course you'd have to adapt it to meet your needs (e.g. not allocating rgba on each frame), but it might be a start. I'd love to hear whether it works for you or not; I'm still fighting problems orthogonal to yours at the moment.
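
For instance, a rough sketch of the per-frame reuse I mean (field names are illustrative):

// Allocate the conversion buffer and the bitmap once, then reuse them every frame.
private int[] mRgba;
private Bitmap mBitmap;

private void ensureBuffers(int width, int height) {
    if (mRgba == null || mRgba.length != width * height) {
        mRgba = new int[width * height];
        mBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    }
}

// In onPreviewFrame: call ensureBuffers(width, height), convert into mRgba, then
// mBitmap.setPixels(mRgba, 0, width, 0, 0, width, height) and draw mBitmap as above.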

