iPhone - OpenGL ES 2D rendering into an image

I need to write an OpenGL ES 2D renderer on iOS. It should draw primitives such as lines and polygons into a 2D image (it will render a vector map). What is the best way to get an image out of the OpenGL context for this task? Should I render the primitives into a texture and then extract the image from it, or is there a better approach? It would also be great if someone could point to examples or tutorials that resemble what I need (2D GL rendering into an image). Thanks in advance!


1 Reply


If you need to render an OpenGL ES 2-D scene, then extract an image of that scene to use outside of OpenGL ES, you have two main options.

The first is to simply render your scene and use glReadPixels() to grab the scene's RGBA data and place it in a byte array, as in the following:

    // Four bytes (RGBA) per pixel across the full framebuffer
    NSUInteger totalBytesForImage = (int)currentFBOSize.width * (int)currentFBOSize.height * 4;
    GLubyte *rawImagePixels = (GLubyte *)malloc(totalBytesForImage);
    glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, rawImagePixels);
    // Do something with the image
    free(rawImagePixels);
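
If what you ultimately need is a UIImage, you can wrap those RGBA bytes in a CGImage. The following is only a sketch, not code from the original answer: it assumes the rawImagePixels and currentFBOSize names from above, and note that glReadPixels() returns rows bottom-up, so the result may appear vertically flipped:

    // Sketch: wrapping the RGBA bytes from glReadPixels() in a UIImage.
    // The buffer must stay alive as long as the data provider uses it,
    // so don't free(rawImagePixels) until you're done with the image.
    size_t width = (size_t)currentFBOSize.width;
    size_t height = (size_t)currentFBOSize.height;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rawImagePixels, width * height * 4, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                       kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:cgImage]; // may need flipping for GL's bottom-up rows
    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);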

The second, and much faster, way of doing this is to render your scene to a texture-backed framebuffer object (FBO), where the texture has been provided by iOS 5.0's texture caches. I describe this approach in this answer, although I don't show the code for raw data access there.

You do the following to set up the texture cache and bind the FBO texture:

    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)[[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context], NULL, &rawDataTextureCache);
    if (err) 
    {
        NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
    }

    // Code originally sourced from http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/

    CFDictionaryRef empty; // empty value for attr value.
    CFMutableDictionaryRef attrs;
    empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                               NULL,
                               NULL,
                               0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                      1,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);

    CFDictionarySetValue(attrs,
                         kCVPixelBufferIOSurfacePropertiesKey,
                         empty);

    // Alternatively, a pixel buffer could be pulled from an existing pool (e.g., an AVAssetWriter input's):
    //CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);

    CVPixelBufferCreate(kCFAllocatorDefault, 
                        (int)imageSize.width, 
                        (int)imageSize.height,
                        kCVPixelFormatType_32BGRA,
                        attrs,
                        &renderTarget);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault,
                                                  rawDataTextureCache, renderTarget,
                                                  NULL, // texture attributes
                                                  GL_TEXTURE_2D,
                                                  GL_RGBA, // opengl format
                                                  (int)imageSize.width, 
                                                  (int)imageSize.height,
                                                  GL_BGRA, // native iOS format
                                                  GL_UNSIGNED_BYTE,
                                                  0,
                                                  &renderTexture);
    CFRelease(attrs);
    CFRelease(empty);
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
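
Note that the above assumes a framebuffer object has already been generated and bound before the glFramebufferTexture2D() call. As a rough sketch of that missing setup (the framebuffer variable name here is mine, not from the original code):

    // Sketch: generate and bind the FBO that the texture gets attached to,
    // then verify completeness after the glFramebufferTexture2D() call above.
    GLuint framebuffer;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glViewport(0, 0, (int)imageSize.width, (int)imageSize.height);

    // ... attach the texture as shown above, then:
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete framebuffer: %x", status);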

and then you can just read directly from the bytes that back this texture (in BGRA format, not the RGBA of glReadPixels()) using something like:

    CVPixelBufferLockBaseAddress(renderTarget, 0);
    _rawBytesForImage = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
    // Do something with the bytes
    CVPixelBufferUnlockBaseAddress(renderTarget, 0);
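
One caveat when walking those bytes: IOSurface-backed pixel buffers can pad each row, so query CVPixelBufferGetBytesPerRow() instead of assuming width * 4, and make sure rendering has finished (e.g., with glFinish()) before reading. A sketch of row-by-row access under those assumptions:

    glFinish(); // ensure the GPU has finished rendering into the texture

    CVPixelBufferLockBaseAddress(renderTarget, 0);
    GLubyte *bgraBytes = (GLubyte *)CVPixelBufferGetBaseAddress(renderTarget);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget); // may exceed width * 4
    for (size_t row = 0; row < (size_t)imageSize.height; row++)
    {
        GLubyte *rowStart = bgraBytes + row * bytesPerRow;
        // rowStart points at the first pixel of this row, in B, G, R, A byte order
    }
    CVPixelBufferUnlockBaseAddress(renderTarget, 0);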

However, if you just want to reuse your image within OpenGL ES, you only need to render your scene to a texture-backed FBO and then use that texture in a second rendering pass.
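
As a rough sketch of such a second pass (the program, textureUniform, and screenFramebuffer names are placeholders; shader compilation and the quad's vertex attributes are omitted):

    // Sketch: sample the FBO texture in a second pass drawn to the screen.
    glBindFramebuffer(GL_FRAMEBUFFER, screenFramebuffer);
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture));
    glUniform1i(textureUniform, 0);
    // ... set up vertex attributes for a full-screen quad, then:
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);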

If you want to see this in action, I show an example of rendering to a texture and then performing some processing on it in the CubeExample sample application within my open source GPUImage framework.

