
iphone - Most efficient way to draw part of an image in iOS

Given a UIImage and a CGRect, what is the most efficient way (in memory and time) to draw the part of the image corresponding to the CGRect (without scaling)?

For reference, this is how I currently do it:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Offset the dirty rect by the frame origin to get the source rect in image coordinates.
    CGRect frameRect = CGRectMake(frameOrigin.x + rect.origin.x,
                                  frameOrigin.y + rect.origin.y,
                                  rect.size.width,
                                  rect.size.height);
    // Crop the backing CGImage to that rect.
    CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, frameRect);
    // Flip the context vertically, since CGContextDrawImage uses a bottom-left origin.
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, rect, imageRef);
    CGImageRelease(imageRef);
}

Unfortunately, this seems extremely slow with medium-sized images and a high setNeedsDisplay frequency. Playing with UIImageView's frame and clipsToBounds produces better results (with less flexibility).



1 Reply


I'm guessing you're doing this to display part of an image on screen, because you mentioned UIImageView. Optimization problems always need to be defined precisely.


Trust Apple for Regular UI stuff

Actually, UIImageView with clipsToBounds is one of the fastest and simplest ways to achieve your goal if your goal is just clipping a rectangular region of an image (not too big). Also, you don't need to send the setNeedsDisplay message.

Or you can try putting the UIImageView inside an empty UIView and setting clipping on the container view. With this technique, you can transform your image freely by setting its transform property in 2D (scaling, rotation, translation).
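
Here is a minimal sketch of that container-view approach (assuming ARC; the function name CroppedImageView and its parameters are placeholders, not anything from the question):

#import <UIKit/UIKit.h>

// Sketch: return a view that shows only cropRect of image, without re-drawing
// the bitmap on the CPU. The container clips; the image view is simply offset.
static UIView *CroppedImageView(UIImage *image, CGRect cropRect) {
    UIView *container = [[UIView alloc] initWithFrame:
        CGRectMake(0, 0, cropRect.size.width, cropRect.size.height)];
    container.clipsToBounds = YES;  // everything outside the container is clipped

    UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
    // Shift the image view so that cropRect lines up with the container's bounds.
    imageView.frame = CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                                 image.size.width, image.size.height);
    [container addSubview:imageView];

    // To show a different region later, just move the image view;
    // no setNeedsDisplay is needed.
    return container;
}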

If you need 3D transformation, you can still use CALayer with the masksToBounds property, but using CALayer directly will gain you very little extra performance, usually not enough to matter.
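
The CALayer version looks roughly like this (again just a sketch with the same placeholder names, assuming ARC and QuartzCore):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: the same clipping idea one level down, where a CATransform3D is available.
static CALayer *CroppedImageLayer(UIImage *image, CGRect cropRect) {
    CALayer *container = [CALayer layer];
    container.frame = CGRectMake(0, 0, cropRect.size.width, cropRect.size.height);
    container.masksToBounds = YES;                     // clip sublayers to the container

    CALayer *imageLayer = [CALayer layer];
    imageLayer.contents = (__bridge id)image.CGImage;  // the image becomes the layer's contents
    imageLayer.frame = CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                                  image.size.width, image.size.height);

    CATransform3D t = CATransform3DIdentity;
    t.m34 = -1.0 / 500.0;                                            // simple perspective
    imageLayer.transform = CATransform3DRotate(t, M_PI_4, 0, 1, 0);  // e.g. rotate about the y-axis

    [container addSublayer:imageLayer];
    return container;
}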

Anyway, you need to know all of the low-level details to use them properly for optimization.


Why is that one of the fastest ways?

UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is a virtually direct interface to the GPU. This means UIKit is GPU-accelerated.

So if you use them properly (that is, within the designed limitations), they will perform as well as a plain OpenGL implementation. If you use just a few images for display, you'll get acceptable performance with a UIView implementation because it gets the full acceleration of the underlying OpenGL (which means GPU acceleration).

Anyway, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders, as in a game app, you should use OpenGL directly, because CALayer lacks many options for optimization at the lower levels. In any case, at least for optimizing UI stuff, it's incredibly hard to beat Apple.


Why is your method slower than UIImageView?

What you need to understand is GPU acceleration. On all recent hardware, fast graphics performance is achieved only with the GPU. So the point is whether the method you're using is implemented on top of the GPU or not.

In my opinion, CGImage drawing methods are not implemented with the GPU. I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not sure about it. Anyway, I believe CGImage is implemented on the CPU because:

  1. Its API looks like it was designed for the CPU, with things like a bitmap-editing interface and text drawing. These don't fit a GPU interface very well.
  2. The bitmap context interface allows direct memory access, which means its backing storage is located in CPU memory (see the sketch after this list). This may be somewhat different on unified memory architectures (and with the Metal API), but in any case, the original design intention of CGImage was clearly the CPU.
  3. Many other recently released Apple APIs mention GPU acceleration explicitly, which implies the older APIs were not accelerated. If there's no special mention, an API usually runs on the CPU by default.
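
To illustrate point 2, here is a sketch of how a bitmap context hands you an ordinary CPU-side buffer (the sizes and the single pixel read are only for illustration):

#import <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>

// Sketch: a CGBitmapContext draws into plain memory that you allocate and can
// read or modify directly - not into a GPU resource.
static void BitmapContextIsPlainMemory(void) {
    size_t width = 64, height = 64, bytesPerRow = width * 4;
    void *pixels = calloc(height, bytesPerRow);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContext =
        CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, colorSpace,
                              (CGBitmapInfo)kCGImageAlphaPremultipliedLast);

    // Core Graphics writes straight into pixels...
    CGContextSetRGBFillColor(bitmapContext, 1.0, 0.0, 0.0, 1.0);
    CGContextFillRect(bitmapContext, CGRectMake(0, 0, width, height));

    // ...and nothing stops you from touching the bytes yourself.
    uint8_t red = ((uint8_t *)pixels)[0];
    (void)red;

    CGContextRelease(bitmapContext);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
}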

So it seems to be done on the CPU, and graphics operations done on the CPU are a lot slower than on the GPU.

Simply clipping an image and compositing image layers are very simple and cheap operations for the GPU (compared to the CPU), so you can expect the UIKit library to take advantage of this, because the whole of UIKit is implemented on top of OpenGL.


About Limitations

Because optimization is a kind of micro-management work, specific numbers and small facts matter a lot. What counts as medium-sized? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (maybe larger in recent releases). If your image is larger than this, it will not work, or performance will degrade greatly (I believe UIImageView is optimized for images within the limits).

If you need to display huge images with clipping, you have to use a different optimization, such as CATiledLayer, and that's a totally different story.
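
A bare-bones sketch of the CATiledLayer route (assuming ARC, and that the view's point size matches the image's pixel size; tile size and level-of-detail tuning are omitted):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Sketch: a view backed by CATiledLayer, so the system requests small tiles on
// background threads instead of drawing one huge image in a single pass.
@interface TiledImageView : UIView
@property (nonatomic, strong) UIImage *image;   // the very large image to display
@end

@implementation TiledImageView

+ (Class)layerClass {
    return [CATiledLayer class];                // back this view with a tiled layer
}

- (void)drawRect:(CGRect)rect {
    // Called once per tile; rect covers just that tile.
    CGImageRef tile = CGImageCreateWithImageInRect(self.image.CGImage, rect);
    if (tile) {
        [[UIImage imageWithCGImage:tile] drawInRect:rect];
        CGImageRelease(tile);
    }
}

@end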

And don't go to OpenGL unless you want to learn every detail of OpenGL. It requires a full understanding of low-level graphics and at least 100 times more code.


About the Future

Though it is not very likely to happen, CGImage stuff (or anything else) doesn't have to stay stuck on the CPU forever. Don't forget to check the underlying technology of the API you're using. Still, the GPU is a very different beast from the CPU, so API designers usually mention GPU acceleration explicitly and clearly.

