[Note: The code in the github example does not calculate the gradient on a pixel basis; it calculates it on a points basis. -Fattie]
The code is working in pixels. First, it fills a simple raster bitmap buffer with the pixel color data. That obviously has no notion of an image scale or unit other than pixels. Next, it creates a CGImage from that buffer (in a bit of an odd way). CGImage also has no notion of a scale or unit other than pixels.
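For concreteness, here is a minimal sketch of that pattern (not the github example's actual code; the helper name `CreateGradientImage` and the gradient itself are invented for illustration). Nothing in it records a scale factor; everything is raw pixels:

```objc
#import <CoreGraphics/CoreGraphics.h>

// Fill a raw pixel buffer, then wrap it in a CGImage. All units are pixels.
static CGImageRef CreateGradientImage(size_t width, size_t height) {
    size_t bytesPerRow = width * 4; // RGBA, 8 bits per component
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, 8,
                                                bytesPerRow, colorSpace,
                                                kCGImageAlphaPremultipliedLast);
    uint8_t *pixels = CGBitmapContextGetData(bitmap);
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            uint8_t *p = pixels + y * bytesPerRow + x * 4;
            p[0] = (uint8_t)(255 * x / width);  // red ramps left to right
            p[1] = (uint8_t)(255 * y / height); // green ramps top to bottom
            p[2] = 0;                           // no blue
            p[3] = 255;                         // opaque
        }
    }
    CGImageRef image = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    CGColorSpaceRelease(colorSpace);
    return image; // caller is responsible for CGImageRelease
}
```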
The issue comes in where the CGImage is drawn. Whether scaling is done at that point depends on the graphics context and how it has been configured. There's an implicit transform in the context that converts from user space (points, more or less) to device space (pixels).
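You can inspect that transform directly. This fragment assumes `ctx` is the CGContextRef handed to your drawing method and that UIKit is imported:

```objc
// The context's implicit user-space (points) to device-space (pixels)
// transform. On a 2x Retina display it typically includes a scale of 2,
// possibly combined with a flip, depending on how the context was set up.
CGAffineTransform t = CGContextGetUserSpaceToDeviceSpaceTransform(ctx);
NSLog(@"user->device: %@", NSStringFromCGAffineTransform(t));
```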
The -drawInContext: method ought to convert the rect using CGContextConvertRectToDeviceSpace() to get the rect for the image. Note that the unconverted rect should still be used for the call to CGContextDrawImage().
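Put together, a -drawInContext: override along those lines might look like this. It's a sketch using the hypothetical `CreateGradientImage` helper from above; the real fix would slot into the github example's own drawing code:

```objc
- (void)drawInContext:(CGContextRef)ctx {
    CGRect rect = self.bounds; // user space (points)
    // Convert to device space only to choose the image's pixel dimensions.
    CGRect deviceRect = CGContextConvertRectToDeviceSpace(ctx, rect);
    CGImageRef image = CreateGradientImage((size_t)deviceRect.size.width,
                                           (size_t)deviceRect.size.height);
    // Draw with the unconverted rect; the context's transform does the scaling.
    CGContextDrawImage(ctx, rect, image);
    CGImageRelease(image);
}
```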
So, for a 2x Retina display context, the original rect will be in points, say 100x200. The image rect will be doubled in size to represent pixels, 200x400. The draw operation will draw that image into the 100x200 rect, which might seem like it would scale the large, highly detailed image down, losing information. However, internally the draw operation scales the target rect to device space before doing the actual draw, so it fills a 200x400-pixel area from the 200x400-pixel image, preserving all of the detail.
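A quick way to verify those numbers (a UIKit-based check added here for illustration, not part of the original example):

```objc
// In a context rendered at 2x, a 100x200-point rect maps to 200x400 pixels.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100, 200), NO, 2.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect deviceRect = CGContextConvertRectToDeviceSpace(ctx, CGRectMake(0, 0, 100, 200));
NSLog(@"%@", NSStringFromCGRect(deviceRect)); // prints {{0, 0}, {200, 400}}
UIGraphicsEndImageContext();
```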