I created a test app on an iPad 2 that loaded 200 384x384-pixel JPEG2000 image files (117,964,800 bytes worth of raw pixels) using the following three methods: [UIImage imageNamed:], [UIImage imageWithContentsOfFile:], and [UIImage imageWithData:]. The JPEG2000 file set was 100 textures, which I then copied into an extra 100 files with a "copy" suffix to see whether iOS does any duplicate-file checking, which it does. More on that below.
The test was done in two steps (sketched below):
- Step 1: Simply load the images and store them in an array.
- Step 2: A separate button creates a UIImageView for each image and displays them all.
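A minimal sketch of the harness, with the three loading methods swapped in one at a time (textureNames and the file handling here are illustrative, not the exact test code):

```objc
// Step 1: load every file with one of the three methods and keep a strong
// reference so nothing is released between the two steps.
NSMutableArray *loadedImages = [NSMutableArray array];
for (NSString *name in textureNames) {  // 200 names, including the "copy" duplicates
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"jp2"];

    UIImage *image = [UIImage imageNamed:[name stringByAppendingPathExtension:@"jp2"]]; // method 1
    // UIImage *image = [UIImage imageWithContentsOfFile:path];                         // method 2
    // UIImage *image = [UIImage imageWithData:[NSData dataWithContentsOfFile:path]];   // method 3

    [loadedImages addObject:image];
}

// Step 2 (triggered by a separate button): wrap each image in a UIImageView
// and put it on screen, which is what eventually forces the pixels to decode.
for (UIImage *image in loadedImages) {
    [self.view addSubview:[[UIImageView alloc] initWithImage:image]];
}
```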
Here are the results:
[UIImage imageNamed:]
Step 1: Memory increased by roughly the combined size of the unique JPEG2000 files (about 50 KB each, so about 5 MB). I assume the duplicate files were not loaded separately at this point and were somehow consolidated by iOS, as memory would have gone up by about 10 MB if there were no duplicate checking.
Step 2: Memory went up significantly (to about 200 MB), presumably because the images were decoded into BGRA format in preparation for display in the UIImageViews. There appears to have been no duplicate filtering at this stage: separate raw pixel memory was allocated for every image. I'm not sure why, but this was about 80 MB more than the actual raw pixel data should require.
[UIImage imageWithContentsOfFile:]
Step 1: Memory usage was identical to [UIImage imageNamed:], so there was duplicate filtering at this stage.
Step 2: Memory usage went up to 130 MB. For some reason this is 70 MB lower than [UIImage imageNamed:], and much closer to the expected amount of raw pixel memory for the 200 images.
[UIImage imageWithData:] (with [NSData dataWithContentsOfFile:] used first)
Step 1: Memory usage was 15 MB. I assume there was no duplicate filtering here, as this is close to the total file size of all the JPEG2000 data.
Step 2: Memory usage went up to 139 MB. This is more than [UIImage imageWithContentsOfFile:], but not by much.
Summary
iOS appears to hold only the compressed data for a UIImage loaded by any of the above three methods until the raw pixels are actually needed, at which point the image is decoded.
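You can force that decode up front by drawing the image into an offscreen context yourself; a minimal sketch (this helper is my own illustration, not part of the test app):

```objc
// Draw the image once into an offscreen bitmap context; the image returned
// is backed by decoded pixels rather than the compressed file data.
static UIImage *ForceDecode(UIImage *image) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decoded;
}
```

This is useful when you want to pay the decode cost at load time instead of on first display.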
[UIImage imageNamed:] never deallocated the memory, because all my image views were referencing the images. It would have deallocated the memory of unreferenced images had I staggered the loading and allowed the run loop to execute (see the sketch below). One advantage is that repeated [UIImage imageNamed:] calls for the same image were essentially free. Do not use this method for anything other than GUI images, or you may run out of memory.
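Staggering would look something like this (a hypothetical sketch; loadImageAtIndex: and useImage: are names I made up for illustration):

```objc
// Load one image per run-loop pass, so the imageNamed: cache gets a chance
// to evict entries that nothing references anymore.
- (void)loadImageAtIndex:(NSUInteger)index {
    if (index >= self.textureNames.count) return;
    UIImage *image = [UIImage imageNamed:[self.textureNames objectAtIndex:index]];
    [self useImage:image];  // use the image without keeping a strong reference
    dispatch_async(dispatch_get_main_queue(), ^{
        [self loadImageAtIndex:index + 1];
    });
}
```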
[UIImage imageWithContentsOfFile:] behaves like [UIImage imageNamed:] in memory usage until the raw pixels are needed, at which point it is, for some reason, much more memory-efficient. This method also causes the memory to be freed immediately when the UIImage is deallocated. Repeated calls to [UIImage imageWithContentsOfFile:] with the same file appear to share a cached copy until all the UIImage objects referencing the file are deallocated.
[UIImage imageWithData:] does no caching or duplicate checking and always creates a new image.
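This is easy to see directly: two calls with the same data hand back two distinct images (the path here is illustrative):

```objc
NSData *data = [NSData dataWithContentsOfFile:path];
UIImage *first  = [UIImage imageWithData:data];
UIImage *second = [UIImage imageWithData:data];
// first and second are distinct objects, each with its own backing store;
// no cache lookup or duplicate check takes place.
NSLog(@"same object? %d", first == second);  // logs 0
```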
I ran the same test with PNG files. The Step 1 results for imageNamed: and imageWithContentsOfFile: showed even less memory being used (about 0.5 MB), and imageWithData: showed the sum total of all the compressed PNG files. My guess is that iOS simply stores a reference to the file and does nothing else with it until decode time. The Step 2 results for PNG were identical.