short story:
This is a follow-up question to: Fast Way to slice image into overlapping patches and merge patches to image
How must I adapt the code provided in the answer so that it works not only on images of shape (x, y) where each pixel is a single float, but also on images where each pixel is described by a (3, 3) matrix?
Further, how can I adapt the code so that it returns a generator, letting me iterate over all patches without having to hold them all in memory at once?
long story:
I am given an image of shape (x, y) in which each pixel is described by a (3, 3) matrix; the whole image can therefore be stored as an array of shape (x, y, 3, 3).
Given a target patch size such as (11, 11), I want to extract all overlapping patches from the (x, y) image.
Note that I do not want all patches of the (x, y, 3, 3) array, but patches of the (x, y) image whose pixels happen to be matrices.
I want to use these patches in a patch-classification algorithm: iterate over all patches, extract features, and train a classifier. But with a huge image and a large patch size, materializing all patches at once would exceed the available memory.
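For the generator part, I imagine something along these lines: a plain generator that yields one patch at a time, where each patch is a NumPy view into the image, so no per-patch copy is made (the function name `iter_patches` is just my placeholder; untested sketch):

```python
import numpy as np

def iter_patches(img, patch_shape):
    """Yield all overlapping patches of `img` one at a time.

    `img` may be 2-D (x, y) or 4-D (x, y, 3, 3); the trailing
    per-pixel dimensions are carried along unchanged.
    Each yielded patch is a view, not a copy.
    """
    x, y = patch_shape
    X, Y = img.shape[:2]
    for i in range(X - x + 1):
        for j in range(Y - y + 1):
            yield img[i:i + x, j:j + y]
```

This avoids the memory problem entirely, at the cost of giving up the single strided array, but I am not sure whether it is the idiomatic way.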
Possible solutions:
The question, therefore, is: how can I adapt the following code (from the linked answer) to fit the new input data?
    import numpy as np

    def patchify(img, patch_shape):
        img = np.ascontiguousarray(img)  # won't make a copy if not needed
        X, Y = img.shape
        x, y = patch_shape
        shape = (X - x + 1, Y - y + 1, x, y)  # number of patches, patch_shape
        # The right strides can be thought out by:
        # 1) thinking of `img` as a chunk of memory in C order
        # 2) asking how many items through that chunk of memory we must move
        #    when each of the indices i, j, k, l is incremented by one
        strides = img.itemsize * np.array([Y, 1, Y, 1])
        return np.lib.stride_tricks.as_strided(img, shape=shape, strides=strides)
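My own attempt at the adaptation looks like this: since shifting a patch window by one pixel is the same memory step as shifting one pixel in the image, the patch-index strides can simply reuse the image's own strides, and the trailing (3, 3) dimensions are appended unchanged (`patchify_4d` is my name for the sketch; I am unsure whether this is the intended approach):

```python
import numpy as np

def patchify_4d(img, patch_shape):
    """Strided view of all overlapping patches of a (X, Y, 3, 3) image.

    Returns an array of shape (X-x+1, Y-y+1, x, y, 3, 3) without
    copying any pixel data.
    """
    img = np.ascontiguousarray(img)  # won't make a copy if not needed
    X, Y = img.shape[:2]
    x, y = patch_shape
    shape = (X - x + 1, Y - y + 1, x, y) + img.shape[2:]
    # Moving the patch window by one pixel costs the same number of bytes
    # as moving one pixel in the image, so reuse the image's strides for
    # the patch indices and append the full stride tuple for the window.
    s = img.strides
    strides = (s[0], s[1]) + s
    return np.lib.stride_tricks.as_strided(img, shape=shape, strides=strides)
```

With an image of shape (x, y, 3, 3) and a patch size of (11, 11), this would give a view of shape (x-10, y-10, 11, 11, 3, 3), and one could still iterate over its first two axes lazily to keep memory flat.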