OpenCV's remap() uses a real-valued index grid to sample a grid of values from an image using bilinear interpolation, and returns the grid of samples as a new image.
To be precise, let:

A = an image
X = a grid of real-valued X coords into the image
Y = a grid of real-valued Y coords into the image
B = remap(A, X, Y)

Then for all pixel coordinates i, j:

B[i, j] = A(X[i, j], Y[i, j])
where the round-braces notation A(x, y) denotes sampling the pixel value of image A at the float-valued coordinates x and y, using bilinear interpolation.
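For concreteness, here is a minimal sketch of that contract (the 4x4 test image and the identity grid are my own illustration; note that cv2.remap expects float32 maps, with X holding x/column coordinates and Y holding y/row coordinates):

```python
import cv2
import numpy as np

# A small test image.
A = np.arange(16, dtype=np.float32).reshape(4, 4)

# Identity index grid: X[i, j] = j, Y[i, j] = i,
# so remapping should reproduce A exactly.
X, Y = np.meshgrid(np.arange(4, dtype=np.float32),
                   np.arange(4, dtype=np.float32))

B = cv2.remap(A, X, Y, interpolation=cv2.INTER_LINEAR)
assert np.allclose(B, A)

# Shifting X by 0.5 samples halfway between horizontal neighbors.
# (The last column falls outside the image and blends with the
# default constant border value.)
B_half = cv2.remap(A, X + 0.5, Y, interpolation=cv2.INTER_LINEAR)
```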
My question is: given an index grid X, Y, how can I generate an "inverse grid" X^-1, Y^-1 such that:

X(X^-1[i, j], Y^-1[i, j]) = i
Y(X^-1[i, j], Y^-1[i, j]) = j

and

X^-1(X[i, j], Y[i, j]) = i
Y^-1(X[i, j], Y[i, j]) = j

for all integer pixel coordinates i, j?
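As a sanity check, those four conditions can be tested numerically. Below is a sketch in plain NumPy (the helper names bilinear_sample and is_valid_inverse are mine, and I adopt the convention above that x indexes the first axis of a grid):

```python
import numpy as np

def bilinear_sample(A, X, Y):
    """Evaluate A(x, y) by bilinear interpolation, with x indexing the
    first axis and y the second. Coordinates are clamped to the image."""
    Xc = np.clip(X, 0, A.shape[0] - 1.0)
    Yc = np.clip(Y, 0, A.shape[1] - 1.0)
    x0 = np.minimum(np.floor(Xc).astype(int), A.shape[0] - 2)
    y0 = np.minimum(np.floor(Yc).astype(int), A.shape[1] - 2)
    fx, fy = Xc - x0, Yc - y0
    return ((1 - fx) * (1 - fy) * A[x0, y0]
            + fx * (1 - fy) * A[x0 + 1, y0]
            + (1 - fx) * fy * A[x0, y0 + 1]
            + fx * fy * A[x0 + 1, y0 + 1])

def is_valid_inverse(X, Y, Xinv, Yinv, atol=1e-3):
    """Check all four round-trip conditions against the identity grids."""
    I, J = np.meshgrid(np.arange(X.shape[0]), np.arange(X.shape[1]),
                       indexing="ij")
    return (np.allclose(bilinear_sample(X, Xinv, Yinv), I, atol=atol)
            and np.allclose(bilinear_sample(Y, Xinv, Yinv), J, atol=atol)
            and np.allclose(bilinear_sample(Xinv, X, Y), I, atol=atol)
            and np.allclose(bilinear_sample(Yinv, X, Y), J, atol=atol))
```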
FWIW, the image and the index maps X and Y all have the same shape. However, there is no a priori structure to the index maps X and Y; for example, they're not necessarily affine or rigid transforms. They may even be non-invertible, e.g. if X, Y map multiple pixels in A to the same exact pixel coordinate in B. I'm looking for ideas for a method that will find a reasonable inverse map if one exists.
The solution need not be OpenCV-based; I'm not using OpenCV, but another library that has a remap() implementation. While any suggestions are welcome, I'm particularly keen on something that is "mathematically correct", i.e. if my map M is perfectly invertible, the method should find the perfect inverse, within some small margin of machine precision.
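For reference, one approach I've sketched is fixed-point iteration on the displacement field: write the forward map as identity plus a displacement d, so the inverse G satisfies G(q) = q - d(G(q)), and iterate that from the identity. The function name invert_map and the iteration count below are my own placeholders; this assumes the map is smooth and close enough to the identity to converge, and it won't resolve genuinely folded regions.

```python
import cv2
import numpy as np

def invert_map(X, Y, iterations=20):
    """Invert an index grid by fixed-point iteration on the displacement
    field. Follows cv2.remap's convention: X holds x (column) coordinates
    and Y holds y (row) coordinates, both float32."""
    X = X.astype(np.float32)
    Y = Y.astype(np.float32)
    h, w = X.shape
    J, I = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    DX, DY = X - J, Y - I            # displacement of the forward map
    Xinv, Yinv = J.copy(), I.copy()  # initial guess: identity
    for _ in range(iterations):
        # Sample the displacement at the current inverse guess;
        # out-of-range samples take cv2's default constant border (0).
        dx = cv2.remap(DX, Xinv, Yinv, cv2.INTER_LINEAR)
        dy = cv2.remap(DY, Xinv, Yinv, cv2.INTER_LINEAR)
        Xinv, Yinv = J - dx, I - dy
    return Xinv, Yinv
```

When the map is perfectly invertible and the iteration contracts, the result approaches the true inverse up to interpolation error, which seems close to the "mathematically correct" behavior I'm after.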