So, I had quite the same idea as fmw42 mentions in the comments, but instead of alpha blending, I was thinking of plain linear blending using appropriate "blend masks" (which are the inverses of the masks you would use for alpha blending).
For the sake of simplicity, I assume two images of identical size here. As fmw42 mentioned, you should use the "interesting" parts of the images, for example obtained by cropping (see the sketch after the outputs below). Let's have a look at the code:
import cv2
import numpy as np
# Some input images
img1 = cv2.resize(cv2.imread('path/to/your/image1.png'), (400, 300))
img2 = cv2.resize(cv2.imread('path/to/your/image2.png'), (400, 300))
# Generate blend masks, here: linear, horizontal fading from 1 to 0 and from 0 to 1
mask1 = np.repeat(np.tile(np.linspace(1, 0, img1.shape[1]), (img1.shape[0], 1))[:, :, np.newaxis], 3, axis=2)
mask2 = np.repeat(np.tile(np.linspace(0, 1, img2.shape[1]), (img2.shape[0], 1))[:, :, np.newaxis], 3, axis=2)
# Generate output by linear blending
final = np.uint8(img1 * mask1 + img2 * mask2)
# Outputs
cv2.imshow('img1', img1)
cv2.imshow('img2', img2)
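# Note: imshow displays the float-valued masks by mapping [0, 1] to the full intensity range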
cv2.imshow('mask1', mask1)
cv2.imshow('mask2', mask2)
cv2.imshow('final', final)
cv2.waitKey(0)
cv2.destroyAllWindows()
These are the inputs and masks:
This would be the output:
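As a side note on the size assumption from above: if your two inputs differ in size, you could crop (or resize) both to a common region of interest before blending. Just a minimal sketch of that idea, with the crop window and per-image offsets chosen arbitrarily for illustration:
import cv2
img1 = cv2.imread('path/to/your/image1.png')
img2 = cv2.imread('path/to/your/image2.png')
# Hypothetical crop window: both images are cut to the same (height, width),
# with per-image offsets chosen so the "interesting" parts are framed
h, w = 300, 400
y1, x1 = 50, 100   # top-left corner of the crop in img1
y2, x2 = 0, 60     # top-left corner of the crop in img2
crop1 = img1[y1:y1 + h, x1:x1 + w]
crop2 = img2[y2:y2 + h, x2:x2 + w]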
The linear "blend masks" are created by NumPy's linspace
method, and some repeating of the vector by NumPy's tile
and repeat
methods. Maybe, that part can be further optimized.
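For example, one possible optimization is to rely on NumPy broadcasting instead of the explicit tile and repeat calls; a minimal sketch of that idea, assuming the same (400, 300) inputs as above:
import cv2
import numpy as np
img1 = cv2.resize(cv2.imread('path/to/your/image1.png'), (400, 300))
img2 = cv2.resize(cv2.imread('path/to/your/image2.png'), (400, 300))
# A single ramp of shape (1, width, 1); broadcasting expands it over all
# rows and channels during the multiplication, so no tile/repeat is needed
ramp = np.linspace(1, 0, img1.shape[1])[np.newaxis, :, np.newaxis]
# (1 - ramp) is the complementary mask, so the per-pixel mask sum is exactly 1
final = np.uint8(img1 * ramp + img2 * (1 - ramp))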
Caveat: At least for the presented linear blending, ensure for every pixel you generate by mask1[y, x] * img1[y, x] + mask2[y, x] * img2[y, x] that mask1[y, x] + mask2[y, x] <= 1, or you might get "over-exposure" for these pixels (values above 255, which overflow when cast back to uint8).
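A quick sanity check along those lines, reusing the arrays from the code above (the tolerance value is just my arbitrary pick), might look like this:
import numpy as np
# Verify the per-pixel mask sum never exceeds 1 (small tolerance for float error)
assert np.all(mask1 + mask2 <= 1.0 + 1e-6)
# Alternatively, clip before the uint8 cast, so over-exposed pixels saturate instead of wrapping
final = np.uint8(np.clip(img1 * mask1 + img2 * mask2, 0, 255))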
Hope that helps!