The odd colours you are getting are because Format8bppIndexed is a paletted format, and since you never edit the palette, it keeps the default generated Windows palette. In your case that palette is irrelevant anyway, because the data isn't really an 8-bit indexed image; it needs to be processed to convert it to RGB.
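As a side note, if you just want to inspect the raw 8-bit data as-is, you can replace that default palette with a grayscale one; this is purely for viewing, not part of the actual conversion. A minimal sketch, assuming bayerBitmap is the Format8bppIndexed bitmap you already create (System.Drawing and System.Drawing.Imaging):

public static void ApplyGrayscalePalette(Bitmap bayerBitmap)
{
    // Bitmap.Palette returns a copy, so edit it and assign it back.
    ColorPalette pal = bayerBitmap.Palette;
    for (Int32 i = 0; i < 256; i++)
        pal.Entries[i] = Color.FromArgb(i, i, i);
    bayerBitmap.Palette = pal;
}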
A quick google for BayerBG8 got me this page. The Bayer section there shows it's a rather peculiar transformation that uses specifically patterned indices on the image as the R, G and B components.
Wikipedia has a whole article on how this stuff is generally processed, but this YouTube video shows the basics:
Note that this is a sliding window; for the first pixel, the colours are
R G
G B
but for the second pixel, they'll be
G R
B G
and for one row down, the first one will use
G B
R G
You'll end up with an image that is one pixel less wide and high than the given dimensions, since the last pixel on each row and all pixels on the last row won't have the neighbouring data needed to get their full pixel data. There are apparently more advanced algorithms to get around that, but here I'll just go over the basic sliding-window method.
public static Byte[] BayerToRgb(Byte[] arr, ref Int32 width, ref Int32 height, ref Int32 stride, Boolean greenFirst, Boolean blueRowFirst)
{
    // The output loses one pixel in width and height; the last column and
    // last row have no neighbouring data to complete their pixels.
    Int32 actualWidth = width - 1;
    Int32 actualHeight = height - 1;
    Int32 actualStride = actualWidth * 3;
    Byte[] result = new Byte[actualStride * actualHeight];
    for (Int32 y = 0; y < actualHeight; y++)
    {
        Int32 curPtr = y * stride;
        Int32 resPtr = y * actualStride;
        Boolean blueRow = y % 2 == (blueRowFirst ? 0 : 1);
        for (Int32 x = 0; x < actualWidth; x++)
        {
            // Get correct colour components from the 2x2 sliding window:
            // the two green values sit on one diagonal, red and blue on the other.
            Boolean isGreen = (x + y) % 2 == (greenFirst ? 0 : 1);
            Byte cornerCol1 = isGreen ? arr[curPtr + 1] : arr[curPtr];
            Byte cornerCol2 = isGreen ? arr[curPtr + stride] : arr[curPtr + stride + 1];
            Byte greenCol1 = isGreen ? arr[curPtr] : arr[curPtr + 1];
            Byte greenCol2 = isGreen ? arr[curPtr + stride + 1] : arr[curPtr + stride];
            // Which corner is blue and which is red depends on the current row.
            Byte blueCol = blueRow ? cornerCol1 : cornerCol2;
            Byte redCol = blueRow ? cornerCol2 : cornerCol1;
            // 24bpp RGB is saved as [B, G, R].
            // Blue
            result[resPtr + 0] = blueCol;
            // Green: average of the two green values in the window.
            result[resPtr + 1] = (Byte)((greenCol1 + greenCol2) / 2);
            // Red
            result[resPtr + 2] = redCol;
            curPtr++;
            resPtr += 3;
        }
    }
    height = actualHeight;
    width = actualWidth;
    stride = actualStride;
    return result;
}
The parameters greenFirst and blueRowFirst indicate whether green is the first pixel encountered on the image, and whether the blue pixels are on the first or the second row. For your "BG" format, both of these should be false.
From that result, with the adjusted width, height and stride, you can build a new image using the method you already used, but with Format24bppRgb as the pixel format.
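In case it's useful, here is a rough sketch of that last step; the helper name and the LockBits/Marshal.Copy approach are just one way to do it and may differ from the conversion method you already have (it assumes System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices):

public static Bitmap BuildRgbImage(Byte[] bayerData, Int32 width, Int32 height, Int32 stride)
{
    // For the "BG" format: green is not the first pixel, blue is not on the first row.
    Byte[] rgbData = BayerToRgb(bayerData, ref width, ref height, ref stride, false, false);
    Bitmap image = new Bitmap(width, height, PixelFormat.Format24bppRgb);
    BitmapData data = image.LockBits(new Rectangle(0, 0, width, height),
                                     ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
    // Copy row by row: the Bitmap's internal stride is padded to a multiple of 4
    // and may differ from the tightly packed stride returned by BayerToRgb.
    for (Int32 y = 0; y < height; y++)
        Marshal.Copy(rgbData, y * stride, IntPtr.Add(data.Scan0, y * data.Stride), width * 3);
    image.UnlockBits(data);
    return image;
}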
Personally I use a somewhat more advanced method that takes the input stride into account and can handle indexed content. If you're interested, that method can be found here.
Note that the demosaicing method above is very basic and will show many of the expected artifacts. There are more advanced methods out there that analyse the data to get more accurate results based on how the image was taken, but it will probably take quite a bit of research to figure all that out and implement it yourself.
Here's a little test I did, starting from a Bayer-filtered image I found online (first image) which I converted to an 8-bit array (shown here as grayscale; second image). As you can see, my own demosaicing (third image) is far less accurate than the corrected version they got out of it (fourth image), and, notably, is one pixel smaller and thus shows a white border.
(Note that, unlike the examples above, this image starts with a green pixel, meaning the parameters to decode it had to be adjusted)