With DirectX, you can display an image as a quad (rectangle) with a texture, and modify the texture (as an image) according to your needs.
You can use any DX version (9, 10, 11, 12) to do so, but it is a lot of work just to display a 256×256-pixel image.
Here is a sample collection that achieves this: https://github.com/microsoft/DirectXTK/wiki/Getting-Started,
in particular the sprites and textures tutorial: https://github.com/microsoft/DirectXTK/wiki/Sprites-and-textures.
You can get some help here: https://github.com/microsoft/DirectXTK/wiki/DeviceResources.
I find, however, that the samples are not so straightforward when the image to display is computed on the fly.
If you are looking for the lowest possible latency (sub-millisecond), or need to display in 10 bits (which you can only do with DX), and you have a CUDA-enabled GPU, you can display and modify an image entirely on the GPU with DX12: https://github.com/mprevot/CudaD3D12Update.
This is a case of CUDA–DX12 interop. It minimizes the back and forth between the CPU and the GPU, since the texture is both computed and displayed on the GPU; the CPU is only there to orchestrate the events.
The quad is defined as such:
TexVertex quadVertices[] =
{
    { { -x, -y, 0.0f }, { 0.0f, 0.0f } },
    { { -x,  y, 0.0f }, { 0.0f, 1.0f } },
    { {  x, -y, 0.0f }, { 1.0f, 0.0f } },
    { {  x,  y, 0.0f }, { 1.0f, 1.0f } },
};
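The TexVertex type itself is not shown in the snippet; a minimal sketch matching the initializer lists above (a 3-component position followed by a 2-component UV) could look like this — the field names and layout here are assumptions, not the repo's exact code:

```cpp
#include <cstddef>

// Hypothetical vertex type matching the initializers above:
// position (x, y, z) followed by a texture coordinate (u, v).
struct Float3 { float x, y, z; };
struct Float2 { float u, v; };

struct TexVertex
{
    Float3 position; // quad corner position
    Float2 uv;       // texture coordinate in [0, 1]
};

// The D3D12 input layout would describe the two attributes, with the
// UV offset taken from the struct layout (12 bytes after the position).
constexpr size_t kUvOffset = offsetof(TexVertex, uv);
```

The input layout passed to the pipeline state object would then declare one `POSITION` element at offset 0 and one `TEXCOORD` element at `kUvOffset`.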
and then uploaded. The texture is defined as such:
TextureChannels = 4;
TextureWidth = m_width;
TextureHeight = m_height;
const auto textureSurface = TextureWidth * TextureHeight;
const auto texturePixels = textureSurface * TextureChannels;
const auto textureSizeBytes = sizeof(float) * texturePixels;
const auto texFormat = TextureChannels == 4 ? DXGI_FORMAT_R32G32B32A32_FLOAT : DXGI_FORMAT_R32G32B32_FLOAT;
const auto texDesc = CD3DX12_RESOURCE_DESC::Tex2D(texFormat, TextureWidth, TextureHeight, 1, 1, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_SIMULTANEOUS_ACCESS);
ThrowIfFailed(m_device->CreateCommittedResource(&CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT), D3D12_HEAP_FLAG_SHARED,
    &texDesc, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, nullptr, IID_PPV_ARGS(&TextureArray)));
NAME_D3D12_OBJECT(TextureArray);
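As a quick sanity check on the size computation above (the helper below is illustrative, not from the repo): the texture occupies width × height × channels × sizeof(float) bytes.

```cpp
#include <cstddef>

// Hypothetical helper mirroring the size computation above:
// width * height pixels, `channels` floats per pixel, 4 bytes per float.
constexpr size_t textureSizeBytes(size_t width, size_t height, size_t channels)
{
    const size_t surface = width * height;      // TextureWidth * TextureHeight
    const size_t pixels  = surface * channels;  // * TextureChannels
    return pixels * sizeof(float);              // 32-bit float components
}
```

For the 256×256 RGBA image mentioned earlier, `textureSizeBytes(256, 256, 4)` comes to 1,048,576 bytes (1 MiB) — a reminder that float textures are four times the size of the usual 8-bit-per-channel ones.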
Then the DX12 texture is imported into CUDA as a surface, and you run a kernel (the equivalent of a shader in DX parlance) to modify it. The good thing is that you do not have to update it at a predefined frequency: the surface can be updated and displayed at will, and only then.
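The import path can be sketched roughly as follows, using the CUDA external-memory interop API (`cudaImportExternalMemory` and friends). This is a sketch of the general technique, not verbatim code from CudaD3D12Update; the function names `importTexture`, `makeSurface` and `fillRed` are mine, and error checking is omitted:

```cuda
#include <cuda_runtime.h>

// On the D3D12 side, a shared handle is first created for the texture,
// e.g. m_device->CreateSharedHandle(TextureArray.Get(), nullptr,
//                                   GENERIC_ALL, nullptr, &sharedHandle);

// Import that Win32 handle into CUDA as external memory.
cudaExternalMemory_t importTexture(void* sharedHandle, size_t sizeBytes)
{
    cudaExternalMemoryHandleDesc memDesc{};
    memDesc.type = cudaExternalMemoryHandleTypeD3D12Resource;
    memDesc.handle.win32.handle = sharedHandle;
    memDesc.size = sizeBytes;
    memDesc.flags = cudaExternalMemoryDedicated;
    cudaExternalMemory_t extMem{};
    cudaImportExternalMemory(&extMem, &memDesc);
    return extMem;
}

// Map the memory as a mipmapped array, take level 0, wrap it in a surface.
cudaSurfaceObject_t makeSurface(cudaExternalMemory_t extMem,
                                unsigned width, unsigned height)
{
    cudaExternalMemoryMipmappedArrayDesc arrayDesc{};
    arrayDesc.formatDesc = cudaCreateChannelDesc<float4>(); // RGBA32F texture
    arrayDesc.extent = make_cudaExtent(width, height, 0);
    arrayDesc.numLevels = 1;
    cudaMipmappedArray_t mipArray{};
    cudaExternalMemoryGetMappedMipmappedArray(&mipArray, extMem, &arrayDesc);

    cudaArray_t level0{};
    cudaGetMipmappedArrayLevel(&level0, mipArray, 0);

    cudaResourceDesc resDesc{};
    resDesc.resType = cudaResourceTypeArray;
    resDesc.res.array.array = level0;
    cudaSurfaceObject_t surf{};
    cudaCreateSurfaceObject(&surf, &resDesc);
    return surf;
}

// A kernel can then write pixels directly into the displayed texture.
__global__ void fillRed(cudaSurfaceObject_t surf,
                        unsigned width, unsigned height)
{
    const unsigned x = blockIdx.x * blockDim.x + threadIdx.x;
    const unsigned y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        surf2Dwrite(make_float4(1.f, 0.f, 0.f, 1.f), surf,
                    x * sizeof(float4), y); // x offset is in bytes
}
```

Synchronization between the CUDA stream and the DX12 command queue (e.g. via a shared fence imported with `cudaImportExternalSemaphore`) is still needed so the kernel does not write while the quad is being drawn.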
If you do not have a CUDA GPU, you can simply update the texture from the CPU (cf. the DX samples).
However, even though you did not ask for this, I believe it would be a lot easier to use something like WPF to display (and compute) such an image; the latencies will be much larger but still very acceptable, and you can only display 8-bit images. Under the hood WPF relies on DX, but to my knowledge you cannot modify its configuration. That is the way to go to get a result and start playing with it ASAP.