
opengl - Difference between format and internalformat

I've searched and read about this, but I still don't understand it.

What's the difference between a texture's internal format and its format in a call like the following?

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

Let's assume data is an array of 32 x 32 pixels with four bytes per pixel (unsigned char values 0-255) for red, green, blue, and alpha.

What's the difference between the first GL_RGBA and the second one? Why is GL_RGBA_INTEGER invalid in this context?


1 Reply


The format (7th argument), together with the type argument (8th), describes the data you pass in as the last argument. Together, format and type define the memory layout of that data.

internalFormat (2nd argument) defines the format that OpenGL should use to store the data internally.
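
To make the two roles concrete, here is a minimal sketch using the question's hypothetical 32 x 32 buffer, where the two deliberately differ: format/type describe the bytes you hand over, while internalFormat is what OpenGL keeps.

GLubyte data[32 * 32 * 4]; /* client pixels: 4 bytes each, R, G, B, A */

/* format/type (GL_RGBA + GL_UNSIGNED_BYTE) describe 'data';
   internalFormat (GL_RGB8) is what OpenGL stores, so alpha is
   discarded during the upload. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);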

Often, the two will be very similar, and in fact it is beneficial to make them directly compatible; otherwise the data has to be converted while it is loaded, which can hurt performance. Desktop OpenGL allows combinations that require conversion, while OpenGL ES restricts the supported combinations so that conversions are not needed in most cases.
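
For example, a directly compatible pairing, as a sketch: the sized internal format GL_RGBA8 (more on sized formats below) matches the GL_RGBA/GL_UNSIGNED_BYTE layout of the input exactly, so the driver can copy the pixels without a per-texel conversion.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);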

The reason GL_RGBA_INTEGER is not legal in this case is that there are rules about which combinations of format and internalFormat are supported. Here, GL_RGBA as the internalFormat specifies a normalized format, while GL_RGBA_INTEGER as the format specifies that the input consists of values that should be used as integers. There is no conversion defined between the two.
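
For comparison, a sketch of a case where GL_RGBA_INTEGER is valid: pair it with an unnormalized integer internalFormat such as GL_RGBA8UI (OpenGL 3.0 and later). A texture like this is sampled in shaders as a usampler2D rather than a sampler2D.

/* Integer pixel data stored in an integer internal format:
   no normalized/integer mismatch, so this combination is legal. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 32, 32, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, data);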

While GL_RGBA is still supported as an internalFormat for backwards compatibility, modern OpenGL generally uses sized internal formats. For example, to store the data as an 8-bit-per-component RGBA image, the internalFormat is GL_RGBA8.

Frankly, I think there would be cleaner ways to define these APIs, but this is just the way it works. It partly evolved this way to maintain backwards compatibility with OpenGL versions whose features were much more limited. Newer versions of OpenGL add the glTexStorage*() entry points, which make some of this nicer because they separate the allocation of internal storage from the specification of the data.
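
As a sketch of that newer path, assuming the same data buffer as above: glTexStorage2D (core in OpenGL 4.2, also available via ARB_texture_storage) allocates immutable storage with a sized internal format, and glTexSubImage2D supplies the pixels separately.

/* Allocate immutable storage: one mip level, sized internal format. */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 32, 32);

/* Upload the pixels; format/type still describe the client data. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 32, 32,
                GL_RGBA, GL_UNSIGNED_BYTE, data);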

