Frames are in points, not pixels. When a 2.0 scale factor is applied to the UIView hosting your CAEAGLLayer, the layer is backed by twice as many pixels in each dimension, but its frame's point size remains the same.
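To make that point/pixel relationship concrete, here is a minimal C sketch. The struct names and the `backing_pixels` helper are illustrative stand-ins, not UIKit API; the only claim is the arithmetic: backing pixel dimensions are the point dimensions multiplied by the content scale factor.

```c
/* Illustrative stand-ins for a view's size in points and its
 * renderbuffer's size in pixels (not UIKit types). */
typedef struct { double width, height; } SizePoints;
typedef struct { int width, height; } SizePixels;

/* Backing pixel dimensions = point dimensions * contentScaleFactor.
 * A scale of 1.0 models a standard display, 2.0 a Retina display. */
static SizePixels backing_pixels(SizePoints pts, double scale) {
    SizePixels px = { (int)(pts.width * scale), (int)(pts.height * scale) };
    return px;
}
```

So a 320 x 480 point view at scale 2.0 is backed by a 640 x 960 pixel renderbuffer, while its frame still reports 320 x 480 points.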
If you look at the backing width and height for the color renderbuffer attached to the CAEAGLLayer, using code like the following:

    GLint backingWidth, backingHeight;
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
you should see that the width and height of the renderbuffer on a Retina display are twice the values they are on a standard iPhone display.
The code you show above should cause nice, sharp rendering on a Retina display.
EDIT (12/22/2010): In response to your further question: the current version of the avTouch sample code mistakenly looks up the bounds of the OpenGL-hosting view instead of using the backing width and height of the renderbuffer. With a non-1.0 scale factor, this causes the OpenGL content to be drawn at half size.
To fix this, replace the appropriate section within _drawView in GLLevelMeter with the following code:
    // Derive the drawing bounds from the renderbuffer's pixel dimensions,
    // not from the view's point bounds.
    if (_vertical)
    {
        glTranslatef(0., _backingWidth, 0.);
        glScalef(1., -1., 1.);
        bds = CGRectMake(0., 0., _backingWidth, _backingHeight);
    }
    else
    {
        glTranslatef(0., _backingHeight, 0.);
        glRotatef(-90., 0., 0., 1.);
        bds = CGRectMake(0., 0., _backingHeight, _backingWidth);
    }
This will cause everything to work in the appropriate pixel space, and lead to crisp rendering at the correct size in the equalizer. I've filed a bug report on this.
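The bounds computation in that fix can be sketched in plain C. The `Rect` type and `meter_bounds` helper below are hypothetical, mirroring only the logic of the corrected code: the vertical meter uses the renderbuffer's pixel dimensions directly, while the horizontal meter is rotated -90 degrees, so width and height swap.

```c
#include <stdbool.h>

/* Illustrative stand-in for CGRect. */
typedef struct { double x, y, width, height; } Rect;

/* Drawing bounds in pixel space, mirroring the corrected _drawView logic:
 * the horizontal (non-vertical) meter is rotated, so the buffer's width
 * and height trade places. */
static Rect meter_bounds(int backingWidth, int backingHeight, bool vertical) {
    if (vertical)
        return (Rect){0., 0., (double)backingWidth, (double)backingHeight};
    return (Rect){0., 0., (double)backingHeight, (double)backingWidth};
}
```

The key point is simply that every value fed into the bounds comes from the renderbuffer's backing store, so the result is correct at any scale factor.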