Probably the fastest way to do this would be to use OpenGL ES 2.0 shaders to apply the threshold to your image. My GPUImage framework encapsulates this so that you don't need to worry about the more technical aspects behind the scenes.
Using GPUImage, you could obtain a thresholded version of your UIImage using a GPUImageLuminanceThresholdFilter and code like the following:
// Wrap the input UIImage as a GPUImage source
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageLuminanceThresholdFilter *stillImageFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
stillImageFilter.threshold = 0.5; // luminance above 0.5 becomes white, below becomes black

[stillImageSource addTarget:stillImageFilter];
[stillImageFilter useNextFrameForImageCapture]; // must be called before processing
[stillImageSource processImage];

UIImage *imageWithAppliedThreshold = [stillImageFilter imageFromCurrentFramebuffer];
You can pass your color image straight into this, because the filter automatically extracts the luminance from each pixel and applies the threshold to that. Any pixel above the threshold goes to white, and any pixel below it goes to black. You can adjust the threshold to suit your particular lighting conditions.
However, an even better choice for something you're going to pass into Tesseract would be my GPUImageAdaptiveThresholdFilter, which is used in the same way as the GPUImageLuminanceThresholdFilter, only without a threshold value to set. The adaptive filter thresholds each pixel against the average luminance of a 9-pixel region around it, adjusting for local lighting conditions. This is specifically designed to help with OCR applications, so it might be the way to go here.
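As a sketch, the pipeline for the adaptive filter is nearly identical to the luminance-threshold code above — the only difference is the filter class, since there is no threshold property to configure (assuming the same `inputImage` as before):

    // Same pipeline, swapping in the adaptive filter. No threshold to set;
    // each pixel is compared against the average luminance of its local
    // neighborhood, which compensates for uneven lighting across the image.
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
    GPUImageAdaptiveThresholdFilter *stillImageFilter = [[GPUImageAdaptiveThresholdFilter alloc] init];

    [stillImageSource addTarget:stillImageFilter];
    [stillImageFilter useNextFrameForImageCapture];
    [stillImageSource processImage];

    UIImage *imageForTesseract = [stillImageFilter imageFromCurrentFramebuffer];

The resulting black-and-white UIImage can then be handed to Tesseract in place of the original color image.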
Example images from both types of filters can be found in this answer.
Note that the round trip through UIImage is slower than handling raw data, so these filters are much faster when acting on direct video or movie sources, and can run in real time on those inputs. I also have a raw pixel data output, which might be faster for feeding results into Tesseract.