I'm looking through Apple's Vision API documentation and I see a couple of classes that relate to text detection in UIImages:
1) class VNDetectTextRectanglesRequest
2) class VNTextObservation
It looks like they can detect characters, but I don't see a means to do anything with the characters. Once you've got characters detected, how would you go about turning them into something that can be interpreted by NSLinguisticTagger?
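For reference, here's roughly how far the detection side seems to go (a minimal sketch of my understanding; `detectTextRectangles` is just my own name for the function): you get bounding rectangles back, not strings.

```swift
import UIKit
import Vision

// Minimal sketch: run VNDetectTextRectanglesRequest on a UIImage.
// The results are VNTextObservation rectangles (plus optional per-character
// boxes), but no recognized text to hand to NSLinguisticTagger.
func detectTextRectangles(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectTextRectanglesRequest { request, _ in
        guard let observations = request.results as? [VNTextObservation] else { return }
        for observation in observations {
            // boundingBox is normalized, with the origin at the bottom-left
            print("text region:", observation.boundingBox)
            for characterBox in observation.characterBoxes ?? [] {
                print("  character box:", characterBox.boundingBox)
            }
        }
    }
    request.reportCharacterBoxes = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```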
Here's a post that gives a brief overview of Vision.
Thank you for reading.
SwiftOCR
I just got SwiftOCR to work with small sets of text.
https://github.com/garnele007/SwiftOCR uses https://github.com/Swift-AI/Swift-AI, which uses a NeuralNet-MNIST model for text recognition.
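For anyone else trying it, the basic usage is simple (a minimal sketch following SwiftOCR's README; `image` is assumed to be a small, tightly cropped UIImage of the text):

```swift
import UIKit
import SwiftOCR

let swiftOCR = SwiftOCR()

// SwiftOCR does best on short, tightly cropped runs of text
func recognizeText(in image: UIImage) {
    swiftOCR.recognize(image) { recognizedString in
        // called asynchronously with the recognized string
        print(recognizedString)
    }
}
```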
TODO : VNTextObservation > SwiftOCR
Will post an example using VNTextObservation once I have one connected to the other.
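Until then, I expect the glue to look roughly like this (untested sketch, all names my own: convert each observation's normalized, bottom-left-origin boundingBox to pixel coordinates, crop that region out of the source image, and hand the crop to SwiftOCR):

```swift
import UIKit
import Vision
import SwiftOCR

let swiftOCR = SwiftOCR()

// Untested sketch: crop each VNTextObservation out of the source image
// and pass the crop to SwiftOCR for recognition.
func recognize(observations: [VNTextObservation], in image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)

    for observation in observations {
        // Vision's boundingBox is normalized with origin at the bottom-left;
        // CGImage.cropping(to:) expects pixels with origin at the top-left.
        let box = observation.boundingBox
        let rect = CGRect(x: box.minX * width,
                          y: (1 - box.maxY) * height,
                          width: box.width * width,
                          height: box.height * height)

        guard let cropped = cgImage.cropping(to: rect) else { continue }
        swiftOCR.recognize(UIImage(cgImage: cropped)) { recognizedString in
            print(recognizedString)
        }
    }
}
```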
OpenCV + Tesseract OCR
I tried OpenCV + Tesseract but got compile errors, then found SwiftOCR.
SEE ALSO : Google Vision iOS
Note: the Google Vision Text Recognition Android SDK has text detection, and there is also an iOS CocoaPod, so keep an eye on it; text recognition should eventually be added to the iOS version.
https://developers.google.com/vision/text-overview
Correction: I just tried it, but only the Android version of the SDK supports text detection.
If you subscribe to releases at https://libraries.io/cocoapods/GoogleMobileVision (click SUBSCRIBE TO RELEASES), you can see when TextDetection is added to the iOS part of the CocoaPod.