```sh
$ npm install rn-text-detector --save
# or
$ yarn add rn-text-detector
```
- In `<your_project>/ios`, create a Podfile: `pod init`
- Add the following to `ios/Podfile` (a complete Podfile sketch follows the iOS steps below):

```ruby
pod 'yoga', :path => '../node_modules/react-native/ReactCommon/yoga'
pod 'React', :path => '../node_modules/react-native'
pod 'RNTextDetector', :path => '../node_modules/rn-text-detector/ios'
```
- Run the following from the project's root directory:

```sh
pod update && pod install
```

- Use `<your_project>.xcworkspace` to run your app
- In Xcode, in the project navigator, right click `Libraries` ➜ `Add Files to [your project's name]`
- Go to `node_modules` ➜ `rn-text-detector` and add `RNTextDetector.xcodeproj`
- In Xcode, in the project navigator, select your project. Add `libRNTextDetector.a` to your project's `Build Phases` ➜ `Link Binary With Libraries`
- Run your project (`Cmd+R`)
- Open up `android/app/src/main/java/[...]/MainApplication.java`
  - Add `import com.fetchsky.RNTextDetector.RNTextDetectorPackage;` to the imports at the top of the file
  - Add `new RNTextDetectorPackage()` to the list returned by the `getPackages()` method (a sketch of the resulting file follows this list)
- Append the following lines to `android/settings.gradle`:

```gradle
include ':rn-text-detector'
project(':rn-text-detector').projectDir = new File(rootProject.projectDir, '../node_modules/rn-text-detector/android')
```
- Insert the following lines inside the `dependencies` block in `android/app/build.gradle`:

```gradle
dependencies {
    ...
    implementation project(':rn-text-detector')
}
```
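For orientation, `MainApplication.java` after the two edits above typically looks like the sketch below. Only the `RNTextDetectorPackage` import and registration come from these steps; the rest mirrors the standard (pre-autolinking) React Native template and may differ in your project.

```java
// MainApplication.java — sketch of where the package is registered.
// Only the two RNTextDetector lines are required by these steps; everything
// else follows the default React Native template and may differ in your app.
import android.app.Application;

import com.facebook.react.ReactApplication;
import com.facebook.react.ReactNativeHost;
import com.facebook.react.ReactPackage;
import com.facebook.react.shell.MainReactPackage;
import com.fetchsky.RNTextDetector.RNTextDetectorPackage; // <- added import

import java.util.Arrays;
import java.util.List;

public class MainApplication extends Application implements ReactApplication {

  private final ReactNativeHost mReactNativeHost = new ReactNativeHost(this) {
    @Override
    public boolean getUseDeveloperSupport() {
      return BuildConfig.DEBUG;
    }

    @Override
    protected List<ReactPackage> getPackages() {
      return Arrays.<ReactPackage>asList(
          new MainReactPackage(),
          new RNTextDetectorPackage() // <- registers the text detector native module
      );
    }
  };

  @Override
  public ReactNativeHost getReactNativeHost() {
    return mReactNativeHost;
  }
}
```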
Follow the ML Kit text-recognition documentation for the remaining native setup: https://developers.google.com/ml-kit/vision/text-recognition
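If you wire up ML Kit directly, its documentation (linked above) describes adding an on-device text-recognition dependency to `android/app/build.gradle`. The artifact and version below are taken from those docs at the time of writing and are shown only as an illustrative sketch, not as part of this library's required setup.

```gradle
dependencies {
    // Bundled on-device text recognition model; see the ML Kit docs linked above
    // for the unbundled (Google Play services) variant and the current version.
    implementation 'com.google.mlkit:text-recognition:16.0.0'
}
```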
```javascript
/**
 * This example uses react-native-camera to capture the image
 * that is passed to the text detector.
 */
import React, { PureComponent } from "react";
import RNTextDetector from "rn-text-detector";

export class TextDetectionComponent extends PureComponent {
  // ... render() with an <RNCamera ref={ref => (this.camera = ref)} /> omitted

  detectText = async () => {
    try {
      // Options forwarded to react-native-camera's takePictureAsync
      const options = {
        quality: 0.8,
        base64: true,
        skipProcessing: true,
      };
      // Capture a photo, then run text recognition on the saved image file
      const { uri } = await this.camera.takePictureAsync(options);
      const visionResp = await RNTextDetector.detectFromUri(uri);
      console.log('visionResp', visionResp);
    } catch (e) {
      console.warn(e);
    }
  };

  // ...
}
```
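The shape of `visionResp` is not shown in this section. Assuming it follows the library's usual output of an array of detected blocks, each with a `text` string and a `bounding` box, a caller might pull out the recognized strings as in the sketch below (a sketch under that assumption, not a documented API guarantee).

```javascript
// Sketch only: assumes each entry in visionResp looks like
// { text: string, bounding: { left, top, width, height } }.
const logDetectedText = (visionResp) => {
  const detectedStrings = visionResp.map((block) => block.text);
  console.log('Detected text:', detectedStrings.join('\n'));
};
```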