This project presents an AI-based Android application that can recognize the number of roses in an image supplied by the user, taken either with the phone's camera or chosen from the gallery.
- Android Studio Arctic Fox 2020.3.1 Patch 3 or greater (for cloning and editing the app)
- Android 6.0 Marshmallow or greater (for installing and testing the app on an Android phone)
| Home Screen | Test Images | Gallery Permissions | Gallery Functionality |
|---|---|---|---|

| Inference with Gallery Image | Camera Permissions | Camera Functionality | Inference with Camera Image |
|---|---|---|---|
- Clone or download this repository: `git clone https://github.com/StadynR/object-detection-soft-eng`
- Open Android Studio
- Click on File -> New -> Import Project
- If this is your first time using Android Studio, click Open an existing Android Studio project instead.
- Search for the `object-detection-soft-eng` directory and select it.
- If you get a Gradle Sync popup, click OK and wait for it to sync.
- Configure your phone to connect to Android Studio and build the app. See https://developer.android.com/studio/run/device for details.
- Connect your phone to your computer (with a USB cable or through Wi-Fi, depending on how you configured your phone).
- If everything was done correctly, you will see your phone's model name in the device selector near the upper-right corner.
- You can then install and run the app by selecting Run -> Run 'app', or by clicking the green play button to the right of your phone's model name. Don't disconnect your phone until the app has finished loading.
- Download the latest MDT.Rose.Counter.apk from the Releases page.
- Copy the apk to your phone.
- On your phone, find the apk and install it. If your phone asks for confirmation or permission to install the apk, accept.
If you want to test the app using your own TFLite model, you just need to replace the `model.tflite` file in `object-detection-soft-eng/app/src/main/assets`. If you need to change the detection threshold for the model, do it in line 182 of `app/java/org.tensorflow.codelabs.objectdetection/MainActivity.kt` in Android Studio, inside `.setScoreThreshold()`. You can then compile/run the app again in Android Studio, or generate a new apk by clicking Build -> Build Bundle(s)/APK(s) -> Build APK(s). The apk will be stored in `object-detection-soft-eng/app/build/outputs/apk/debug/` as `app-debug.apk`.
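The score threshold simply discards detections whose confidence falls below it, so raising the value yields fewer but more confident rose detections. A plain-Kotlin sketch of that filtering logic (the `Detection` class here is a hypothetical stand-in for the detector's result type, not the actual API used in `MainActivity.kt`):

```kotlin
// Hypothetical stand-in for a detector's result: a label plus a confidence score.
data class Detection(val label: String, val score: Float)

// Keep only detections at or above the threshold,
// which is the effect of setScoreThreshold() on the model's raw output.
fun filterByThreshold(detections: List<Detection>, threshold: Float): List<Detection> =
    detections.filter { it.score >= threshold }

fun main() {
    val results = listOf(
        Detection("rose", 0.92f),
        Detection("rose", 0.41f),
        Detection("rose", 0.65f),
    )
    // With a 0.5f threshold, only the 0.92 and 0.65 detections survive.
    println(filterByThreshold(results, 0.5f).size)  // prints 2
}
```

A lower threshold (e.g. 0.2f) counts more borderline candidates as roses at the cost of false positives; tune it against your own model's typical confidence scores.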
Make sure the app is already installed on your phone.
- Find the application on your cell phone and open it.
- Tap Take Photo to take a photo with your phone's camera, or Gallery to choose a photo from your phone's gallery. The first time only, grant the permissions to access the gallery or camera, then tap the corresponding button again. Alternatively, you can tap the sample photos above the buttons to select them.
- Inference runs automatically on the image, and the result is shown in the center of the screen.
- You can then save the image using the Save Result button, or start a new inference by repeating step 2. Photos saved from the app are stored in the Pictures/Inferences folder in your phone's internal storage.
Bugs can be reported in the issue tracker on our GitHub repo: https://github.com/StadynR/object-detection-soft-eng/issues/
- Stadyn Román - stadyn.roman@yachaytech.edu.ec - LinkedIn
- Argenis Andrade - argenis.andrade@yachaytech.edu.ec - LinkedIn
- Jefferson Chipantasig - jefferson.chipantasi@yachaytech.edu.ec - LinkedIn
- Jaime Astudillo - jaime.astudillo@yachaytech.edu.ec - LinkedIn
This project is licensed under the GPLv3 License. See the LICENSE file for the full license information.