Merge pull request #1478 from pareenaverma/content_review

Android Selfie LP review

pareenaverma authored Dec 23, 2024
2 parents b2a5fdf + 72039b8 commit c7657f5

Showing 8 changed files with 78 additions and 100 deletions.
@@ -1,5 +1,5 @@
---
-title: Scaffold a new Android project
+title: Create a new Android project
weight: 2

### FIXED, DO NOT MODIFY
@@ -12,7 +12,7 @@ This learning path will teach you to architect an app following [modern Android

Download and install the latest version of [Android Studio](https://developer.android.com/studio/) on your host machine.

-This learning path's instructions and screenshots are taken on macOS with Apple Silicon, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
+The instructions for this learning path were tested on an Apple Silicon host machine running macOS, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).

Upon first installation, open Android Studio and proceed with the default or recommended settings. Accept license agreements and let Android Studio download all the required assets.

@@ -26,12 +26,12 @@ Before you proceed to coding, here are some tips that might come in handy:

## Create a new Android project

-1. Navigate to **File > New > New Project...**.
+1. Navigate to File > New > New Project....

-2. Select **Empty Views Activity** in **Phone and Tablet** galary as shown below, then click **Next**.
+2. Select Empty Views Activity in the Phone and Tablet gallery as shown below, then click Next.
![Empty Views Activity](images/2/empty%20project.png)

-3. Proceed with a cool project name and default configurations as shown below. Make sure that **Language** is set to **Kotlin**, and that **Build configuration language** is set to **Kotlin DSL**.
+3. Enter a project name and use the default configurations as shown below. Make sure that Language is set to Kotlin, and that Build configuration language is set to Kotlin DSL.
![Project configuration](images/2/project%20config.png)

### Introduce CameraX dependencies
@@ -194,4 +194,4 @@ private fun bindCameraUseCases() {
}
```
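The body of `bindCameraUseCases()` is collapsed in this diff. For orientation, a typical CameraX preview binding is sketched below; the `viewBinding.viewFinder` property and the front-camera selector are assumptions rather than this app's exact code.

```kotlin
// Sketch only, inside MainActivity (AppCompatActivity). Assumed imports:
// androidx.camera.core.CameraSelector, androidx.camera.core.Preview,
// androidx.camera.lifecycle.ProcessCameraProvider, androidx.core.content.ContextCompat
private fun bindCameraUseCases() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()
        // Render camera frames into the PreviewView from the layout (name assumed).
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(viewBinding.viewFinder.surfaceProvider)
        }
        // A selfie app typically uses the front camera.
        val selector = CameraSelector.DEFAULT_FRONT_CAMERA
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(this, selector, preview)
    }, ContextCompat.getMainExecutor(this))
}
```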

-In the next chapter, we will build and run the app to make sure the camera works well.
+In the next section, you will build and run the app to make sure the camera works well.
@@ -1,5 +1,5 @@
---
-title: Handle camera permission
+title: Handle camera permissions
weight: 3

### FIXED, DO NOT MODIFY
@@ -8,18 +8,18 @@ layout: learningpathall

## Run the app on your device

-1. Connect your Android device to your computer via a USB **data** cable. If this is your first time running and debugging Android apps, follow [this guide](https://developer.android.com/studio/run/device#setting-up) and double check this checklist:
+1. Connect your Android device to your computer via a USB data cable. If this is your first time running and debugging Android apps, follow [this guide](https://developer.android.com/studio/run/device#setting-up) and double check this checklist:

-1. You have enabled **USB debugging** on your Android device following [this doc](https://developer.android.com/studio/debug/dev-options#Enable-debugging).
+1. You have enabled USB debugging on your Android device following [this doc](https://developer.android.com/studio/debug/dev-options#Enable-debugging).

-2. You have confirmed by tapping "OK" on your Android device when an **"Allow USB debugging"** dialog pops up, and checked "Always allow from this computer".
+2. You have confirmed by tapping "OK" on your Android device when an "Allow USB debugging" dialog pops up, and checked "Always allow from this computer".

![Allow USB debugging dialog](https://ftc-docs.firstinspires.org/en/latest/_images/AllowUSBDebugging.jpg)


-2. Make sure your device model name and SDK version correctly show up on the top right toolbar. Click the **"Run"** button to build and run, as described [here](https://developer.android.com/studio/run).
+2. Make sure your device model name and SDK version correctly show up on the top right toolbar. Click the "Run" button to build and run the app.

-3. After waiting for a while, you should be seeing success notification in Android Studio and the app showing up on your Android device.
+3. After a while, you should see a success notification in Android Studio and the app showing up on your Android device.

4. However, the app shows only a black screen and prints error messages in your [Logcat](https://developer.android.com/tools/logcat) that look like this:

@@ -30,11 +30,11 @@ layout: learningpathall
2024-11-20 11:43:03.408 2709-3807 PackageManager pid-2709 E Permission android.permission.CAMERA isn't requested by package com.example.holisticselfiedemo
```

-5. Worry not. This is expected behavior because we haven't correctly configured this app's [permissions](https://developer.android.com/guide/topics/permissions/overview) yet, therefore Android OS restricts this app's access to camera features due to privacy reasons.
+5. Do not worry. This is expected behavior because you haven't correctly configured this app's [permissions](https://developer.android.com/guide/topics/permissions/overview) yet. Android OS restricts this app's access to camera features for privacy reasons.

## Request camera permission at runtime

-1. Navigate to `manifest.xml` in your `app` subproject's `src/main` path. Declare camera hardware and permission by inserting the following lines into the `<manifest>` element. Make sure it's **outside** and **above** `<application>` element.
+1. Navigate to `manifest.xml` in your `app` subproject's `src/main` path. Declare camera hardware and permission by inserting the following lines into the `<manifest>` element. Make sure they are declared outside and above the `<application>` element.

```xml
<!-- Reconstructed from the collapsed hunk; attribute values are assumptions. -->
<uses-feature
    android:name="android.hardware.camera"
    android:required="true" />
<uses-permission android:name="android.permission.CAMERA" />
```

@@ -107,12 +107,12 @@ layout: learningpathall
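The hunk that performs the runtime request is collapsed above. As a rough sketch of the standard approach with the Activity Result API, inside `MainActivity` (`startCamera()` is a placeholder, not this app's exact code):

```kotlin
// Sketch only: runtime permission request via the Activity Result API.
// Assumed imports: android.Manifest, android.content.pm.PackageManager,
// androidx.activity.result.contract.ActivityResultContracts,
// androidx.core.content.ContextCompat
private val cameraPermissionLauncher =
    registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
        if (granted) startCamera() // placeholder for binding the camera use cases
    }

private fun ensureCameraPermission() {
    val granted = ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
        PackageManager.PERMISSION_GRANTED
    if (granted) startCamera() else cameraPermissionLauncher.launch(Manifest.permission.CAMERA)
}
```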

## Verify camera permission

-1. Rebuild and run the app. Now you should be seeing a dialog pops up requesting camera permissions!
+1. Rebuild and run the app. Now you should see a dialog pop up requesting camera permissions!

-2. Tap `Allow` or `While using the app` (depending on your Android OS versions), then you should be seeing your own face in the camera preview. Good job!
+2. Tap `Allow` or `While using the app` (depending on your Android OS version). Then you should see your own face in the camera preview. Good job!

{{% notice Tip %}}
Sometimes you might need to restart the app to observe the permission change take effect.
{{% /notice %}}

-In the next chapter, we will introduce MediaPipe vision solutions.
+In the next section, you will learn how to integrate MediaPipe vision solutions.
@@ -8,9 +8,9 @@ layout: learningpathall

[MediaPipe Solutions](https://ai.google.dev/edge/mediapipe/solutions/guide) provides a suite of libraries and tools for you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques in your applications.

-MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web / JavaScript, Python, etc.
+MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web, JavaScript, Python, etc.

-## Introduce MediaPipe dependencies
+## Add MediaPipe dependencies

1. Navigate to `libs.versions.toml` and append the following line to the end of the `[versions]` section. This defines the version of the MediaPipe library to use.

@@ -19,57 +19,57 @@

```toml
mediapipe-vision = "0.10.15"
```

{{% notice Note %}}
-Please stick with this version and do not use newer versions due to bugs and unexpected behaviors.
+Please use this version and do not use newer versions, as newer versions introduce bugs and unexpected behavior.
{{% /notice %}}

-2. Append the following lines to the end of `[libraries]` section. This declares MediaPipe's vision dependency.
+2. Append the following lines to the end of the `[libraries]` section. This declares MediaPipe's vision dependency:

```toml
mediapipe-vision = { group = "com.google.mediapipe", name = "tasks-vision", version.ref = "mediapipe-vision" }
```

-3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into `dependencies` block, ideally between `implementation` and `testImplementation`.
+3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into the `dependencies` block, between `implementation` and `testImplementation`.

```kotlin
implementation(libs.mediapipe.vision)
```

## Prepare model asset bundles

-In this app, we will be using MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which requires their model asset bundle files to initialize.
+In this app, you will use MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which require their model asset bundle files to initialize.

Choose one of the two options below that aligns best with your learning needs.

-### Basic approach: manual downloading
+### Basic approach: manual download

-Simply download the following two files, then move them into the default asset directory: `app/src/main/assets`.
+Download the following two files, then move them into the default asset directory: `app/src/main/assets`.

-```
+```console
https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task

https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task
```

{{% notice Tip %}}
-You might need to create the `assets` directory if not exist.
+You might need to create the `assets` directory if it does not exist.
{{% /notice %}}

### Advanced approach: configure prebuild download tasks

-Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, therefore we will introduce [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.
+Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, so you will use the [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency.

-1. Again, navigate to `libs.versions.toml`. Append `download = "5.6.0"` to `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to `[plugins]` section.
+1. Navigate to `libs.versions.toml`. Append `download = "5.6.0"` to the `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to the `[plugins]` section.

-2. Again, navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the aforementioned _Download_ task plugin in this `app` subproject.
+2. Navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the _Download_ task plugin in this `app` subproject.

-4. Insert the following lines between `plugins` block and `android` block to define the constant values, including: asset directory path and the URLs for both models.
+3. Insert the following lines between the `plugins` block and the `android` block to define the constant values: the asset directory path and the URLs for both models.
```kotlin
val assetDir = "$projectDir/src/main/assets"
val gestureTaskUrl = "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task"
val faceTaskUrl = "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task"
```

-5. Insert `import de.undercouch.gradle.tasks.download.Download` into **the top of this file**, then append the following code to **the end of this file**, which hooks two _Download_ tasks to be executed before `preBuild`:
+4. Insert `import de.undercouch.gradle.tasks.download.Download` at the top of this file, then append the following code to the end of this file, which hooks two _Download_ tasks to be executed before `preBuild`:

```kotlin
tasks.register<Download>("downloadGestureTaskAsset") {
```

@@ -97,11 +97,11 @@ tasks.named("preBuild") {
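Most of this hunk is collapsed. The two download tasks and the `preBuild` hook typically look like the sketch below; `src`, `dest`, and `overwrite` are gradle-download-task's standard properties, while the destination file names are assumptions matching the model URLs.

```kotlin
import de.undercouch.gradle.tasks.download.Download

tasks.register<Download>("downloadGestureTaskAsset") {
    src(gestureTaskUrl)                       // URL constant defined above
    dest("$assetDir/gesture_recognizer.task") // assumed file name
    overwrite(false)                          // skip the download if the file exists
}

tasks.register<Download>("downloadFaceTaskAsset") {
    src(faceTaskUrl)
    dest("$assetDir/face_landmarker.task")
    overwrite(false)
}

// Run both downloads before the Android preBuild step.
tasks.named("preBuild") {
    dependsOn("downloadGestureTaskAsset", "downloadFaceTaskAsset")
}
```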
Refer to [this section](2-app-scaffolding.md#enable-view-binding) if you need help.
{{% /notice %}}

-2. Now you should be seeing both model asset bundles in your `assets` directory, as shown below:
+2. Now you should see both model asset bundles in your `assets` directory, as shown below:

![model asset bundles](images/4/model%20asset%20bundles.png)

-3. Now you are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. Actually, we have already implemented the code below for you based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Simply create a new file `HolisticRecognizerHelper.kt` placed in the source directory along with `MainActivity.kt`, then copy paste the code below into it.
+3. You are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. The code below is already implemented for you, based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Create a new file `HolisticRecognizerHelper.kt` in the same source directory as `MainActivity.kt`, then copy and paste the code below into it.

```kotlin
package com.example.holisticselfiedemo
```

@@ -426,9 +426,9 @@ data class GestureResultBundle(
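The helper file is mostly collapsed in this view. Its core job is constructing the two MediaPipe tasks from the model asset bundles; the sketch below shows the face-landmarker half. The builder methods follow the `tasks-vision` API, while the listener bodies and option values here are assumptions, not this app's exact code.

```kotlin
// Sketch only: constructing a FaceLandmarker for live camera frames.
import android.content.Context
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.core.RunningMode
import com.google.mediapipe.tasks.vision.facelandmarker.FaceLandmarker

fun createFaceLandmarker(context: Context): FaceLandmarker {
    val baseOptions = BaseOptions.builder()
        .setModelAssetPath("face_landmarker.task") // bundled in app/src/main/assets
        .build()
    val options = FaceLandmarker.FaceLandmarkerOptions.builder()
        .setBaseOptions(baseOptions)
        .setRunningMode(RunningMode.LIVE_STREAM)   // results arrive via the listener
        .setNumFaces(1)                            // FACES_COUNT in the real helper
        .setResultListener { result, _ -> /* forward result to the UI layer */ }
        .setErrorListener { e -> /* log or surface the error */ }
        .build()
    return FaceLandmarker.createFromOptions(context, options)
}
```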

{{% notice Info %}}
-In this learning path we are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
+In this learning path, you are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.

-If you'd like to experiment with more people, simply change the `FACES_COUNT` constant to be your desired value.
+If you'd like to experiment with more people, change the `FACES_COUNT` constant to your desired value.
{{% /notice %}}

-In the next chapter, we will connect the dots from this helper class to the UI layer via a ViewModel.
+In the next section, you will connect the dots from this helper class to the UI layer via a ViewModel.
@@ -8,7 +8,7 @@ layout: learningpathall

[SharedFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#sharedflow) and [StateFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#stateflow) are [Kotlin Flow](https://developer.android.com/kotlin/flow) APIs that enable Flows to optimally emit state updates and emit values to multiple consumers.

-In this learning path, you will have the opportunity to experiment with both `SharedFlow` and `StateFlow`. This chapter will focus on SharedFlow while the next chapter will focus on StateFlow.
+In this learning path, you will experiment with both `SharedFlow` and `StateFlow`. This section focuses on SharedFlow, while the next section focuses on StateFlow.

`SharedFlow` is a general-purpose, hot flow that can emit values to multiple subscribers. It is highly configurable, allowing you to set the replay cache size, buffer capacity, etc.
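A minimal sketch of the pattern, with illustrative names (`UiEvent`, `RecognitionViewModel`) rather than this app's exact types:

```kotlin
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.asSharedFlow

sealed interface UiEvent {                  // hypothetical event type
    data object FaceOk : UiEvent
    data object GestureOk : UiEvent
}

class RecognitionViewModel {                // illustrative name
    // replay = 1 re-delivers the latest event to late subscribers,
    // e.g. an Activity that re-subscribes after a configuration change.
    private val _uiEvents = MutableSharedFlow<UiEvent>(replay = 1)
    val uiEvents = _uiEvents.asSharedFlow() // read-only view for the UI

    suspend fun onFaceDetected() {
        _uiEvents.emit(UiEvent.FaceOk)      // suspends if the buffer is full
    }
}
```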

@@ -54,9 +54,9 @@ This `SharedFlow` is initialized with a replay size of `1`. This retains the mos

## Visualize face and gesture results

-To visualize the results of Face Landmark Detection and Gesture Recognition tasks, we have prepared the following code for you based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).
+To visualize the results of the Face Landmark Detection and Gesture Recognition tasks, follow the instructions in this section, which are based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).

-1. Create a new file named `FaceLandmarkerOverlayView.kt` and fill in the content below:
+1. Create a new file named `FaceLandmarkerOverlayView.kt` and copy the content below:

```kotlin
/*
```

@@ -180,7 +180,7 @@ class FaceLandmarkerOverlayView(context: Context?, attrs: AttributeSet?) :
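The file's contents are collapsed above. Conceptually, the overlay is a custom `View` that stores the latest result and redraws itself; the simplified sketch below illustrates the idea. The class name, paint values, and scaling are assumptions; MediaPipe's sample handles image scaling and rotation more carefully.

```kotlin
// Sketch only: draw normalized face landmarks on top of the camera preview.
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.util.AttributeSet
import android.view.View
import com.google.mediapipe.tasks.vision.facelandmarker.FaceLandmarkerResult

class SimpleFaceOverlayView(context: Context?, attrs: AttributeSet?) : View(context, attrs) {
    private var result: FaceLandmarkerResult? = null
    private val pointPaint = Paint().apply {
        color = Color.YELLOW
        strokeWidth = 8f
        strokeCap = Paint.Cap.ROUND
    }

    fun setResults(faceLandmarkerResult: FaceLandmarkerResult) {
        result = faceLandmarkerResult
        invalidate() // schedule a redraw with the new landmarks
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        // Landmarks are normalized to [0, 1]; scale them to the view size.
        result?.faceLandmarks()?.forEach { face ->
            face.forEach { lm ->
                canvas.drawPoint(lm.x() * width, lm.y() * height, pointPaint)
            }
        }
    }
}
```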


-2. Create a new file named `GestureOverlayView.kt` and fill in the content below:
+2. Create a new file named `GestureOverlayView.kt` and copy the content below:

```kotlin
/*
```

@@ -302,7 +302,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) :

## Update UI in the view controller

-1. Add the above two overlay views to `activity_main.xml` layout file:
+1. Add the two overlay views to the `activity_main.xml` layout file:

```xml
<com.example.holisticselfiedemo.FaceLandmarkerOverlayView
```

@@ -316,7 +316,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) :

```xml
android:layout_height="match_parent" />
```

-2. Collect the new SharedFlow `uiEvents` in `MainActivity` by appending the code below to the end of `onCreate` method, **below** `setupCamera()` method call.
+2. Collect the new SharedFlow `uiEvents` in `MainActivity` by appending the code below to the end of the `onCreate` method, below the `setupCamera()` method call.

```kotlin
lifecycleScope.launch {
    // … (remainder collapsed in this diff view)
}
```

@@ -363,7 +363,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) :
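The middle of the block above is collapsed. Assuming a `viewModel.uiEvents` flow and the AndroidX lifecycle APIs, a collector of this kind usually takes the following shape:

```kotlin
// Sketch only: collect UI events while the Activity is at least STARTED.
// Assumed imports: androidx.lifecycle.Lifecycle, androidx.lifecycle.lifecycleScope,
// androidx.lifecycle.repeatOnLifecycle, kotlinx.coroutines.launch
lifecycleScope.launch {
    repeatOnLifecycle(Lifecycle.State.STARTED) {
        viewModel.uiEvents.collect { event ->
            // e.g. update FaceLandmarkerOverlayView / GestureOverlayView here
        }
    }
}
```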

-4. Build and run the app again. Now you should be seeing face and gesture overlays on top of the camera preview as shown below. Good job!
+4. Build and run the app again. Now you should see face and gesture overlays on top of the camera preview as shown below. Good job!

![overlay views](images/6/overlay%20views.png)
