From 8147bfb055435ac2b281e3b889c384bc87597f5c Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Mon, 23 Dec 2024 18:12:43 +0000
Subject: [PATCH 1/7] Selfie Android LP review

---
 .../2-app-scaffolding.md | 2 +-
 .../3-camera-permission.md | 118 ------------------
 .../4-introduce-mediapipe.md | 40 +++---
 .../_index.md | 6 +-
 4 files changed, 24 insertions(+), 142 deletions(-)
 delete mode 100644 content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md

diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md
index f64377100..02dd95ffc 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md
@@ -12,7 +12,7 @@ This learning path will teach you to architect an app following [modern Android
 
 Download and install the latest version of [Android Studio](https://developer.android.com/studio/) on your host machine.
 
-This learning path's instructions and screenshots are taken on macOS with Apple Silicon, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
+The instructions for this learning path were tested on an Apple Silicon host machine running macOS, but you may choose any of the supported hardware systems as described [here](https://developer.android.com/studio/install).
 
 Upon first installation, open Android Studio and proceed with the default or recommended settings. Accept license agreements and let Android Studio download all the required assets. 
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md deleted file mode 100644 index 121436976..000000000 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: Handle camera permission -weight: 3 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -## Run the app on your device - -1. Connect your Android device to your computer via a USB **data** cable. If this is your first time running and debugging Android apps, follow [this guide](https://developer.android.com/studio/run/device#setting-up) and double check this checklist: - - 1. You have enabled **USB debugging** on your Android device following [this doc](https://developer.android.com/studio/debug/dev-options#Enable-debugging). - - 2. You have confirmed by tapping "OK" on your Android device when an **"Allow USB debugging"** dialog pops up, and checked "Always allow from this computer". - - ![Allow USB debugging dialog](https://ftc-docs.firstinspires.org/en/latest/_images/AllowUSBDebugging.jpg) - - -2. Make sure your device model name and SDK version correctly show up on the top right toolbar. Click the **"Run"** button to build and run, as described [here](https://developer.android.com/studio/run). - -3. After waiting for a while, you should be seeing success notification in Android Studio and the app showing up on your Android device. - -4. However, the app shows only a black screen while printing error messages in your [Logcat](https://developer.android.com/tools/logcat) which looks like this: - -``` -2024-11-20 11:15:00.398 18782-18818 Camera2CameraImpl com.example.holisticselfiedemo E Camera reopening attempted for 10000ms without success. 
-2024-11-20 11:30:13.560 667-707 BufferQueueProducer pid-667 E [SurfaceView - com.example.holisticselfiedemo/com.example.holisticselfiedemo.MainActivity#0](id:29b00000283,api:4,p:2657,c:667) queueBuffer: BufferQueue has been abandoned -2024-11-20 11:36:13.100 20487-20499 isticselfiedem com.example.holisticselfiedemo E Failed to read message from agent control socket! Retrying: Bad file descriptor -2024-11-20 11:43:03.408 2709-3807 PackageManager pid-2709 E Permission android.permission.CAMERA isn't requested by package com.example.holisticselfiedemo -``` - -5. Worry not. This is expected behavior because we haven't correctly configured this app's [permissions](https://developer.android.com/guide/topics/permissions/overview) yet, therefore Android OS restricts this app's access to camera features due to privacy reasons. - -## Request camera permission at runtime - -1. Navigate to `manifest.xml` in your `app` subproject's `src/main` path. Declare camera hardware and permission by inserting the following lines into the `` element. Make sure it's **outside** and **above** `` element. - -```xml - - -``` - -2. Navigate to `strings.xml` in your `app` subproject's `src/main/res/values` path. Insert the following lines of text resources, which will be used later. - -```xml - Camera permission is required to recognize face and hands - To grant Camera permission to this app, please go to system settings -``` - -3. Navigate to `MainActivity.kt` and add the following permission related values to companion object: - -```kotlin - // Permissions - private val PERMISSIONS_REQUIRED = arrayOf(Manifest.permission.CAMERA) - private const val REQUEST_CODE_CAMERA_PERMISSION = 233 -``` - -4. Add a new method named `hasPermissions()` to check on runtime whether camera permission has been granted: - -```kotlin - private fun hasPermissions(context: Context) = PERMISSIONS_REQUIRED.all { - ContextCompat.checkSelfPermission(context, it) == PackageManager.PERMISSION_GRANTED - } -``` - -5. 
Add a condition check in `onCreate()` wrapping `setupCamera()` method, to request camera permission on runtime. - -```kotlin - if (!hasPermissions(baseContext)) { - requestPermissions( - arrayOf(Manifest.permission.CAMERA), - REQUEST_CODE_CAMERA_PERMISSION - ) - } else { - setupCamera() - } -``` - -6. Override `onRequestPermissionsResult` method to handle permission request results: - -```kotlin - override fun onRequestPermissionsResult( - requestCode: Int, - permissions: Array, - grantResults: IntArray - ) { - when (requestCode) { - REQUEST_CODE_CAMERA_PERMISSION -> { - if (PackageManager.PERMISSION_GRANTED == grantResults.getOrNull(0)) { - setupCamera() - } else { - val messageResId = - if (shouldShowRequestPermissionRationale(Manifest.permission.CAMERA)) - R.string.permission_request_camera_rationale - else - R.string.permission_request_camera_message - Toast.makeText(baseContext, getString(messageResId), Toast.LENGTH_LONG).show() - } - } - else -> super.onRequestPermissionsResult(requestCode, permissions, grantResults) - } - } -``` - -## Verify camera permission - -1. Rebuild and run the app. Now you should be seeing a dialog pops up requesting camera permissions! - -2. Tap `Allow` or `While using the app` (depending on your Android OS versions), then you should be seeing your own face in the camera preview. Good job! - -{{% notice Tip %}} -Sometimes you might need to restart the app to observe the permission change take effect. -{{% /notice %}} - -In the next chapter, we will introduce MediaPipe vision solutions. 
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md
index 0c743ef94..c5f14c073 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/4-introduce-mediapipe.md
@@ -8,9 +8,9 @@ layout: learningpathall
 
 [MediaPipe Solutions](https://ai.google.dev/edge/mediapipe/solutions/guide) provides a suite of libraries and tools for you to quickly apply artificial intelligence (AI) and machine learning (ML) techniques in your applications.
 
-MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web / JavaScript, Python, etc.
+MediaPipe Tasks provides the core programming interface of the MediaPipe Solutions suite, including a set of libraries for deploying innovative ML solutions onto devices with a minimum of code. It supports multiple platforms, including Android, Web (JavaScript), and Python.
 
-## Introduce MediaPipe dependencies
+## Add MediaPipe dependencies
 
 1. Navigate to `libs.versions.toml` and append the following line to the end of `[versions]` section. This defines the version of MediaPipe library we will be using.
 
```toml
mediapipe-vision = "0.10.15"
```
 
{{% notice Note %}}
-Please stick with this version and do not use newer versions due to bugs and unexpected behaviors.
+Please use this version and do not use newer versions, as newer versions introduce bugs and unexpected behavior.
{{% /notice %}}
 
-2. Append the following lines to the end of `[libraries]` section. This declares MediaPipe's vision dependency.
+2. Append the following lines to the end of `[libraries]` section. This declares MediaPipe's vision dependency:
 
```toml
mediapipe-vision = { group = "com.google.mediapipe", name = "tasks-vision", version.ref = "mediapipe-vision" }
```
 
-3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into `dependencies` block, ideally between `implementation` and `testImplementation`.
+3. Navigate to `build.gradle.kts` in your project's `app` directory, then insert the following line into `dependencies` block, between `implementation` and `testImplementation`.
 
```kotlin
implementation(libs.mediapipe.vision)
@@ -36,40 +36,40 @@ implementation(libs.mediapipe.vision)
 
 ## Prepare model asset bundles
 
-In this app, we will be using MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which requires their model asset bundle files to initialize.
+In this app, you will use MediaPipe's [Face Landmark Detection](https://ai.google.dev/edge/mediapipe/solutions/vision/face_landmarker) and [Gesture Recognizer](https://ai.google.dev/edge/mediapipe/solutions/vision/gesture_recognizer) solutions, which require their model asset bundle files to initialize.
 
 Choose one of the two options below that aligns best with your learning needs.
 
-### Basic approach: manual downloading
+### Basic approach: manual download
 
-Simply download the following two files, then move them into the default asset directory: `app/src/main/assets`.
+Download the following two files, then move them into the default asset directory: `app/src/main/assets`. 
-``` +```console https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task ``` {{% notice Tip %}} -You might need to create the `assets` directory if not exist. +You might need to create the `assets` directory if it does not exist. {{% /notice %}} ### Advanced approach: configure prebuild download tasks -Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, therefore we will introduce [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency. +Gradle doesn't come with a convenient [Task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html) type to manage downloads, so you will use the [gradle-download-task](https://github.com/michel-kraemer/gradle-download-task) dependency. -1. Again, navigate to `libs.versions.toml`. Append `download = "5.6.0"` to `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to `[plugins]` section. +1. Navigate to `libs.versions.toml`. Append `download = "5.6.0"` to `[versions]` section, and `de-undercouch-download = { id = "de.undercouch.download", version.ref = "download" }` to `[plugins]` section. -2. Again, navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the aforementioned _Download_ task plugin in this `app` subproject. +2. Navigate to `build.gradle.kts` in your project's `app` directory and append `alias(libs.plugins.de.undercouch.download)` to the `plugins` block. This enables the _Download_ task plugin in this `app` subproject. -4. 
Insert the following lines between `plugins` block and `android` block to define the constant values, including: asset directory path and the URLs for both models. +3. Insert the following lines between `plugins` block and `android` block to define the constant values, including: asset directory path and the URLs for both models. ```kotlin val assetDir = "$projectDir/src/main/assets" val gestureTaskUrl = "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task" val faceTaskUrl = "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task" ``` -5. Insert `import de.undercouch.gradle.tasks.download.Download` into **the top of this file**, then append the following code to **the end of this file**, which hooks two _Download_ tasks to be executed before `preBuild`: +4. Insert `import de.undercouch.gradle.tasks.download.Download` to the top of this file, then append the following code to the end of this file, which hooks two _Download_ tasks to be executed before `preBuild`: ```kotlin tasks.register("downloadGestureTaskAsset") { @@ -97,11 +97,11 @@ tasks.named("preBuild") { Refer to [this section](2-app-scaffolding.md#enable-view-binding) if you need help. {{% /notice %}} -2. Now you should be seeing both model asset bundles in your `assets` directory, as shown below: +2. Now you should see both model asset bundles in your `assets` directory, as shown below: ![model asset bundles](images/4/model%20asset%20bundles.png) -3. Now you are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. Actually, we have already implemented the code below for you based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples). Simply create a new file `HolisticRecognizerHelper.kt` placed in the source directory along with `MainActivity.kt`, then copy paste the code below into it. +3. 
You are ready to import MediaPipe's Face Landmark Detection and Gesture Recognizer into the project. Example code, based on [MediaPipe's sample code](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples), is already implemented for you. Create a new file `HolisticRecognizerHelper.kt` in the same source directory as `MainActivity.kt`, then copy and paste the code below into it.
 
```kotlin
package com.example.holisticselfiedemo
@@ -426,9 +426,9 @@ data class GestureResultBundle(
```
 
{{% notice Info %}}
-In this learning path we are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
+In this learning path you are only configuring the MediaPipe vision solutions to recognize one person with at most two hands in the camera.
 
-If you'd like to experiment with more people, simply change the `FACES_COUNT` constant to be your desired value.
+If you'd like to experiment with more people, change the `FACES_COUNT` constant to your desired value.
{{% /notice %}}
 
-In the next chapter, we will connect the dots from this helper class to the UI layer via a ViewModel.
+In the next section, you will connect the dots from this helper class to the UI layer via a ViewModel. 
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md index 19dd54f7b..40dfdf129 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md @@ -28,9 +28,9 @@ author_primary: Han Yin skilllevels: Beginner subjects: ML armips: - - ARM Cortex-A - - ARM Cortex-X - - ARM Mali GPU + - Cortex-A + - Cortex-X + - Mali GPU tools_software_languages: - mobile - Android Studio From 57808b142967c218ab5cc0abc68641451022c7d8 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Mon, 23 Dec 2024 20:01:28 +0000 Subject: [PATCH 2/7] Android Selfie App LP review --- .../6-flow-data-to-view-1.md | 14 +++---- .../7-flow-data-to-view-2.md | 12 +++--- .../8-mediate-flows.md | 39 ++++--------------- .../9-avoid-redundant-requests.md | 12 +++--- 4 files changed, 27 insertions(+), 50 deletions(-) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md index 76029594e..f1e0c7daf 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/6-flow-data-to-view-1.md @@ -8,7 +8,7 @@ layout: learningpathall [SharedFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#sharedflow) and [StateFlow](https://developer.android.com/kotlin/flow/stateflow-and-sharedflow#stateflow) are [Kotlin Flow](https://developer.android.com/kotlin/flow) APIs 
that enable Flows to optimally emit state updates and emit values to multiple consumers.
 
-In this learning path, you will have the opportunity to experiment with both `SharedFlow` and `StateFlow`. This chapter will focus on SharedFlow while the next chapter will focus on StateFlow.
+In this learning path, you will experiment with both `SharedFlow` and `StateFlow`. This section will focus on SharedFlow while the next section will focus on StateFlow.
 
`SharedFlow` is a general-purpose, hot flow that can emit values to multiple subscribers. It is highly configurable, allowing you to set the replay cache size, buffer capacity, etc.
@@ -54,9 +54,9 @@ This `SharedFlow` is initialized with a replay size of `1`. This retains the mos
 
## Visualize face and gesture results
 
-To visualize the results of Face Landmark Detection and Gesture Recognition tasks, we have prepared the following code for you based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).
+To visualize the results of Face Landmark Detection and Gesture Recognition tasks, follow the instructions in this section, which are based on [MediaPipe's samples](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples).
 
-1. Create a new file named `FaceLandmarkerOverlayView.kt` and fill in the content below:
+1. Create a new file named `FaceLandmarkerOverlayView.kt` and copy the content below:
 
```kotlin
/*
...
@@ -180,7 +180,7 @@ class FaceLandmarkerOverlayView(context: Context?, attrs: AttributeSet?) :
```
 
-2. Create a new file named `GestureOverlayView.kt` and fill in the content below:
+2. Create a new file named `GestureOverlayView.kt` and copy the content below:
 
```kotlin
/*
...
@@ -302,7 +302,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) :
 
## Update UI in the view controller
 
-1. Add the above two overlay views to `activity_main.xml` layout file:
+1. Add the two overlay views to `activity_main.xml` layout file:
 
```xml
```
 
-2. 
Collect the new SharedFlow `uiEvents` in `MainActivity` by appending the code below to the end of `onCreate` method, **below** `setupCamera()` method call. +2. Collect the new SharedFlow `uiEvents` in `MainActivity` by appending the code below to the end of `onCreate` method, below `setupCamera()` method call. ```kotlin lifecycleScope.launch { @@ -363,7 +363,7 @@ class GestureOverlayView(context: Context?, attrs: AttributeSet?) : } ``` -4. Build and run the app again. Now you should be seeing face and gesture overlays on top of the camera preview as shown below. Good job! +4. Build and run the app again. Now you should see face and gesture overlays on top of the camera preview as shown below. Good job! ![overlay views](images/6/overlay%20views.png) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md index ca61e998b..e2dd74cf5 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/7-flow-data-to-view-2.md @@ -25,7 +25,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s val gestureOk: StateFlow = _gestureOk ``` -2. Append the following constant values to `MainViewModel`'s companion object. In this demo app, we are only focusing on smiling faces and thumb-up gestures. +2. Append the following constant values to `MainViewModel`'s companion object. In this demo app, you will focus on smiling faces and thumb-up gestures. ```kotlin private const val FACE_CATEGORY_MOUTH_SMILE = "mouthSmile" @@ -75,7 +75,7 @@ Therefore, `StateFlow` is a specialized type of `SharedFlow` that represents a s Gesture ``` -2. 
In the same directory, create a new resource file named `dimens.xml` if not exist, which is used to define layout related dimension values:
+2. In the same directory, create a new resource file named `dimens.xml` if it does not exist. This file is used to define layout related dimension values:
 
```xml
```
 
-3. Navigate to `activity_main.xml` layout file and add the following code to the root `ConstraintLayout`, **below** the two overlay views which you just added in the previous chapter.
+3. Navigate to `activity_main.xml` layout file and add the following code to the root `ConstraintLayout`. Add this code after the two overlay views which you just added in the previous section.
 
```xml
```
 
-4. Finally, navigate to `MainActivity.kt` and append the following code inside `repeatOnLifecycle(Lifecycle.State.RESUMED)` block, **below** the `launch` block you just added in the previous chapter. This makes sure each of the **three** parallel `launch` runs in its own Coroutine concurrently without blocking each other.
+4. Finally, navigate to `MainActivity.kt` and append the following code inside `repeatOnLifecycle(Lifecycle.State.RESUMED)` block, after the `launch` block you just added in the previous section. This makes sure each of the three parallel `launch` blocks runs in its own coroutine concurrently without blocking the others.
 
```kotlin
launch {
...
}
```
 
-5. Build and run the app again. Now you should be seeing two switches on the bottom of the screen as shown below, which turns on and off while you smile and show thumb-up gestures. Good job!
+5. Build and run the app again. Now you should see two switches on the bottom of the screen as shown below, which turn on and off while you smile and show thumb-up gestures. Good job! 
![indicator UI](images/7/indicator%20ui.png)
@@ -135,7 +135,7 @@
 
This app uses `SharedFlow` for dispatching overlay views' UI events without mandating a specific stateful model, which avoids redundant computation. Meanwhile, it uses `StateFlow` for dispatching condition switches' UI states, which prevents duplicated emission and consequent UI updates.
 
-Here's a breakdown of the differences between `SharedFlow` and `StateFlow`:
+Here's an overview of the differences between `SharedFlow` and `StateFlow`:
 
| | SharedFlow | StateFlow |
| --- | --- | --- |
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md
index 4798ccadd..9bbfff018 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/8-mediate-flows.md
@@ -6,7 +6,7 @@ weight: 8
layout: learningpathall
---
 
-Now you have two independent Flows indicating the conditions of face landmark detection and gesture recognition. The simplest multimodality strategy is to combine multiple source Flows into a single output Flow, which emits consolidated values as the [single source of truth](https://en.wikipedia.org/wiki/Single_source_of_truth) for its observers (collectors) to carry out corresponding actions.
+Now you have two independent Flows indicating the conditions of face landmark detection and gesture recognition. The simplest multimodality strategy is to combine multiple source Flows into a single output Flow, which emits consolidated values as the single source of truth for its observers (collectors) to carry out corresponding actions. 
## Combine two Flows into a single Flow @@ -33,9 +33,9 @@ Now you have two independent Flows indicating the conditions of face landmark de ``` {{% notice Note %}} -Kotlin Flow's [`combine`](https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/combine.html) transformation is equivalent to ReactiveX's [`combineLatest`](https://reactivex.io/documentation/operators/combinelatest.html). It combines emissions from multiple observables, so that each time **any** observable emits, the combinator function is called with the latest values from all sources. +Kotlin Flow's [`combine`](https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/combine.html) transformation is equivalent to ReactiveX's [`combineLatest`](https://reactivex.io/documentation/operators/combinelatest.html). It combines emissions from multiple observables, so that each time any observable emits, the combinator function is called with the latest values from all sources. -You might need to add `@OptIn(FlowPreview::class)` annotation since `sample` is still in preview. For more information on similar transformations, please refer to [this blog](https://kt.academy/article/cc-flow-combine). +You might need to add `@OptIn(FlowPreview::class)` annotation since `sample` is still in preview. {{% /notice %}} @@ -49,30 +49,7 @@ You might need to add `@OptIn(FlowPreview::class)` annotation since `sample` is .shareIn(viewModelScope, SharingStarted.WhileSubscribed()) ``` -If this code looks confusing to you, please see the explanations below for Kotlin beginners. - -{{% notice Info %}} - -###### Keyword "it" - -The operation `filter { it }` is simplified from `filter { bothOk -> bothOk == true }`. - -Since Kotlin allows for implictly calling the single parameter in a lambda `it`, `{ bothOk -> bothOk == true }` is equivalent to `{ it == true }`, and again `{ it }`. 
- -See [this doc](https://kotlinlang.org/docs/lambdas.html#it-implicit-name-of-a-single-parameter) for more details. - -{{% /notice %}} - -{{% notice Info %}} - -###### "Unit" type -This `SharedFlow` has a generic type `Unit`, which doesn't contain any value. You may think of it as a "pulse" signal. - -The operation `map { }` simply maps the upstream `Boolean` value emitted from `_bothOk` to `Unit` regardless their values are true or false. It's simplified from `map { bothOk -> Unit }`, which becomes `map { Unit } ` where the keyword `it` is not used at all. Since an empty block already returns `Unit` implicitly, we don't need to explicitly return it. - -{{% /notice %}} - -If this still looks confusing, you may also opt to use `SharedFlow` and remove the `map { }` operation. Just note that when you collect this Flow, it doesn't matter whether the emitted `Boolean` values are true or false. In fact, they are always `true` due to the `filter` operation. +You may also opt to use `SharedFlow` and remove the `map { }` operation. Just note that when you collect this Flow, it doesn't matter whether the emitted `Boolean` values are true or false. In fact, they are always `true` due to the `filter` operation. ## Configure ImageCapture use case @@ -92,7 +69,7 @@ If this still looks confusing, you may also opt to use `SharedFlow` and .build() ``` -3. Again, don't forget to append this use case to `bindToLifecycle`. +3. Append this use case to `bindToLifecycle`. ```kotlin camera = cameraProvider.bindToLifecycle( @@ -102,7 +79,7 @@ If this still looks confusing, you may also opt to use `SharedFlow` and ## Execute photo capture with ImageCapture -1. Append the following constant values to `MainActivity`'s companion object. They define the file name format and the [MIME type](https://en.wikipedia.org/wiki/Media_type). +1. Append the following constant values to `MainActivity`'s companion object. 
They define the file name format and the media type: ```kotlin // Image capture @@ -165,7 +142,7 @@ If this still looks confusing, you may also opt to use `SharedFlow` and ## Add a flash effect upon capturing photo -1. Navigate to `activity_main.xml` layout file and insert the following `View` element **between** the two overlay views and two `SwitchCompat` views. This is essentially just a white blank view covering the whole surface. +1. Navigate to `activity_main.xml` layout file and insert the following `View` element between the two overlay views and two `SwitchCompat` views. This is essentially just a white blank view covering the whole surface. ``` ` and } ``` -3. Invoke `showFlashEffect()` method in `executeCapturePhoto()` method, **before** invoking `imageCapture.takePicture()` +3. Invoke `showFlashEffect()` method in `executeCapturePhoto()` method, before invoking `imageCapture.takePicture()` 4. Build and run the app. Try keeping up a smiling face while presenting thumb-up gestures. When you see both switches turn on and stay stable for approximately half a second, the screen should flash white and then a photo should be captured and shows up in your album, which may take a few seconds depending on your Android device's hardware. Good job! 
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
index 99608ce13..b4b58ed8b 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
@@ -1,16 +1,16 @@
---
-title: Avoid duplicated photo capture requests
+title: Avoid duplicate photo capture requests
weight: 9
 
### FIXED, DO NOT MODIFY
layout: learningpathall
---
 
-So far, we have implemented the core logic for mediating MediaPipe's face and gesture task results and executing photo captures. However, the view controller does not communicate its execution results back to the view model. This introduces risks such as photo capture failures, frequent or duplicate requests, and other potential issues.
+So far, you have implemented the core logic for mediating MediaPipe's face and gesture task results and executing photo captures. However, the view controller does not communicate its execution results back to the view model. This introduces risks such as photo capture failures, frequent or duplicate requests, and other potential issues.
 
## Introduce camera readiness state
 
-It is a best practice to complete the data flow cycle by providing callbacks for the view controller's states. This ensures that the view model does not emit values in undesired states, such as when the camera is busy or unavailable.
+It is best practice to complete the data flow cycle by providing callbacks for the view controller's states. This ensures that the view model does not emit values in undesired states, such as when the camera is busy or unavailable.
 
1. 
Navigate to `MainViewModel` and add a `MutableStateFlow` named `_isCameraReady` as a private member variable. This keeps track of whether the camera is busy or unavailable.
@@ -58,7 +58,7 @@ The duration of image capture can vary across Android devices due to hardware di

 To address this, implementing a simple cooldown mechanism after each photo capture can enhance the user experience while conserving computing resources.

-1. Add the following constant value to `MainViewModel`'s companion object. This defines a `3` sec cooldown before marking the camera available again.
+1. Add the following constant value to `MainViewModel`'s companion object. This defines a 3-second cooldown before marking the camera as available again.

 ```kotlin
     private const val IMAGE_CAPTURE_DEFAULT_COUNTDOWN = 3000L
@@ -91,6 +91,6 @@ However, silently failing without notifying the user is not a good practice for

 ## Completed sample code on GitHub

-If you run into any difficulties completing this learning path, feel free to check out the [completed sample code](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality) and import it into Android Studio.
+If you run into any difficulties completing this learning path, you can check out the [complete sample code](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality) and import it into Android Studio.

-If you discover a bug, encounter an issue, or have suggestions for improvement, we’d love to hear from you! Please feel free to [open an issue](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality/issues/new) with detailed information.
+If you discover a bug, encounter an issue, or have suggestions for improvement, please feel free to [open an issue](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality/issues/new) with detailed information.
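The cooldown described above can be isolated into a tiny, framework-free helper to make the idea concrete. This is a sketch only — the class name is illustrative; in the sample app this logic lives in `MainViewModel`, which flips `_isCameraReady` to false and restores it after `IMAGE_CAPTURE_DEFAULT_COUNTDOWN` (3000 ms) elapses:

```kotlin
// Gates an action so it can fire at most once per cooldown window.
class CooldownGate(private val cooldownMillis: Long) {
    private var lastFiredAt: Long? = null

    // Returns true (and starts a new cooldown) only if the window has elapsed.
    fun tryFire(nowMillis: Long): Boolean {
        val last = lastFiredAt
        if (last != null && nowMillis - last < cooldownMillis) return false
        lastFiredAt = nowMillis
        return true
    }
}
```

In the real view model, the equivalent effect is achieved with a coroutine that sets `_isCameraReady.value = false` after a capture and delays for the countdown before setting it back to true.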
From 01327d0b1fce85e3baf361b8d96359c226e5e908 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Mon, 23 Dec 2024 20:28:03 +0000 Subject: [PATCH 3/7] Android Selfie LP review --- .../_index.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md index 40dfdf129..ba2a32c66 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md @@ -1,3 +1,4 @@ +more :q! --- title: Build a Hands-Free Selfie app with Modern Android Development and MediaPipe Multimodal AI draft: true From 6819c157c06b2880534e5c0b499a0322f914c49f Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Mon, 23 Dec 2024 20:35:09 +0000 Subject: [PATCH 4/7] Android selfie App review --- .../3-camera-permission.md | 118 ++++++++++++++++++ 1 file changed, 118 insertions(+) create mode 100644 content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md new file mode 100644 index 000000000..80262dfae --- /dev/null +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/3-camera-permission.md @@ -0,0 +1,118 @@ +--- +title: Handle camera permissions +weight: 3 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Run the app on your device + +1. Connect your Android device to your computer via a USB data cable. 
If this is your first time running and debugging Android apps, follow [this guide](https://developer.android.com/studio/run/device#setting-up) and double-check this checklist:
+
+   1. You have enabled USB debugging on your Android device following [this doc](https://developer.android.com/studio/debug/dev-options#Enable-debugging).
+
+   2. You have confirmed by tapping "OK" on your Android device when an "Allow USB debugging" dialog pops up, and checked "Always allow from this computer".
+
+   ![Allow USB debugging dialog](https://ftc-docs.firstinspires.org/en/latest/_images/AllowUSBDebugging.jpg)
+
+
+2. Make sure your device model name and SDK version show up correctly on the top-right toolbar. Click the "Run" button to build and run the app.
+
+3. After a while, you should see a success notification in Android Studio and the app showing up on your Android device.
+
+4. However, the app shows only a black screen while printing error messages to your [Logcat](https://developer.android.com/tools/logcat) that look like this:
+
+```
+2024-11-20 11:15:00.398 18782-18818 Camera2CameraImpl com.example.holisticselfiedemo E Camera reopening attempted for 10000ms without success.
+2024-11-20 11:30:13.560 667-707 BufferQueueProducer pid-667 E [SurfaceView - com.example.holisticselfiedemo/com.example.holisticselfiedemo.MainActivity#0](id:29b00000283,api:4,p:2657,c:667) queueBuffer: BufferQueue has been abandoned
+2024-11-20 11:36:13.100 20487-20499 isticselfiedem com.example.holisticselfiedemo E Failed to read message from agent control socket! Retrying: Bad file descriptor
+2024-11-20 11:43:03.408 2709-3807 PackageManager pid-2709 E Permission android.permission.CAMERA isn't requested by package com.example.holisticselfiedemo
+```
+
+5. Do not worry. This is expected behavior because you haven't correctly configured this app's [permissions](https://developer.android.com/guide/topics/permissions/overview) yet. 
Android OS restricts this app's access to camera features for privacy reasons.
+
+## Request camera permission at runtime
+
+1. Navigate to `AndroidManifest.xml` in your `app` subproject's `src/main` path. Declare the camera hardware and permission by inserting the following lines into the `<manifest>` element. Make sure they are declared outside of, and above, the `<application>` element.
+
+```xml
+<uses-feature android:name="android.hardware.camera" />
+<uses-permission android:name="android.permission.CAMERA" />
+```
+
+2. Navigate to `strings.xml` in your `app` subproject's `src/main/res/values` path. Insert the following text resources, which will be used later.
+
+```xml
+<string name="permission_request_camera_rationale">Camera permission is required to recognize face and hands</string>
+<string name="permission_request_camera_message">To grant Camera permission to this app, please go to system settings</string>
+```
+
+3. Navigate to `MainActivity.kt` and add the following permission-related values to the companion object:
+
+```kotlin
+    // Permissions
+    private val PERMISSIONS_REQUIRED = arrayOf(Manifest.permission.CAMERA)
+    private const val REQUEST_CODE_CAMERA_PERMISSION = 233
+```
+
+4. Add a new method named `hasPermissions()` to check at runtime whether camera permission has been granted:
+
+```kotlin
+    private fun hasPermissions(context: Context) = PERMISSIONS_REQUIRED.all {
+        ContextCompat.checkSelfPermission(context, it) == PackageManager.PERMISSION_GRANTED
+    }
+```
+
+5. Add a condition check in `onCreate()` wrapping the `setupCamera()` method, to request camera permission at runtime.
+
+```kotlin
+    if (!hasPermissions(baseContext)) {
+        requestPermissions(
+            arrayOf(Manifest.permission.CAMERA),
+            REQUEST_CODE_CAMERA_PERMISSION
+        )
+    } else {
+        setupCamera()
+    }
+```
+
+6. 
Override the `onRequestPermissionsResult()` method to handle permission request results:
+
+```kotlin
+    override fun onRequestPermissionsResult(
+        requestCode: Int,
+        permissions: Array<String>,
+        grantResults: IntArray
+    ) {
+        when (requestCode) {
+            REQUEST_CODE_CAMERA_PERMISSION -> {
+                if (PackageManager.PERMISSION_GRANTED == grantResults.getOrNull(0)) {
+                    setupCamera()
+                } else {
+                    val messageResId =
+                        if (shouldShowRequestPermissionRationale(Manifest.permission.CAMERA))
+                            R.string.permission_request_camera_rationale
+                        else
+                            R.string.permission_request_camera_message
+                    Toast.makeText(baseContext, getString(messageResId), Toast.LENGTH_LONG).show()
+                }
+            }
+            else -> super.onRequestPermissionsResult(requestCode, permissions, grantResults)
+        }
+    }
+```
+
+## Verify camera permission
+
+1. Rebuild and run the app. You should now see a dialog pop up requesting camera permission.
+
+2. Tap `Allow` or `While using the app` (depending on your Android OS version). You should then see your own face in the camera preview. Good job!
+
+{{% notice Tip %}}
+Sometimes you might need to restart the app to observe the permission change take effect.
+{{% /notice %}}
+
+In the next section, you will learn how to integrate MediaPipe vision solutions. 
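The branching inside the permission-result handler can be factored into a pure function, which makes the three possible outcomes explicit and unit-testable without the Android framework. This is a sketch only — the enum and function names are illustrative and not part of the sample app:

```kotlin
enum class PermissionOutcome { GRANTED, SHOW_RATIONALE, OPEN_SETTINGS }

// Mirrors PackageManager.PERMISSION_GRANTED (0) so this sketch stays framework-free.
const val PERMISSION_GRANTED = 0

fun resolveCameraPermission(grantResults: IntArray, shouldShowRationale: Boolean): PermissionOutcome =
    when {
        grantResults.firstOrNull() == PERMISSION_GRANTED -> PermissionOutcome.GRANTED
        shouldShowRationale -> PermissionOutcome.SHOW_RATIONALE
        // Rationale unavailable after denial usually means "Don't ask again";
        // the only remaining path is directing the user to system settings.
        else -> PermissionOutcome.OPEN_SETTINGS
    }
```

The two non-granted outcomes map to the two string resources added earlier: the rationale message while the system still allows re-prompting, and the "go to system settings" message otherwise.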
From 453cdda978523916f355a905eade812501bb6e08 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Mon, 23 Dec 2024 20:39:04 +0000 Subject: [PATCH 5/7] Android selfie app review --- .../_index.md | 25 +++++++++++-------- 1 file changed, 14 insertions(+), 11 deletions(-) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md index ba2a32c66..56a2cf596 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md @@ -1,15 +1,16 @@ -more :q! --- -title: Build a Hands-Free Selfie app with Modern Android Development and MediaPipe Multimodal AI +title: Build a Hands-Free Selfie Android application with MediaPipe + draft: true cascade: draft: true -minutes_to_complete: 120 -who_is_this_for: This is an introductory topic for mobile application developers interested in learning how to build an Android selfie app with MediaPipe, Kotlin flows and CameraX, following the modern Android architecture design. +minutes_to_complete: 120 +who_is_this_for: This is an advanced topic for mobile application developers interested in learning how to build an Androi +d selfie app with MediaPipe, Kotlin flows and CameraX. -learning_objectives: +learning_objectives: - Architect a modern Android app with a focus on the UI layer. - Leverage lifecycle-aware components within the MVVM architecture. - Combine MediaPipe's face landmark detection and gesture recognition for a multimodel selfie solution. @@ -17,16 +18,16 @@ learning_objectives: - Use Kotlin Flow APIs to handle multiple asynchronous data streams. prerequisites: - - A development machine compatible with [**Android Studio**](https://developer.android.com/studio). 
- - A recent **physical** Android device (with **front camera**) and a USB **data** cable. + - A development machine with [**Android Studio**](https://developer.android.com/studio) installed. + - A recent Arm powered Android phone (with **front camera**) and a USB data cable. - Familiarity with Android development concepts. - - Basic knowledge of modern Android architecture. - - Basic knowledge of Kotlin programming language, such as [coroutines](https://kotlinlang.org/docs/coroutines-overview.html) and [flows](https://kotlinlang.org/docs/flow.html). + - Basic knowledge of Kotlin programming language, such as [coroutines](https://kotlinlang.org/docs/coroutines-overview +.html) and [flows](https://kotlinlang.org/docs/flow.html). author_primary: Han Yin ### Tags -skilllevels: Beginner +skilllevels: Advanced subjects: ML armips: - Cortex-A @@ -45,5 +46,7 @@ operatingsystems: # ================================================================================ weight: 1 # _index.md always has weight of 1 to order correctly layout: "learningpathall" # All files under learning paths have this same wrapper -learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of lear +ning path content. 
--- + From 3686393e165026a7cbdb71467ec66266e6cf86e0 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Mon, 23 Dec 2024 20:51:12 +0000 Subject: [PATCH 6/7] Android selfie LP review --- .../2-app-scaffolding.md | 10 +++++----- .../9-avoid-redundant-requests.md | 2 +- .../_index.md | 13 +++---------- 3 files changed, 9 insertions(+), 16 deletions(-) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md index 02dd95ffc..999a8cbdb 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/2-app-scaffolding.md @@ -1,5 +1,5 @@ --- -title: Scaffold a new Android project +title: Create a new Android project weight: 2 ### FIXED, DO NOT MODIFY @@ -26,12 +26,12 @@ Before you proceed to coding, here are some tips that might come handy: ## Create a new Android project -1. Navigate to **File > New > New Project...**. +1. Navigate to File > New > New Project.... -2. Select **Empty Views Activity** in **Phone and Tablet** galary as shown below, then click **Next**. +2. Select Empty Views Activity in the Phone and Tablet gallery as shown below, then click Next. ![Empty Views Activity](images/2/empty%20project.png) -3. Proceed with a cool project name and default configurations as shown below. Make sure that **Language** is set to **Kotlin**, and that **Build configuration language** is set to **Kotlin DSL**. +3. Enter a project name and use the default configurations as shown below. Make sure that Language is set to Kotlin, and that Build configuration language is set to Kotlin DSL. 
![Project configuration](images/2/project%20config.png)

 ### Introduce CameraX dependencies
@@ -194,4 +194,4 @@ private fun bindCameraUseCases() {
 }
 ```

-In the next chapter, we will build and run the app to make sure the camera works well.
+In the next section, you will build and run the app to make sure the camera works well.
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
index b4b58ed8b..13d998eb4 100644
--- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
+++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/9-avoid-redundant-requests.md
@@ -89,7 +89,7 @@ However, silently failing without notifying the user is not a good practice for

 {{% /notice %}}

-## Completed sample code on GitHub
+## Complete sample code on GitHub

 If you run into any difficulties completing this learning path, you can check out the [complete sample code](https://github.com/hanyin-arm/sample-android-selfie-app-using-mediapipe-multimodality) and import it into Android Studio. 
diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md index 56a2cf596..3ae0f79bf 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md @@ -1,14 +1,9 @@ --- title: Build a Hands-Free Selfie Android application with MediaPipe -draft: true -cascade: - draft: true - minutes_to_complete: 120 -who_is_this_for: This is an advanced topic for mobile application developers interested in learning how to build an Androi -d selfie app with MediaPipe, Kotlin flows and CameraX. +who_is_this_for: This is an advanced topic for mobile application developers interested in learning how to build an Android selfie application with MediaPipe, Kotlin flows and CameraX. learning_objectives: - Architect a modern Android app with a focus on the UI layer. @@ -21,8 +16,7 @@ prerequisites: - A development machine with [**Android Studio**](https://developer.android.com/studio) installed. - A recent Arm powered Android phone (with **front camera**) and a USB data cable. - Familiarity with Android development concepts. - - Basic knowledge of Kotlin programming language, such as [coroutines](https://kotlinlang.org/docs/coroutines-overview -.html) and [flows](https://kotlinlang.org/docs/flow.html). + - Basic knowledge of Kotlin programming language. author_primary: Han Yin @@ -46,7 +40,6 @@ operatingsystems: # ================================================================================ weight: 1 # _index.md always has weight of 1 to order correctly layout: "learningpathall" # All files under learning paths have this same wrapper -learning_path_main_page: "yes" # This should be surfaced when looking for related content. 
Only set for _index.md of lear -ning path content. +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. --- From 72039b85d6c9fa855c301bacfc6654d6b0722af4 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Mon, 23 Dec 2024 20:51:50 +0000 Subject: [PATCH 7/7] Android selfie LP review --- .../_index.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md index 3ae0f79bf..bbaad71db 100644 --- a/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md +++ b/content/learning-paths/smartphones-and-mobile/build-android-selfie-app-using-mediapipe-multimodality/_index.md @@ -1,6 +1,10 @@ --- title: Build a Hands-Free Selfie Android application with MediaPipe +draft: true +cascade: + draft: true + minutes_to_complete: 120 who_is_this_for: This is an advanced topic for mobile application developers interested in learning how to build an Android selfie application with MediaPipe, Kotlin flows and CameraX.