diff --git a/content/install-guides/browsers/_index.md b/content/install-guides/browsers/_index.md
index 4fa29170f..f8ab003a1 100644
--- a/content/install-guides/browsers/_index.md
+++ b/content/install-guides/browsers/_index.md
@@ -43,7 +43,7 @@ Here is a quick summary to get you started:
| Brave | native | yes |
| Chrome | native | no |
| Edge | native | no |
-| Vivaldi | emulation | yes |
+| Vivaldi | native (snapshot) | yes |
Windows on Arm runs native ARM64 applications, but can also emulate 32-bit x86 and 64-bit x64 applications. Emulation is slower than native and shortens battery life, but may provide functionality you need.
diff --git a/content/install-guides/browsers/vivaldi.md b/content/install-guides/browsers/vivaldi.md
index 221c7d696..9cf007f0b 100644
--- a/content/install-guides/browsers/vivaldi.md
+++ b/content/install-guides/browsers/vivaldi.md
@@ -25,10 +25,12 @@ layout: installtoolsall # DO NOT MODIFY. Always true for tool install ar
## Installing Vivaldi
-The Vivaldi browser runs on Windows using emulation, but Vivaldi is available for Arm Linux.
+Vivaldi is available for Arm Linux and Windows on Arm.
Visit [Download Vivaldi](https://vivaldi.com/download/) to obtain packages for various operating systems.
+
+
### Linux
Vivaldi is available for Arm Linux.
@@ -64,7 +66,9 @@ sudo dnf install vivaldi-stable -y
### Windows
-Vivaldi runs on Windows on Arm using emulation.
+The stable release of Vivaldi runs on Windows on Arm using emulation.
+
+A snapshot release is also available as a native ARM64 application.
Emulation is slower than native and shortens battery life, but may provide required functionality.
@@ -75,6 +79,6 @@ Emulation is slower than native and shortens battery life, but may provide requi
3. Find and start Vivaldi from the applications menu
-
+Install the native ARM64 [snapshot release](https://downloads.vivaldi.com/snapshot/Vivaldi.6.9.3425.3.arm64.exe) using the same three steps above.
diff --git a/content/learning-paths/cross-platform/gitlab/gitlab-runner.md b/content/learning-paths/cross-platform/gitlab/1-gitlab-runner.md
similarity index 71%
rename from content/learning-paths/cross-platform/gitlab/gitlab-runner.md
rename to content/learning-paths/cross-platform/gitlab/1-gitlab-runner.md
index 4dcdad85e..97d56ca78 100644
--- a/content/learning-paths/cross-platform/gitlab/gitlab-runner.md
+++ b/content/learning-paths/cross-platform/gitlab/1-gitlab-runner.md
@@ -1,5 +1,5 @@
---
-title: "Create Google Axion based GitLab self-hosted runner"
+title: "Create a Google Axion-based GitLab self-hosted runner"
weight: 2
@@ -7,26 +7,26 @@ layout: "learningpathall"
---
## What is a GitLab runner?
-A GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline. It acts as an agent that executes the jobs defined in your GitLab CI/CD configuration. Some key points to note about GitLab Runner:
+A GitLab Runner works with GitLab CI/CD to run jobs in a pipeline. It acts as an agent and executes the jobs you define in your GitLab CI/CD configuration. Some key points to note about GitLab Runner:
-1. GitLab offers multiple types of runners - You can use GitLab-hosted runners, self-managed runners, or a combination of both. GitLab-hosted runners are managed by GitLab, while self-managed runners are installed and managed on your own infrastructure.
+1. GitLab offers multiple types of runners - You can use GitLab-hosted runners, self-managed runners, or a combination of both. 
GitLab manages GitLab-hosted runners, while you install and manage self-managed runners on your own infrastructure.
2. Each runner is configured as an Executor - When you register a runner, you choose an executor, which determines the environment in which the job runs. Executors can be Docker, Shell, Kubernetes, etc.
3. Multi-architecture support: GitLab runners support multiple architectures including - `x86/amd64` and `arm64`
## What is Google Axion?
-Google Axion is the latest generation of Arm Neoverse based CPU offered by Google Cloud. The VM instances are part of the `C4A` family of compute instances. To learn more about Google Axion refer to this [blog](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu).
+Axion is Google’s first Arm-based server processor, built using the Armv9 Neoverse V2 CPU. The VM instances are part of the `C4A` family of compute instances. To learn more about Google Axion refer to this [blog](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu).
Note: These `C4A` VM instances are in public preview and needs a signup to be enabled in your Google Cloud account/project.
## Install GitLab runner on a Google Axion VM
-Create a repository in your GitLab account by clicking on "+" sign on top-left corner. Click on `New project/repository` and select a blank project, provide a name and initiate your project/repository.
+Create a repository in your GitLab account by clicking the "+" sign in the top-left corner. Click on `New project/repository` and select a blank project, provide a name, and create your project/repository.
![repository #center](_images/repository.png)
-Once the repository gets created, in the left-hand pane, navigate to `Settings->CI/CD`. Expand the `Runners` section and under `Project Runners`, select `New Project Runner`.
+After you create the repository, navigate to `Settings->CI/CD` in the left-hand pane. Expand the `Runners` section and under `Project Runners`, select `New Project Runner`.
![arm64-runner #center](_images/create-gitlab-runner.png)
@@ -57,7 +57,7 @@ Install the runner and run it as a service
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
```
-In the GitLab console, on the `Register Runner` page, copy the command from `Step 1` and run it on the `C4A` VM. It should look like below
+In the GitLab console, on the `Register Runner` page, copy the command from `Step 1` and run it on the `C4A` VM. The command should look like this:
```console
sudo gitlab-runner register --url https://gitlab.com --token
```
@@ -69,7 +69,7 @@ After answering the prompts, the runner should be registered and you should see
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
```
-You should be able to see the newly registered runner in the `Runners` section of GitLab console like below
+You should see the newly registered runner in the `Runners` section of the GitLab console, as shown below. 
![registered-runner #center](_images/registered-runner.png)
diff --git a/content/learning-paths/cross-platform/gitlab/gitlab-cicd.md b/content/learning-paths/cross-platform/gitlab/2-gitlab-cicd.md
similarity index 93%
rename from content/learning-paths/cross-platform/gitlab/gitlab-cicd.md
rename to content/learning-paths/cross-platform/gitlab/2-gitlab-cicd.md
index f340d00ea..7580ff9bd 100644
--- a/content/learning-paths/cross-platform/gitlab/gitlab-cicd.md
+++ b/content/learning-paths/cross-platform/gitlab/2-gitlab-cicd.md
@@ -14,7 +14,7 @@ A multi-architecture application is designed to run on multiple architectures, t
In this learning path you will use the `docker manifest` way to build a multi-architecture image.
## Create a Docker repository in Google Artifact Registry
-You can create a docker repository in the Google Artifact Registry from the Google Cloud UI or with the following command:
+You can create a Docker repository in the Google Artifact Registry from the Google Cloud UI or with the following command:
```console
gcloud artifacts repositories create quickstart-docker-repo --repository-format=docker \
@@ -31,7 +31,7 @@ gcloud auth configure-docker us-central1-docker.pkg.dev
## Create project variables
-Create the following variables in your GitLab repository by navigating to `CI/CD->Variables`. Expand the section and click on `Add Variable`
+Create the following variables in your GitLab repository by navigating to `CI/CD->Variables`. Expand the section and click on `Add Variable`.
- `GCP_PROJECT` - Your Google Cloud project ID that also hosts the Google Artifact registry
- `GKE_ZONE` - Zone for your GKE cluster - e.g. - us-central1-c
@@ -103,7 +103,7 @@ RUN apk add --update --no-cache file
CMD [ "/hello" ]
```
-The `deployment.yaml` file references the multi-architecture docker image from the registry. Create the file using the contents below
+The `deployment.yaml` file references the multi-architecture Docker image from the registry. Create the file using the contents below.
```yml
apiVersion: apps/v1
kind: Deployment
@@ -142,7 +142,7 @@ spec:
requests:
cpu: 300m
```
-Create a Kubernetes Service of type `LoadBalancer` with the following `hello-service.yaml` contents
+Create a Kubernetes Service of type `LoadBalancer` with the following `hello-service.yaml` contents:
```yml
apiVersion: v1
@@ -164,7 +164,7 @@ spec:
## Automate the GitLab CI/CD build process
-In this section, the individual docker images for each architecture - `amd64` and `arm64`- are built natively on their respective runner. For e.g. the `arm64` build is run on Google Axion runner and vice versa. Each runner is differentiated with `tags` in the pipeline. To build a CI/CD pipeline in GitLab, you'll need to create a `.gitlab-ci.yml` file in your repository. Use the following contents to populate the file
+In this section, the individual Docker images for each architecture - `amd64` and `arm64` - are built natively on their respective runners. For example, the `arm64` build runs on the Google Axion runner and the `amd64` build runs on an AMD64-based runner. Each runner is differentiated with `tags` in the pipeline. To build a CI/CD pipeline in GitLab, you'll need to create a `.gitlab-ci.yml` file in your repository. Use the following contents to populate the file:
```yml
workflow:
@@ -220,7 +220,7 @@ deploy:
- kubectl apply -f deployment.yaml
```
-This file has three `stages`. A `stage` is a set of commands that are executed in a sequence to achieve the desired result. In the `build` stage of the file there are two jobs that run in parallel. 
The `arm64-build` job gets executed on the Google Axion based C4A runner and the `amd64-build` job gets executed on an AMD64 based E2 runner. Both of these jobs push a docker image to the registry.
+This file has three `stages`. A `stage` is a set of commands that are executed in a sequence to achieve the desired result. In the `build` stage of the file there are two jobs that run in parallel. The `arm64-build` job gets executed on the Google Axion-based C4A runner and the `amd64-build` job gets executed on an AMD64-based E2 runner. Both of these jobs push a Docker image to the registry.
In the `manifest` stage, using `docker manifest` command both of these images are joined to create a single multi-architecture image and pushed to the registry.
diff --git a/content/learning-paths/cross-platform/gitlab/_index.md b/content/learning-paths/cross-platform/gitlab/_index.md
index f390177e0..68a3cd7c6 100644
--- a/content/learning-paths/cross-platform/gitlab/_index.md
+++ b/content/learning-paths/cross-platform/gitlab/_index.md
@@ -1,5 +1,5 @@
---
-title: How to build a CI/CD pipeline with GitLab on Google Axion
+title: Build a CI/CD pipeline with GitLab on Google Axion
draft: true
minutes_to_complete: 30
diff --git a/content/learning-paths/cross-platform/gitlab/_next-steps.md b/content/learning-paths/cross-platform/gitlab/_next-steps.md
index e166b15bd..9b0b35a97 100644
--- a/content/learning-paths/cross-platform/gitlab/_next-steps.md
+++ b/content/learning-paths/cross-platform/gitlab/_next-steps.md
@@ -1,5 +1,5 @@
---
-next_step_guidance: You can continue learning about building applications with Arm. The learning path covering deployment of a multi-architecture application with GKE should be the next one in your list.
+next_step_guidance: Continue learning about building applications with Arm by reviewing the learning path covering deployment of a multi-architecture application with GKE.
recommended_path: "/learning-paths/servers-and-cloud-computing/gke-multi-arch"
diff --git a/content/learning-paths/cross-platform/gitlab/_review.md b/content/learning-paths/cross-platform/gitlab/_review.md
index 0c1ff0650..0008385ef 100644
--- a/content/learning-paths/cross-platform/gitlab/_review.md
+++ b/content/learning-paths/cross-platform/gitlab/_review.md
@@ -20,6 +20,18 @@ review:
explanation: >
GitLab allows executing jobs parallely on two different self-hosted runners with tags
+ - questions:
+ question: >
+ What is the primary role of a GitLab Runner in GitLab CI/CD?
+ answers:
+ - To manage the GitLab repository.
+ - To execute jobs defined in the GitLab CI/CD configuration.
+ - To provide multi-architecture support for different processors.
+ - To create virtual machines in Google Cloud.
+ correct_answer: 2
+ explanation: >
+ A GitLab Runner is the agent that executes the jobs defined in your GitLab CI/CD configuration.
+
# ================================================================================
# FIXED, DO NOT MODIFY
diff --git a/content/learning-paths/cross-platform/simd-on-rust/simd-on-rust-part2.md b/content/learning-paths/cross-platform/simd-on-rust/simd-on-rust-part2.md
index 304c31fb8..35662773c 100644
--- a/content/learning-paths/cross-platform/simd-on-rust/simd-on-rust-part2.md
+++ b/content/learning-paths/cross-platform/simd-on-rust/simd-on-rust-part2.md
@@ -128,7 +128,7 @@ Shown below is the disassembly output for the `sad_neon` function:
You will notice the use of `uabd` and `udot` assembly instructions that correspond to the `vabdq_u8`/`vdotq_u32` intrinsics. 
-Now create an equivalent Rust program using `std::arch` `dotprod` intrinsics. Save the contents shown below in a file named `dotprod2.rs`:
+Now create an equivalent Rust program using `std::arch` `neon` intrinsics. Save the contents shown below in a file named `dotprod2.rs`:
```Rust
#![feature(stdarch_neon_dotprod)]
@@ -166,7 +166,7 @@ fn sad_vec(a: &[u8], b: &[u8], w: usize, h: usize) -> u32 {
#[cfg(target_arch = "aarch64")]
{
use std::arch::is_aarch64_feature_detected;
- if is_aarch64_feature_detected!("dotprod") {
+ if is_aarch64_feature_detected!("neon") {
return unsafe { sad_vec_asimd(a, b, w, h) };
}
}
@@ -175,7 +175,7 @@ }
#[cfg(target_arch = "aarch64")]
-#[target_feature(enable = "dotprod")]
+#[target_feature(enable = "neon")]
unsafe fn sad_vec_asimd(a: &[u8], b: &[u8], w: usize, h: usize) -> u32 {
use std::arch::aarch64::*;
@@ -291,13 +291,13 @@ As you have seen, Rust has a very particular way to enable target features. In t
unsafe fn sad_vec_asimd(a: &[u8], b: &[u8], w: usize, h: usize) -> u32 {
```
-Remember that `neon` support is implied with `dotprod` so there is no need to add it as well. However, this may not be the case for all extensions so you may have to add both, like this:
+Add support for both `neon` and `dotprod` target features as shown:
```Rust
#[target_feature(enable = "neon", enable = "dotprod")]
```
-Next, you will also need to add the `#!feature` for the module's code generation at the top of the file:
+Next, check that you have added the `#![feature]` attribute for the module's code generation at the top of the file:
```Rust
#![feature(stdarch_neon_dotprod)]
diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_index.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_index.md
index ba106dbd8..119efd780 100644
--- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_index.md
+++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_index.md
@@ -1,12 +1,12 @@
---
title: Deploy firmware on hybrid edge systems using containers
-draft: true
+
minutes_to_complete: 20
-who_is_this_for: This learning path is dedicated to developers interested in learning how to deploy software (embedded applications and firmware) onto other processors in the system, using Linux running on the application core.
+who_is_this_for: This learning path is for developers interested in learning how to deploy software (embedded applications and firmware) onto other processors in the system, using Linux running on the application core.
learning_objectives:
- - Deploy a containerized embedded application onto an Arm Cortex-M core from an Arm Cortex-A core using containerd and K3s.
+ - Deploy a containerized embedded application onto an Arm Cortex-M core from an Arm Cortex-A core using `containerd` and `K3s`.
- Build a firmware container image.
- Build the hybrid-runtime components. 
diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_next-steps.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_next-steps.md
index d37227d51..341846817 100644
--- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_next-steps.md
+++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_next-steps.md
@@ -1,5 +1,5 @@
---
-next_step_guidance: If you are interested in learning more about deploying IoT applications using Arm Virtual Hardware, please continue the learning path below
+next_step_guidance: If you are interested in learning more about deploying IoT applications using Arm Virtual Hardware, please continue to the learning path below
recommended_path: /learning-paths/embedded-systems/avh_balena/
diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_review.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_review.md
index bb558f3d8..58962768a 100644
--- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_review.md
+++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/_review.md
@@ -8,7 +8,7 @@ review:
- "False"
correct_answer: 2
explanation: >
- The firmware runs on a Cortex-M, from containerd's perspective it appears to runs as a normal Linux container.
+ The firmware runs on a Cortex-M core, but from `containerd`'s perspective it appears to run as a normal Linux container.
- questions:
question: >
@@ -19,9 +19,10 @@ review:
correct_answer: 2
explanation: >
Each embedded core can only run a single container at once.
+
- questions:
question: >
- Which command is used to interact with the hybrid-runtime from containerd?
+ Which command is used to interact with the hybrid-runtime from `containerd`?
answers:
- ctr run --runtime io.containerd.hybrid hybrid_app_imx8mp:latest test-container
- ctr run --runtime runc hybrid_app_imx8mp:latest test-container
diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/avh-setup.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/avh-setup.md
index 239b43cbe..e69385d8d 100644
--- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/avh-setup.md
+++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/avh-setup.md
@@ -10,24 +10,26 @@ layout: learningpathall
AVH offers a 30-day free trial to use.
- Create an account in [Arm Virtual Hardware](https://app.avh.arm.com/login)
-- Once logged in, you should see a similar screen as shown below.
-- Click on create device.
+- Once logged in, you should see a similar screen as shown in the image below. Click on **Create device**:
+
![create device alt-text#center](avh_images/avh1.png "Figure 1. Create device")
-- Click on Default Project.
+- Next, click on **Default Project**:
+
![select project alt-text#center](avh_images/avh2.png "Figure 2. Select project")
-- Select the i.MX 8M Plus device. The platform runs four Cortex-A53 processors.
+- Select the **i.MX 8M Plus** device. The platform runs four Cortex-A53 processors:
![Select the i.MX 8M Plus device alt-text#center](avh_images/avh3.png "Figure 3. Select device")
-- Select the Yocto Linux (full) (2.2.1) image. 
+
+- Select the Yocto Linux (full) (2.2.1) image and click **Select**:
+
![Select the Yocto Linux (full) (2.2.1) alt-text#center](avh_images/avh4.png "Figure 4. Select the Yocto Linux (full) (2.2.1) image")
-- Click Select
-![confirm create device alt-text#center](avh_images/avh5.png "Figure 5. Confirm create device")
+- Click on **Create device** (note that this could take a few minutes):
-
-- Click on Create device, this could take few minutes.
+![confirm create device alt-text#center](avh_images/avh5.png "Figure 5. Confirm create device")
- A console to Linux running on the Cortex-A should appear. Use “root” to login.
-- Find the IP address for the board model by running the command. This will be needed access the device using SSH.
+- Find the IP address for the board model by running the following command (this will be needed to access the device using SSH):
```bash
ip addr
```
@@ -39,14 +41,15 @@ The GUI on the right side may not work. You can safely ignore the error you see
### Useful AVH tips
+The **Connect** pane shows the different ways that you can connect to the simulated board. The IP address specified should be the same as that visible in the output of the `ip addr` command.
+
![AVH connect interface alt-text#center](avh_images/avh7.png "Figure 7. AVH connect interface")
-The “Connect” pane shows the different ways that you can connect to the simulated board. The IP address specified should be the same as that visible in the output of the `ip addr` command.
+**Quick Connect** lets you connect to the AVH model over SSH without having to use a VPN configuration. Similarly, you can substitute `scp` for `ssh` to copy files to and from the virtual device. In order to use Quick Connect, it is necessary to add your public key via the **Manage SSH keys here** link.
-“Quick Connect” lets you SSH to the AVH model without having to use a VPN configuration. Similarly, you can replace `ssh` for `scp` to copy files from and to the virtual device. In order to use Quick Connect, it is necessary to add your public key via the `Manage SSH keys here` link.
![Generate SSH key alt-text#center](avh_images/avh8.png "Figure 8. Generate SSH key")
-To generate an SSH key, you can run the following command on your machine.
+To generate an SSH key, you can run the following command on your machine:
```bash
ssh-keygen -t ed25519
```
@@ -54,7 +57,7 @@ ssh-keygen -t ed25519
## Download the pre-built hybrid-runtime
-Once your AVH model is set up, you can download the pre-built hybrid-runtime. This GitHub package contains the runtime and some necessary scripts.
+Once your AVH model is set up, you can download the pre-built hybrid-runtime. This GitHub package contains the runtime and some necessary scripts:
```console
wget https://github.com/smarter-project/hybrid-runtime/releases/download/v1.5/hybrid.tar.gz
@@ -72,12 +75,12 @@ If you want to build the hybrid-runtime on your own, instructions can be found i
For this learning path, there is also a pre-built lightweight Docker container image available on GitHub. You can use it for the `i.MX8M-PLUS-EVK` board. The container image contains a simple FreeRTOS hello-world application built using the NXP SDK.
-You can pull the pre-built image onto the AVH model by running the command.
+You can pull the pre-built image onto the AVH model by running the following command:
```console
ctr image pull ghcr.io/smarter-project/hybrid-runtime/hello_world_imx8mp:latest
```
-Make sure the container image was pulled successfully. 
An image with the name *ghcr.io/smarter-project/hybrid-runtime/hello_world_imx8mp:latest* should appear as an output.
+Make sure the container image was pulled successfully. An image with the name *ghcr.io/smarter-project/hybrid-runtime/hello_world_imx8mp:latest* should appear in the output of the following command:
```console
ctr image ls
diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/build-runtime.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/build-runtime.md
index 5d45b6b13..29377b468 100644
--- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/build-runtime.md
+++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/build-runtime.md
@@ -1,5 +1,5 @@
---
-title: (optional) Building the hybrid-runtime and container image
+title: Building the hybrid-runtime and container image (optional)
weight: 7
### FIXED, DO NOT MODIFY
layout: learningpathall
---
This section shows how you can build the hello world example on your Arm Linux host using a Docker container. Before getting started, you need to install Git, buildx and Docker on your Linux host. Follow the [Docker Engine install guide](/install-guides/docker/docker-engine/).
-Install Git by running.
+Install Git by running the following:
```bash
sudo apt install git
```
## Building the hybrid-runtime
-Pull the hybrid runtime GitHub repository.
+Clone the hybrid-runtime GitHub repository:
```bash
git clone https://github.com/smarter-project/hybrid-runtime.git
```
@@ -23,7 +23,7 @@ Navigate to the `hybrid-runtime/docker` directory and build the hybrid-runtime c
cd hybrid-runtime/docker
make all
```
-This builds the `hybrid-runtime` CLI and containerd `hybrid-shim`.
+This builds the `hybrid-runtime` CLI and the `containerd` `hybrid-shim`.
Navigate to the `hybrid_runtime/cortexm_console` directory and build the application:
```bash
@@ -37,7 +37,7 @@ You may have to be root to run Docker. Try adding `sudo` to the beginning of the
This builds the helper application, `cortexm_console`, that can capture output from a Cortex-M application. The binary is created in the directory.
-By default containerd will look in `/usr/local/bin` to find the hybrid runtime components. Therefore, copy the hybrid runtime components and the `cortexm_console` application into `/usr/local/bin` on the AVH model.
+By default, `containerd` will look in `/usr/local/bin` to find the hybrid-runtime components. Therefore, copy the hybrid-runtime components and the `cortexm_console` application into `/usr/local/bin` on the AVH model.
Copy the `containerd-shim-containerd-hybrid` to the board under `/usr/local/bin` using *SCP* and the IP address from before.
@@ -46,7 +46,7 @@ If you are using a VPN configured setup then you can use SCP directly. For examp
```bash
scp runtime/hybrid-shim/target/aarch64-unknown-linux-musl/debug/containerd-shim-containerd-hybrid root@10.11.0.1:/usr/local/bin/
scp cortexm_console/output/cortexm_console root@10.11.0.1:/usr/local/bin/
```
-Otherwise you can use the information from the “Quick Connect” section (substituting your particular -J argument). For example:
+Otherwise you can use the information from the **Quick Connect** section (substituting your particular -J argument). 
For example:
```bash
scp -J 95757722-1eb3-4e69-8cad-54f0400aa3d2@proxy.app.avh.arm.com runtime/hybrid-shim/target/aarch64-unknown-linux-musl/debug/containerd-shim-containerd-hybrid root@10.11.0.1:/usr/local/bin/
scp -J 95757722-1eb3-4e69-8cad-54f0400aa3d2@proxy.app.avh.arm.com cortexm_console/output/cortexm_console root@10.11.0.1:/usr/local/bin/
@@ -58,15 +58,15 @@ You will need to edit the above commands to fit your IP address and paths.
## Building the hello-world firmware container image
-Pull the runtime GitHub repository.
+Clone the runtime GitHub repository:
```bash
git clone https://github.com/smarter-project/hybrid-runtime.git
```
-Then run
+Then run the following:
```bash
cd hybrid-runtime/docker
make image
```
-This should build a docker image with the name `ghcr.io/smarter-project/hybrid-runtime/hello_world_imx8mp:latest`
+This should build a Docker image with the name `ghcr.io/smarter-project/hybrid-runtime/hello_world_imx8mp:latest`.
The `image.dockerfile` file pulls the NXP SDK and builds the hello world example present in the NXP SDK. Finally, the Dockerfile creates a new image, copies the hello world binary into it, sets the entrypoint and adds the correct labels. You can use this Dockerfile to run the example as shown in the section [Deploy firmware container using containerd](../containerd/).
diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd.md
index bcaa14124..823bc2429 100644
--- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd.md
+++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd.md
@@ -1,15 +1,15 @@
---
-title: Deploy firmware container using containerd
+title: Deploy firmware container using `containerd`
weight: 5
### FIXED, DO NOT MODIFY
layout: learningpathall
---
-Containerd is a cloud-native container runtime used to deploy workloads across an array of platforms.
+`containerd` is a cloud-native container runtime used to deploy workloads across an array of platforms.
## Hello World example
-Now that the container image has been pulled, you can try few of the commands from the OCI specification. The expected output is shown after each command.
+Now that the container image has been pulled, you can try a few of the commands from the OCI specification. The expected output is shown after each command.
Create and start the container:
```bash
@@ -34,11 +34,11 @@ ctr t ls
TASK PID STATUS
test 808 RUNNING
```
-The output from the hello-world application running on the Cortex-M can be seen in the AVH GUI by selecting “Cortex-M Console”
+The output from the hello-world application running on the Cortex-M can be seen in the AVH GUI by selecting **Cortex-M Console**:
![Cortex-M output alt-text#center](containerd1.png "Figure 1. 
Cortex-M output") -Check the container info: +Check the container info using the following command: ```bash ctr c info test ``` @@ -98,7 +98,7 @@ Stop the container: ```console ctr t kill test ``` -Check that the container was stopped: +Check that the container has stopped: ```console ctr t ls ``` @@ -106,7 +106,7 @@ ctr t ls TASK PID STATUS test 808 STOPPED ``` -Delete the container: +Now, delete the container: ```console ctr c rm test ``` @@ -115,20 +115,20 @@ ctr c rm test The SMARTER project offers an additional pre-built container image, available on GitHub for the i.MX8M-PLUS-EVK AVH model. The container image contains a FreeRTOS application built using the NXP SDK. It outputs a timestamp to the serial console output of the board. The application also sends the output to the `cortexm_console` helper application running under Linux on the board. -You can pull the pre-built image onto the AVH model using: +You can pull the pre-built image onto the AVH model using the following: ```console ctr image pull ghcr.io/smarter-project/smart-camera-hybrid-application/hybrid_app_imx8mp:latest ``` -Create and run the container: +Now, create and run the container: ```console ctr run --runtime io.containerd.hybrid ghcr.io/smarter-project/smart-camera-hybrid-application/hybrid_app_imx8mp:latest test2 ``` -The Cortex-M Console output will now show as below: +The Cortex-M Console output will now appear as per below: ![Cortex-M output alt-text#center](containerd2.png "Figure 2. Cortex-M output") -The output from the Cortex-M is also available under `/var/lib/hybrid-runtime//.log` +The output from the Cortex-M is also available under `/var/lib/hybrid-runtime//.log`: ```output cat /var/lib/hybrid-runtime/test2/test2.log Timestamp: 0 @@ -143,7 +143,7 @@ Timestamp: 8 Timestamp: 9 Timestamp: 10 ``` -When the container is deleted, the log file will also be removed as shown: +When the container is deleted, the log file will also be removed as shown below: ```console ctr t kill test2 ctr c rm test2 diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd1.png b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd1.png index 7452f3d8d..8dbecb972 100644 Binary files a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd1.png and b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd1.png differ diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd2.png b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd2.png index 260a68163..0696b2041 100644 Binary files a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd2.png and b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/containerd2.png differ diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/k3s.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/k3s.md index 1253cd9e1..ad1c1bdf9 100644 --- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/k3s.md +++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/k3s.md @@ -18,7 +18,7 @@ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=$INSTALL_K3S_EXEC sh -s - ``` This can take a few minutes to 
complete.
-Make sure K3s is running. A snippet of the expected output is shown below.
+Make sure K3s is running. A snippet of the expected output is shown below:
```bash
systemctl status k3s
```
@@ -33,28 +33,28 @@ systemctl status k3s
CGroup: /system.slice/k3s.service
`-2069 "/usr/local/bin/k3s agent"
```
-For things to work properly, you need to make K3s aware of the `hybrid-runtime` by configuring this with `containerd`. You find the config file with the K3s example YAML files in GitHub.
+For things to work properly, you need to make K3s aware of the `hybrid-runtime` by configuring this with `containerd`. You can find the config file with the K3s example YAML files in GitHub.
Download the K3s demo example YAML files:
```bash
wget https://github.com/smarter-project/hybrid-runtime/releases/download/v1.5/example.tar.gz
```
-Extract the files.
+Extract the files:
```bash
tar -xvf example.tar.gz
```
-Create a `containerd` directory under `/etc` and copy the config file there.
+Create a `containerd` directory under `/etc` and copy the config file there:
```bash
mkdir /etc/containerd
mv example/config.toml /etc/containerd/
```
-Restart containerd, and make sure that it's running.
+Restart `containerd`, and make sure that it's running:
```bash
systemctl restart containerd
systemctl status containerd
```
-If you run the `kubectl` command below, you will see that the node is not ready.
+If you run the `kubectl` command below, you will see that the node is not ready:
```bash
kubectl get nodes
```
NAME STATUS ROLES AGE VERSION
narsil NotReady control-plane,master 18m v1.29.6+k3s2
```
-To fix this, you need to apply a Container Network Interface (CNI). Run the command to use the "smarter_cni", and label the node.
+To fix this, you need to apply a Container Network Interface (CNI). Run the following commands to apply the `smarter_cni` and label the node:
```console
kubectl apply -f example/smarter_cni.yaml
kubectl label node narsil smarter.cni=deploy
```
-Re-run the kubectl command.
+Re-run the `kubectl` command:
```bash
kubectl get nodes
```
@@ -84,7 +84,7 @@ narsil Ready control-plane,master 24m v1.29.6+k3s2
The smarter camera demo is used as an example to show the capabilities of K3s.
-First, you need to set a `runtimeClass` in K3s. It allows you to select the container runtime we want to use.
+First, you need to set a `runtimeClass` in K3s. It allows you to select the container runtime you want to use:
```bash
kubectl apply -f example/runtime_class.yaml
```
@@ -108,8 +108,8 @@ spec:
image: ghcr.io/smarter-project/smart-camera-hybrid-application/hybrid_app_imx8mp:latest
imagePullPolicy: IfNotPresent
```
-There are two ways to check that the firmware is running.
-1. Run and observe the output.
+There are two ways to check that the firmware is running:
+1. Run the command below and observe the output (a pod with the name `example3` should be running):
```bash
kubectl get pods -A
```
```output
NAMESPACE NAME READY STATUS RESTARTS AGE
default example3 1/1 Running 0 6m57s
kube-system smarter-cni-wplzn 1/1 Running 3 (141m ago) 4h29m
```
-A pod with the name `example3` should be running.
2. Check the Cortex-M Console. If there is any output here, the firmware is running.
### Stop the demo
-To stop the demo, run the command shown below. The termination process can take a few minutes. 
+To stop the demo, run the command shown below (the termination process can take a few minutes): ```bash kubectl delete pod example3 --grace-period=0 --force ``` -Make sure the pod was terminated. -1. Go to the Cortex-M Console and check that there are no new outputs. +To make sure the pod was terminated, check the following: + +1. Go to the Cortex-M Console and check that there are no new outputs: ![Cortex-M output alt-text#center](k3s.png "Figure 1. Cortex-M output") -2. Check that the firmware is offline. +2. Check that the firmware is offline: ```bash cat /sys/class/remoteproc/remoteproc0/state ``` -Output: +The output should be as follows: ```output offline ``` -3. Make sure the created pod above was deleted. +3. Make sure the created pod above was deleted: ```bash kubectl get pods -A ``` @@ -150,13 +150,13 @@ kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system smarter-cni-wplzn 1/1 Running 3 (143m ago) 4h31m ``` -4. Make sure all the container resources were deleted. The command below should give no output. +4. Make sure all the container resources were deleted. The command below should give no output: ```console ls /var/lib/hybrid-runtime/ ``` ## Summary -The hybrid-runtime can be used to improve the experience for systems with multiple IPs on a single SoC. You now know how to use cloud tools such as K3s and containerd to deploy and run workloads on hybrid systems using the hybrid-runtime. +The hybrid-runtime can be used to improve the experience for systems with multiple IPs on a single SoC. You now know how to use cloud tools such as K3s and `containerd` to deploy and run workloads on hybrid systems using the hybrid-runtime. If you have an Arm Linux host, you can run the hello world example by following the instructions in the next section. diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/motivation.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/motivation.md index 489ecab19..694bdc736 100644 --- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/motivation.md +++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/motivation.md @@ -6,16 +6,16 @@ weight: 2 layout: learningpathall --- -## What is the hybrid-runtime? -The hybrid runtime enables the deployment of software onto other processors, such as Cortex-M/Cortex-R-based CPUs. The deployment is made possible by using cloud-native technologies in the system, with Linux running on the Cortex-A application cores. +## What is hybrid-runtime? +Hybrid-runtime enables the deployment of software onto other processors, such as Cortex-M or Cortex-R-based CPUs. The deployment is made possible by using cloud-native technologies in the system, with Linux running on the Cortex-A application cores. ## Example use cases Here are some examples where this functionality can be useful: -- System-wide firmware updates made easy, secure and controllable at scale. The hybrid runtime makes it possible to update what's running on the Cortex-M on-demand. For example to address bugs, security vulnerabilities, performance, or to update functionality. -- Partitioning applications over multiple IPs. It enables applications to be divided into different services, where each service runs on a different core, depending on its requirements. In a scenario where you want to preserve energy, one part can run on a Cortex-M while the main CPU is asleep. 
Then, once an event is detected, Cortex-A is woken up and can start running its part of the application. The parts can then be deployed and managed in uniform way.
-- Taking full advantage of our system's capabilities. This is useful in a system with boards that are idle for a majority of the time. By making it easy to access them, you can leverage existing cores on every edge node.
+- System-wide firmware updates made easy, secure, and controllable at scale. The hybrid-runtime makes it possible to update what's running on the Cortex-M on demand, for example, to address bugs, security vulnerabilities, or performance issues, or to update functionality.
+- Partitioning applications over multiple IPs. It enables applications to be divided into different services, where each service runs on a different core, depending on its requirements. In a scenario where you want to preserve energy, one part can run on a Cortex-M while the main CPU is asleep. Then, once an event is detected, Cortex-A is woken up and can start running its part of the application. The parts can then be deployed and managed in a uniform way.
+- Taking full advantage of the system's capabilities. This is useful in a system with boards that are idle for a majority of the time. By making it easy to access them, you can leverage existing cores on every edge node.
-More details on the runtime can be found on the [hybrid runtime GitHub page](https://github.com/smarter-project/hybrid-runtime.git).
+More details on the runtime can be found on the [hybrid-runtime GitHub page](https://github.com/smarter-project/hybrid-runtime.git).
diff --git a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/runtime-overview.md b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/runtime-overview.md
index 4fad435c3..fc2024efa 100644
--- a/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/runtime-overview.md
+++ b/content/learning-paths/embedded-systems/Cloud-Native-deployment-on-hybrid-edge-systems/runtime-overview.md
@@ -10,18 +10,18 @@ layout: learningpathall
The `hybrid-runtime` is a low-level Open Container Initiative (OCI) compatible runtime. It's written in Rust. There are three components that make up the runtime (shown as green in the figure below):
-1. The **Command Line interface (CLI)**, which follows the OCI specification requirements. The runtime provides an executable that supports an array of commands, using the following template
+1. The **Command Line Interface (CLI)**, which follows the OCI specification requirements. The runtime provides an executable that supports an array of commands, using the following template:
```console
$ runtime [global-options] [command-specific-options]
```
-The list commands as of the v1.2 [specification](https://github.com/opencontainers/runtime-spec): *create*, *start*, *run*, *delete*, *kill*, *state*, *logs*
+The list of commands, as per the v1.2 [specification](https://github.com/opencontainers/runtime-spec), is: `create`, `start`, `run`, `delete`, `kill`, `state`, `logs`
2. The **hybrid-runtime** provides the core functionality of each of the previously mentioned commands.
3. The **hybrid-shim** is a lightweight component that sits between the `hybrid-runtime` and `containerd`. It helps facilitate communication between both, handling tasks such as the container process management and keeping track of the status of the container.
-You can review the runtime high-level architecture. 
+You can review the runtime high-level architecture in the image below: ![hybrid runtime alt-text#center](hybrid.jpg "Figure 1. Hybrid runtime high-level architecture") -Now that you have an understanding of the hybrid-runtime, let's move on to the deployment part. \ No newline at end of file +Now that you have an understanding of the hybrid-runtime, let's move on to the deployment part. diff --git a/content/learning-paths/embedded-systems/docker/dockerfile.md b/content/learning-paths/embedded-systems/docker/dockerfile.md index 3eec92345..ee5f8cb2f 100644 --- a/content/learning-paths/embedded-systems/docker/dockerfile.md +++ b/content/learning-paths/embedded-systems/docker/dockerfile.md @@ -38,7 +38,7 @@ This file copies the installers to the Docker image. The exact filename(s) will Edit the Dockerfile as necessary (`ACfE` and `FVP` arguments), else edit on the build command line (see later). -Whilst installing the [compiler](/install-guides/armclang/) and [FVP library](/install-guides/fm_fvp/fvp/), the EULA(s) are silently accepted. Be sure that this is satisfactory for you. +While installing the [compiler](/install-guides/armclang/) and [FVP library](/install-guides/fm_fvp/fvp/), the EULA(s) are silently accepted. Be sure that this is satisfactory for you. You will need to edit the licensing portion of the file to match your internal license setup. See [Arm User-Based Licenses](/install-guides/license/) for more information. diff --git a/content/learning-paths/embedded-systems/intro/_index.md b/content/learning-paths/embedded-systems/intro/_index.md index a12ecc994..cf65eef22 100644 --- a/content/learning-paths/embedded-systems/intro/_index.md +++ b/content/learning-paths/embedded-systems/intro/_index.md @@ -1,7 +1,7 @@ --- title: Get started with Embedded Systems -minutes_to_complete: 10 +minutes_to_complete: 15 who_is_this_for: This is an introductory topic for software developers working on embedded systems and new to the Arm architecture. diff --git a/content/learning-paths/embedded-systems/universal-sbc-chassis/_index.md b/content/learning-paths/embedded-systems/universal-sbc-chassis/_index.md index bb8072bfe..4775d3658 100644 --- a/content/learning-paths/embedded-systems/universal-sbc-chassis/_index.md +++ b/content/learning-paths/embedded-systems/universal-sbc-chassis/_index.md @@ -6,9 +6,9 @@ who_is_this_for: This is an introductory topic for software developers and hobby minutes_to_complete: 120 learning_objectives: - - Acquire and print the required materials - - Assemble and install the universal SBC rack mount system in a 4U chassis - - Install single board computers in the racks + - Acquire and print the required materials. + - Assemble and install the universal SBC rack mount system in a 4U chassis. + - Install single board computers in the racks. prerequisites: - 3D printer diff --git a/content/learning-paths/laptops-and-desktops/win_arm64ec/_index.md b/content/learning-paths/laptops-and-desktops/win_arm64ec/_index.md index 88cfb339a..8d2aea5ae 100644 --- a/content/learning-paths/laptops-and-desktops/win_arm64ec/_index.md +++ b/content/learning-paths/laptops-and-desktops/win_arm64ec/_index.md @@ -1,7 +1,7 @@ --- title: Use Arm64EC with Windows 11 on Arm -minutes_to_complete: 20 +minutes_to_complete: 30 who_is_this_for: This is an introductory topic for software developers who want to use Arm64EC with Windows on Arm devices. 
diff --git a/content/learning-paths/laptops-and-desktops/windows_cicd_github/_index.md b/content/learning-paths/laptops-and-desktops/windows_cicd_github/_index.md index 647ee0091..ba121fa63 100644 --- a/content/learning-paths/laptops-and-desktops/windows_cicd_github/_index.md +++ b/content/learning-paths/laptops-and-desktops/windows_cicd_github/_index.md @@ -9,7 +9,7 @@ who_is_this_for: This is an introductory topic for software developers intereste learning_objectives: - Setup a CI/CD flow with GitHub Actions to use Windows on Arm as the self-hosted runner host - - Run a simple workflow + - Run a simple GitHub Actions workflow prerequisites: - Some familiarity with CI/CD concepts is assumed diff --git a/content/learning-paths/microcontrollers/avh_vio/_index.md b/content/learning-paths/microcontrollers/avh_vio/_index.md index 2cd8b2039..4fa0d0689 100644 --- a/content/learning-paths/microcontrollers/avh_vio/_index.md +++ b/content/learning-paths/microcontrollers/avh_vio/_index.md @@ -1,7 +1,7 @@ --- title: Implement an example Virtual Peripheral with Arm Virtual Hardware -minutes_to_complete: 15 +minutes_to_complete: 20 who_is_this_for: This is an introductory topic for software developers new to Arm Virtual Hardware and its features. diff --git a/content/learning-paths/microcontrollers/intro/_index.md b/content/learning-paths/microcontrollers/intro/_index.md index 16dd1a0f9..af7107f70 100644 --- a/content/learning-paths/microcontrollers/intro/_index.md +++ b/content/learning-paths/microcontrollers/intro/_index.md @@ -6,8 +6,8 @@ minutes_to_complete: 10 who_is_this_for: This is an introductory topic for software developers working on microcontroller applications and new to the Arm architecture. learning_objectives: - - Understand where the Arm architecture is used in microcontrollers - - Find microcontroller hardware to use for software development + - Understand where the Arm architecture is used in microcontrollers. + - Find microcontroller hardware to use for software development. prerequisites: - None diff --git a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/_index.md b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/_index.md index aaf7c2a88..04494f7a4 100644 --- a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/_index.md @@ -1,6 +1,6 @@ --- title: Code level Performance Analysis using the PMUv3 plugin -draft: true +draft: false minutes_to_complete: 60 who_is_this_for: Engineers who want to do C/C++ performance analysis by instrumenting code at the block level. diff --git a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/before-you-begin.md b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/before-you-begin.md index 1aff21b19..78f5346e0 100644 --- a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/before-you-begin.md +++ b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/before-you-begin.md @@ -64,7 +64,13 @@ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git The Linux kernel repository is large so it will take some time to download. -When the download is complete, build the Perf libraries, `libperf.a` and `libapi.a`: +Install the GNU compiler. 
If you are running on Ubuntu, you can run:
+
+```console
+sudo apt install build-essential -y
+```
+
+When the Linux source download is complete, build the Perf libraries, `libperf.a` and `libapi.a`:
```console
pushd linux/tools/lib/perf
diff --git a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/instrumentation.md b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/instrumentation.md
index f5ff7ff76..96856b087 100644
--- a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/instrumentation.md
+++ b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/instrumentation.md
@@ -15,7 +15,7 @@ So far you have the Linux kernel source tree and the PMUv3 plugin source code.
Next, create a third directory to learn how to integrate the PMUv3 plugin into an application.
```console
-mkdir test
+cd ../ ; mkdir test ; cd test
```
The instructions assume you have all three directories in parallel. If you have a different directory structure you may need to adjust the build commands to find the header files and libraries.
@@ -52,7 +52,7 @@ As an example, use a text editor to create a file `test1.c` in the `test` direct
#include "pmuv3_plugin_bundle.h"
#include "processing.h"
-#define VECTOR_SIZE 100
+#define VECTOR_SIZE 10000
void initialize_vectors(double vector_a[], double vector_b[], int size) {
for (int i = 0; i < size; i++) {
diff --git a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/instrumentation2.md b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/instrumentation2.md
index 95d7d6af0..597c4211f 100644
--- a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/instrumentation2.md
+++ b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/instrumentation2.md
@@ -31,7 +31,7 @@ You can repeat for additional segments, but getting the next segment number and
The example below collects separate for the `initialize_vectors()` function and the `calculate_result()` functions instead of collecting the data for both of them as in the previous example.
-Use a text editor to create a file test2.c in the test directory with the contents below.
+Use a text editor to create a file `test2.c` in the `test` directory with the contents below. 
```C
#include
@@ -41,7 +41,7 @@
#include "pmuv3_plugin_bundle.h"
#include "processing.h"
-#define VECTOR_SIZE 100
+#define VECTOR_SIZE 10000
void initialize_vectors(double vector_a[], double vector_b[], int size) {
for (int i = 0; i < size; i++) {
diff --git a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/plot.md b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/plot.md
index 3cb5fb405..ff997a539 100644
--- a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/plot.md
+++ b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/plot.md
@@ -18,10 +18,17 @@ If you are running on Ubuntu, install the following packages:
sudo apt install python-is-python3 python3-pip python3-venv -y
```
+Create and activate a Python virtual environment:
+
+```console
+python3 -m venv venv
+source venv/bin/activate
+```
+
Next, use Pip to install the required packages:
```console
- pip install pandas pyyaml matplotlib PyPDF2
+pip install pandas pyyaml matplotlib PyPDF2
```
Download the Python application code to plot and analyze results:
```console
git clone https://github.com/GayathriNarayana19/Performance_Analysis_Backend.git
```
-Navigate to your `test/` directory and use a text editor to create the required configuration file.
-
Copy the code below into a file named `config.yaml` in your `test/` directory which contains your CSV files.
```yaml
diff --git a/content/learning-paths/servers-and-cloud-computing/csp/_review.md b/content/learning-paths/servers-and-cloud-computing/csp/_review.md
index 8dcec9be2..da566a617 100644
--- a/content/learning-paths/servers-and-cloud-computing/csp/_review.md
+++ b/content/learning-paths/servers-and-cloud-computing/csp/_review.md
@@ -24,7 +24,7 @@ review:
- questions:
question: >
- What is the name of the Arm based processor that powers GCP?
+ What is the name of the Arm-based processor that powers AWS?
answers:
- "Axion"
- "Graviton"
diff --git a/content/learning-paths/servers-and-cloud-computing/csp/google.md b/content/learning-paths/servers-and-cloud-computing/csp/google.md
index 3c7da36a2..282fea34a 100644
--- a/content/learning-paths/servers-and-cloud-computing/csp/google.md
+++ b/content/learning-paths/servers-and-cloud-computing/csp/google.md
@@ -11,7 +11,7 @@ layout: "learningpathall"
As with most cloud service providers, Google Cloud offers a pay-as-you-use [pricing policy](https://cloud.google.com/pricing), including a number of [free](https://cloud.google.com/free/docs/free-cloud-features) services.
-This section is to help you get started with [Google Cloud Compute Engine](https://cloud.google.com/compute) compute services, using Arm-based virtual machines. Google Cloud offers two generations of Arm-based VMs, the latest one - C4A - powered by Google Axion processors (in public preview) and previous generation - T2A - powered by Ampere Altra processors (generally available). These VMs are general-purpose compute VMs, essentially your own personal computer in the cloud.
+This section helps you get started with [Google Cloud Compute Engine](https://cloud.google.com/compute) compute services, using Arm-based [Tau T2A](https://cloud.google.com/tau-vm) virtual machines. This is a general-purpose compute platform, essentially your own personal computer in the cloud. 
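+
+If you prefer the command line, you can also create a T2A VM with a single `gcloud` command from Cloud Shell. The sketch below is illustrative only; the instance name, zone, and image family are example values, so adapt them to your project:
+
+```console
+# Create a 1-vCPU Tau T2A (Arm-based) VM from an Arm64 Ubuntu image (example values)
+gcloud compute instances create my-arm-vm \
+    --machine-type=t2a-standard-1 \
+    --zone=us-central1-a \
+    --image-family=ubuntu-2204-lts-arm64 \
+    --image-project=ubuntu-os-cloud
+```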
Detailed instructions are available in the Google Cloud [documentation](https://cloud.google.com/compute/docs/instances). @@ -23,7 +23,7 @@ If using an organization's account, you will likely need to consult with your in ## Browse for an appropriate instance -Google Cloud offers a wide range of instance types, covering all performance (and pricing) points. For an overview of the Tau T2A instance types, see the [General-purpose machine family](https://cloud.google.com/compute/docs/general-purpose-machines#t2a_machines) overview. The C4A instances are in public preview and can be requested by following this link. +Google Cloud offers a wide range of instance types, covering all performance (and pricing) points. For an overview of the Tau T2A instance types, see the [General-purpose machine family](https://cloud.google.com/compute/docs/general-purpose-machines#t2a_machines) overview. Also note which [regions](https://cloud.google.com/compute/docs/regions-zones#available) these servers are available in. @@ -55,11 +55,9 @@ To view the latest information on which available regions and zones support Arm- ### Machine configuration -Note: If you have signed up for public preview and have access to `C4A` instances, you should be able to see those VMs in the list. +Select `T2A` from the `Series` pull-down menu. Then select an appropriate `Machine type` configuration for your needs. -Select `C4A` from the `Series` pull-down menu. Then select an appropriate `Machine type` configuration for your needs. - -![alt-text #center](images/gcp_instance_new.png "Select an appropriate C4A machine type") +![google5 #center](images/gcp_instance.png "Select an appropriate T2A machine type") ### Boot disk configuration diff --git a/content/learning-paths/servers-and-cloud-computing/gcp/jump_server.md b/content/learning-paths/servers-and-cloud-computing/gcp/jump_server.md index 9c67fa700..27c8c3604 100644 --- a/content/learning-paths/servers-and-cloud-computing/gcp/jump_server.md +++ b/content/learning-paths/servers-and-cloud-computing/gcp/jump_server.md @@ -69,7 +69,7 @@ resource "google_project_iam_member" "project" { resource "google_compute_instance" "bastion_host" { project = var.project name = "bastion-vm" - machine_type = "c4a-standard-1" + machine_type = "t2a-standard-1" zone = var.zone tags = ["public"] boot_disk { @@ -91,7 +91,7 @@ resource "google_compute_instance" "bastion_host" { resource "google_compute_instance" "private" { project = var.project name = "bastion-private" - machine_type = "c4a-standard-1" + machine_type = "t2a-standard-1" zone = var.zone allow_stopping_for_update = true tags = ["private"] diff --git a/content/learning-paths/servers-and-cloud-computing/gcp/terraform.md b/content/learning-paths/servers-and-cloud-computing/gcp/terraform.md index 8e14a0727..63a462294 100644 --- a/content/learning-paths/servers-and-cloud-computing/gcp/terraform.md +++ b/content/learning-paths/servers-and-cloud-computing/gcp/terraform.md @@ -39,7 +39,7 @@ provider "google" { resource "google_compute_instance" "vm_instance" { name = "instance-arm" - machine_type = "c4a-standard-2" + machine_type = "t2a-standard-1" boot_disk { initialize_params { diff --git a/content/learning-paths/servers-and-cloud-computing/gke-multi-arch/_index.md b/content/learning-paths/servers-and-cloud-computing/gke-multi-arch/_index.md index bf965b2da..4188422e3 100644 --- a/content/learning-paths/servers-and-cloud-computing/gke-multi-arch/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/gke-multi-arch/_index.md @@ -1,12 
+1,12 @@ --- -title: Learn how to migrate an x86 application to multi-architecture with Arm-based Google Axion processors on GKE +title: Learn how to migrate an x86 application to multi-architecture with Arm on Google Kubernetes Engine (GKE) minutes_to_complete: 30 who_is_this_for: This is an advanced topic for software developers who are looking to migrate their existing x86 containerized applications to Arm learning_objectives: - - Add Arm-based (Google Axion) nodes to an existing x86-based GKE cluster + - Add Arm-based nodes to an existing x86-based GKE cluster - Rebuild an x86-based application to make it multi-arch and run on Arm - Learn how to add taints and tolerations to GKE clusters to schedule application pods on architecture-specific nodes - Run a multi-arch application across multiple architectures on a single GKE cluster diff --git a/content/learning-paths/servers-and-cloud-computing/gke-multi-arch/how-to-1.md b/content/learning-paths/servers-and-cloud-computing/gke-multi-arch/how-to-1.md index d5b865fc5..adff9915d 100644 --- a/content/learning-paths/servers-and-cloud-computing/gke-multi-arch/how-to-1.md +++ b/content/learning-paths/servers-and-cloud-computing/gke-multi-arch/how-to-1.md @@ -8,7 +8,7 @@ layout: learningpathall ## Migrate an existing x86-based application to run on Arm-based nodes in a single GKE cluster -Google Kubernetes Engine (GKE) supports hybrid clusters with x86 and Arm based nodes. The Arm-based nodes can be deployed on the `C4A` family of virtual machines. The `C4A` virtual machines are powered by Google Axion processors, based on Arm Neoverse V2. +Google Kubernetes Engine (GKE) supports hybrid clusters with x86 and Arm-based nodes. The Arm-based nodes can be deployed on the `Tau T2A` family of virtual machines. The `Tau T2A` virtual machines are powered by Ampere Altra Arm-based processors. ## Before you begin @@ -102,13 +102,13 @@ Hello from NODE:gke-multi-arch-cluster-default-pool-45537239-q83v, POD:x86-hello ## Add Arm-based nodes to your GKE cluster -Use the following command to add an Arm-based node pool with VM type `c4a-standard-2` to your GKE cluster: +Use the following command to add an Arm-based node pool with VM type `t2a-standard-2` to your GKE cluster: ```console gcloud container node-pools create arm-pool \ --cluster $CLUSTER_NAME \ --zone $ZONE \ - --machine-type=c4a-standard-2 \ + --machine-type=t2a-standard-2 \ --num-nodes=3 ``` After the Arm-based nodes are successfully added to the cluster, run the following command to check if both types of nodes show up in the cluster: diff --git a/content/learning-paths/servers-and-cloud-computing/java-on-axion/1-create-instance.md b/content/learning-paths/servers-and-cloud-computing/java-on-axion/1-create-instance.md index b1ef26459..7d4604d0f 100644 --- a/content/learning-paths/servers-and-cloud-computing/java-on-axion/1-create-instance.md +++ b/content/learning-paths/servers-and-cloud-computing/java-on-axion/1-create-instance.md @@ -1,5 +1,5 @@ --- -title: Create Arm-based VM instance with Google Axion CPU +title: Create an Arm-based VM instance with Google Axion CPU weight: 2 ### FIXED, DO NOT MODIFY @@ -8,7 +8,7 @@ layout: learningpathall ## Create an Axion instance -Axion is Google's first Arm-based CPU, designed with the Armv9 Neoverse N2 architecture. Created specifically for the data center, Axion delivers industry-leading performance and energy efficiency. +Axion is Google’s first Arm-based server processor, built using the Armv9 Neoverse V2 CPU.
Created specifically for the data center, Axion delivers industry-leading performance and energy efficiency. {{% notice Note %}} The Axion instance type (C4A) is currently in public preview. A GA (General Availability) release will happen in the coming months. @@ -22,7 +22,7 @@ If you have never used the Google Cloud Platform before, please review [Getting #### Open and configure the Google Cloud Shell Editor -The Cloud Shell Editor has the gcloud CLI pre-installed, and is the quickest way to access a terminal with the gcloud CLI. It can be found at [shell.cloud.google.com](https://shell.cloud.google.com/). +The gcloud CLI is pre-installed in Cloud Shell Editor, which is the quickest way to access a terminal with the gcloud CLI. It can be found at [shell.cloud.google.com](https://shell.cloud.google.com/). Once the shell is available, configure it to use your Google Cloud project ID: diff --git a/content/learning-paths/servers-and-cloud-computing/java-on-axion/2-deploy-java.md b/content/learning-paths/servers-and-cloud-computing/java-on-axion/2-deploy-java.md index 2bfb1c5aa..4dc7f2b40 100644 --- a/content/learning-paths/servers-and-cloud-computing/java-on-axion/2-deploy-java.md +++ b/content/learning-paths/servers-and-cloud-computing/java-on-axion/2-deploy-java.md @@ -14,7 +14,7 @@ Now that you have an Axion instance running Ubuntu 24.04, you can SSH into it vi This will bring up a separate window with a shell connected to your instance. -Java is not yet installed on this Ubuntu image, so you'll want to install Java. First update `apt`: +This Ubuntu image does not include Java, so you need to install it. First update `apt`: ```bash sudo apt update @@ -33,7 +33,7 @@ Check to ensure that the JRE is properly installed: java -version ``` -The output will look like this: +Your output will look like this: ```bash openjdk version "21.0.3" 2024-04-16 @@ -93,4 +93,4 @@ Once the application is running, you can open the web app in a web browser by vi http://[EXTERNAL IP]:8080 ``` -Where `[EXTERNAL IP]` is the value you obtained in the [last section](../1-create-instance/#obtain-the-ip-of-your-instance). \ No newline at end of file +Where `[EXTERNAL IP]` is the value you obtained in the [last section](../1-create-instance/#obtain-the-ip-of-your-instance). diff --git a/content/learning-paths/servers-and-cloud-computing/java-on-axion/3-test-optimization-flags.md b/content/learning-paths/servers-and-cloud-computing/java-on-axion/3-test-optimization-flags.md index 1f202d708..53bc2c866 100644 --- a/content/learning-paths/servers-and-cloud-computing/java-on-axion/3-test-optimization-flags.md +++ b/content/learning-paths/servers-and-cloud-computing/java-on-axion/3-test-optimization-flags.md @@ -12,7 +12,7 @@ Now that you've built and deployed the Spring Petclinic application, you can use ## Run performance tests with jmeter -The spring-petclinic repo comes with a jmx file that can be used by the jmeter application to test spring-petclinic performance. +The spring-petclinic repo includes a jmx file that you can use with the jmeter application to test spring-petclinic performance. 
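Once jmeter is installed (the installation steps follow), a test run is typically started in non-GUI mode by pointing jmeter at the test plan. The following is a minimal sketch; the exact location of the jmx file inside the spring-petclinic repo is illustrative rather than confirmed:

```console
jmeter -n -t src/test/jmeter/petclinic_test_plan.jmx -l results1.jtl
```

Here `-n` runs jmeter without the GUI, `-t` names the test plan, and `-l` writes the raw results to `results1.jtl`, which the summary steps below parse.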
To install jmeter, first open a new ssh terminal to your instance (so that you don't interrupt the running spring-petclinic application in your existing terminal window) and run: @@ -23,7 +23,7 @@ sudo mv apache-jmeter-5.6.3 /opt/jmeter sudo ln -s /opt/jmeter/bin/jmeter /usr/local/bin/jmeter ``` -to test that jmeter was installed correctly, run +To test that jmeter was installed correctly, run ```bash jmeter --version @@ -49,17 +49,16 @@ This file will contain tens of thousands of rows of results, but you can parse i jmeter -g results1.jtl -o ./summary_report1 ``` -This command will create an output directory called `summary_report1`, which will contain a file called `statistics.json` with summary statistics. - +This command creates an output directory called `summary_report1`, which contains a file called `statistics.json` with summary statistics. ### Best practices for optimizing your Java application -There are a large number of Java flags that can alter runtime performance of your applications. Here are some examples: +Many Java flags can alter runtime performance of your applications. Here are some examples: 1. `-XX:-TieredCompilation`: This flag turns off intermediate compilation tiers. This can help if you've got a long-running application that has predictable workloads, and/or you've observed that the warmup period doesn't significantly impact overall performance. 2. `-XX:ReservedCodeCacheSize` and `-XX:InitialCodeCacheSize`: You can increase these values if you see warnings about code cache overflow in your logs. You can decrease these values if you're in a memory-constrained environment, or your application doesn't use much compiled code. The only way to determine optimal values for your application is to test. -Your Petclinic application is a good candidate for the `-XX:-TieredCompilation` flag, because it is long-running and has predictable workloads. To test this, stop the Petclinic application and re-run the jar with +Your Petclinic application is a good candidate for the `-XX:-TieredCompilation` flag because it is long-running and has predictable workloads. To test this, stop the Petclinic application and re-run the jar with ```bash java -XX:-TieredCompilation -jar target/*.jar @@ -79,7 +78,7 @@ jmeter -g results2.jtl -o ./summary_report2 And then compare the contents of `summary_report1/statistics.json` to the contents of `summary_report2/statistics.json`. In the `Total` data structure you'll notice that the average response time (`meanResTime`) will be approximately 15% lower for the new run! A quick way to extract these values with `jq` is sketched below. -To list and explore all of the available tuning flags for your JVM, run +Run the following command to list and explore all of the available tuning flags for your JVM: ```bash java -XX:+PrintFlagsFinal -version @@ -95,4 +94,4 @@ Some very useful Arm-specific flags are: * `UseNeon`: When true, this enables the use of an advanced single instruction multiple data (SIMD) architecture extension that vastly improves use cases such as multimedia encoding/decoding, user interface, 2D/3D graphics and gaming. * `UseSVE`: Enables the Scalable Vector Extension (SVE), which improves vector operation performance. -The performance tuning parameters are dependent on the application workload and its implementation. Most of the Java based workloads can be migrated to Axion with little to no changes required. \ No newline at end of file +The performance tuning parameters are dependent on the application workload and its implementation.
Most of the Java-based workloads can be migrated to Axion with little to no changes required. diff --git a/content/learning-paths/servers-and-cloud-computing/java-on-axion/_index.md b/content/learning-paths/servers-and-cloud-computing/java-on-axion/_index.md index dd19b9f6e..76e9aab30 100644 --- a/content/learning-paths/servers-and-cloud-computing/java-on-axion/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/java-on-axion/_index.md @@ -1,18 +1,18 @@ --- -title: Running Java applications on Google Axion processors +title: Run Java applications on Google Axion processors draft: true minutes_to_complete: 20 who_is_this_for: This is an introductory topic for software developers who want to learn how to run their Java-based applications on Arm-based Google Axion processors in Google Cloud. Most Java applications will run on Axion with no changes needed, but there are optimizations that can help improve application performance on Axion. learning_objectives: - - Create an Arm-based VM instance with Axion + - Create an Arm-based VM instance with Google Axion CPU - Deploy a Java application on Axion - Understand Arm performance for different JDK versions - Test common performance optimization flags prerequisites: - - A [Google Cloud](https://cloud.google.com/) account with access to Axion based instances(C4A). + - A [Google Cloud](https://cloud.google.com/) account with access to Axion-based instances (C4A). author_primary: Joe Stech diff --git a/content/learning-paths/servers-and-cloud-computing/java-on-axion/_review.md b/content/learning-paths/servers-and-cloud-computing/java-on-axion/_review.md index 05f1cf187..a6f535674 100644 --- a/content/learning-paths/servers-and-cloud-computing/java-on-axion/_review.md +++ b/content/learning-paths/servers-and-cloud-computing/java-on-axion/_review.md @@ -20,6 +20,19 @@ review: correct_answer: 2 explanation: > Turning off tiered compilation is best for long-running applications with predictable workloads. + - questions: + question: > + What is the purpose of running the following command when creating an Axion instance on Google Cloud Platform? + + gcloud compute instances create test-app-instance --image-family=ubuntu-2404-lts-arm64 --image-project=ubuntu-os-cloud --machine-type=c4a-standard-2 --scopes userinfo-email,cloud-platform --zone [YOUR ZONE] --tags http-server + answers: + - To configure the Google Cloud project ID. + - To create a new virtual machine instance with specific configurations. + - To configure firewall rules to allow HTTP traffic on port 8080. + - To obtain the external IP address of the virtual machine instance. + correct_answer: 2 + explanation: > + This is a gcloud CLI command to create a new Ubuntu virtual machine instance.
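Returning to the comparison step in the optimization-flags section above, you can pull the headline numbers straight out of the two summary reports rather than reading the raw JSON by eye. This is a minimal sketch; it assumes `jq` is available on the instance (install it with your package manager if it is not) and that the reports use the `Total`/`meanResTime` layout described above:

```console
jq '.Total.meanResTime' summary_report1/statistics.json
jq '.Total.meanResTime' summary_report2/statistics.json
```

If the `-XX:-TieredCompilation` run behaves as described, the second value should come out roughly 15% lower than the first.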
diff --git a/content/learning-paths/servers-and-cloud-computing/mysql/_index.md b/content/learning-paths/servers-and-cloud-computing/mysql/_index.md index 35a532d19..0b757409f 100644 --- a/content/learning-paths/servers-and-cloud-computing/mysql/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/mysql/_index.md @@ -17,6 +17,7 @@ author_primary: Jason Andrews ### Tags skilllevels: Introductory subjects: Databases +cloud_service_providers: Google Cloud armips: - Neoverse operatingsystems: diff --git a/content/learning-paths/servers-and-cloud-computing/processwatch/before-you-begin.md b/content/learning-paths/servers-and-cloud-computing/processwatch/before-you-begin.md index 3f0507f08..258307d65 100644 --- a/content/learning-paths/servers-and-cloud-computing/processwatch/before-you-begin.md +++ b/content/learning-paths/servers-and-cloud-computing/processwatch/before-you-begin.md @@ -18,7 +18,7 @@ You need to install some linux packages before you start building Process Watch To install these dependencies on an Ubuntu 20.04 and later machine, run: ```console sudo apt-get update -sudo apt-get install libelf-dev cmake clang llvm llvm-dev +sudo apt-get install libelf-dev cmake clang llvm llvm-dev -y ``` ## Clone Process Watch Repository diff --git a/content/learning-paths/servers-and-cloud-computing/processwatch/running-processwatch.md b/content/learning-paths/servers-and-cloud-computing/processwatch/running-processwatch.md index 6e2fc6022..f2f530e2f 100644 --- a/content/learning-paths/servers-and-cloud-computing/processwatch/running-processwatch.md +++ b/content/learning-paths/servers-and-cloud-computing/processwatch/running-processwatch.md @@ -51,7 +51,7 @@ By default, Process Watch: ## Default Process Watch output You can run Process Watch with no arguments: -```output +```bash sudo ./processwatch ``` diff --git a/content/learning-paths/servers-and-cloud-computing/redis/_index.md b/content/learning-paths/servers-and-cloud-computing/redis/_index.md index 84bc62c5a..625bd30ed 100644 --- a/content/learning-paths/servers-and-cloud-computing/redis/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/redis/_index.md @@ -18,6 +18,7 @@ author_primary: Elham Harirpoush ### Tags skilllevels: Introductory subjects: Databases +cloud_service_providers: Google Cloud armips: - Neoverse operatingsystems: diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/1-ray-tracing.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/1-ray-tracing.md index 68f049029..1c6b7447d 100644 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/1-ray-tracing.md +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/1-ray-tracing.md @@ -8,11 +8,11 @@ layout: learningpathall ## Overview -In 2023, Arm developed a demo to showcase the new frontier of next-generation graphics technologies of the **Immortalis** GPU via the Unreal Lumen rendering system. If you are not familiar with Lumen and global illumination, please go through this [learning path](/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/) before proceeding. +In 2023, Arm developed a demo to showcase the new frontier of next-generation graphics technologies of the **Immortalis** GPU via the Unreal Lumen rendering system. 
If you are not familiar with Lumen and global illumination, please review this [learning path](/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/) before proceeding. The demo is named **Steel Arms**. Created with Unreal Engine 5.3, Steel Arms brings desktop-level Bloom, Motion Blur, and Depth of Field (DOF) effects, alongside Physically Based Rendering (PBR), to smartphones. -The following screenshots are from scenes in Steel Arms, powered by Unreal Lumen. Several optimization tips and techniques were used for the development of Steel Arms, for achieving the best performance with Lumen. This article will start with an introduction to ray tracing and then cover the best practices for hardware ray tracing in Lumen. +The following screenshots are from scenes in **Steel Arms**, which is powered by Unreal Lumen. Several optimization tips and techniques were used in the development of **Steel Arms** to achieve the best performance with Lumen. This learning path will start with an introduction to ray tracing and then cover the best practices for hardware ray tracing in Lumen. ![](images/Garage.png) @@ -20,7 +20,7 @@ The following screenshots are from scenes in Steel Arms, powered by Unreal Lumen ![](images/Garage2.png) -## What is Ray Tracing -Ray tracing is a rendering technique used in computer graphics to simulate the way light interacts with objects in a scene. Essentially, developers can cast a ray in any direction and find the closest intersection between the ray and scene geometries. Arm has implemented this ray tracing technique in hardware to accelerate the speed of ray traversal since the Immortalis-G715 GPU. +## What is Ray Tracing? +Ray tracing is a rendering technique used in computer graphics to simulate the way light interacts with objects in a scene. Essentially, developers can cast a ray in any direction and find the closest intersection between the ray and scene geometries. Arm has implemented this ray tracing technique in hardware to accelerate the speed of ray traversal, starting with the Immortalis-G715 GPU. To accelerate the speed of ray traversal, the scene geometry data needs to be organized into a data structure called an **Acceleration Structure**. When finding intersections between rays and scene geometries, the hardware traverses the acceleration structure to quickly locate the intersections. Therefore, the acceleration structure is critical to the performance of hardware ray tracing. The next topic will explain acceleration structures in more detail. diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/2-acceleration-structure.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/2-acceleration-structure.md index aca123b76..c511ba3cd 100644 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/2-acceleration-structure.md +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/2-acceleration-structure.md @@ -6,27 +6,25 @@ layout: learningpathall --- -## What is an Acceleration Structure +## What is an Acceleration Structure? An acceleration structure is a data structure designed to improve ray traversal speed on ray tracing hardware. It uses a hierarchical tree to store the geometry data of the scene, minimizing the number of hit tests between rays and geometry.
Ray tracing hardware typically utilizes this structure to quickly identify the triangles that intersect with a ray. -As shown in Figure 1, the scene is divided into several smaller volumes, and the acceleration structure stores the hierarchical tree of all these smaller volumes. During ray traversal, the hardware can use this structure to quickly locate the volumes that intersect with a ray. Subsequently, the hardware can find the intersections between the ray and the triangles within those volumes. +As shown in the image below, the scene is divided into smaller boxes, and the acceleration structure stores the hierarchical tree of all these boxes. During ray traversal, the hardware can use this structure to quickly locate the boxes that intersect with a ray. The hardware can then find the intersections between the ray and the triangles within those boxes. ![](images/as2.png "Figure 1. The acceleration structure used to represent a scene.") - -In a typical game scene, there are often many duplicated objects that share the same geometry data but have different instance attributes, such as position or color. Therefore, the acceleration structure is divided into two levels: the Top Level Acceleration Structure (TLAS) and the Bottom Level Acceleration Structure (BLAS), as shown in Figure 2. +In a typical game scene, there are often many duplicated objects that share the same geometry data but have different instance attributes, such as position or color. Therefore, the acceleration structure is divided into two levels: the Top Level Acceleration Structure (TLAS) and the Bottom Level Acceleration Structure (BLAS), as shown below. ![](images/as.png "Figure 2. The acceleration structure tree.") ### TLAS -The TLAS stores only the instancing data of the BLAS, such as the transform data of each instance. It does not store any geometry data; instead, it references the relevant BLAS. This approach allows the hardware to save memory by storing a single set of geometry data for many duplicated objects. +The TLAS only stores the instancing data of the BLAS, such as the transform data of each instance. It does not store any geometry data. Instead, it references the relevant BLAS. This approach allows the hardware to save memory by storing a single set of geometry data for many duplicated objects. ### BLAS -The BLAS stores the geometry data and hierarchical bounding volumes of the scene. Multiple instances in the TLAS can point to a single BLAS. - +The BLAS stores the geometry data and hierarchical bounding boxes of the scene. Multiple instances in the TLAS can point to a single BLAS. diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/3-lumen-general.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/3-lumen-general.md deleted file mode 100644 index 0edb8939e..000000000 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/3-lumen-general.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: Lumen General Setting Optimizations -weight: 4 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -To use Lumen in your game scene, you can add a **Post Process Volume** actor. Within the details panel of **Post Process Volume** actor, there are several options you can tweak. During the development of our Lumen content, **“Steel Arms”**, we found a few recommended values for these options that are suitable for Android devices.
Figures 1 and 2 show all the option values we used in **“Steel Arms”**. - -## Global Illumination Settings -![](images/gl-setting.png "Figure1. These global illumination parameters are used in our Lumen content - Steel Arms.") - -### Lumen Scene Detail -• Higher values ensure smaller objects contribute to Lumen lighting but increase GPU cost. - -### Final Gather Quality -• Controls the density of the screen probes; higher values increase GPU cost. - -• **1.0** strikes a good balance between performance and quality for mobile games. - -### Max Trace Distance -• Controls how far the ray tracing will go; keeping it small decreases GPU cost. - -• Do not set it larger than the size of the scene. - -• Smaller values can also reduce ray incoherence. - -### Scene Capture Cache Resolution Scale -• Controls the surface cache resolution; smaller values save memory. - -### Lumen Scene Lighting Update Speed -• Can be kept low if lighting changes are slow to save GPU cost. - -• **0.5 ~ 1.0** strikes a good balance between performance and quality for mobile games. - -### Final Gather Lighting Update Speed -• Can be kept low if slow lighting propagation is acceptable. - -• **0.5 ~ 1.0** strikes a good balance between performance and quality for mobile games. - - ## Lumen Reflection Settings -![](images/reflection-setting.png "Figure 2. These reflection parameters are used in our Lumen content - Steel Arms.") - -### Reflection Quality -• Controls the reflection tracing quality, essentially the resolution of the reflection. - -### Ray Lighting Mode -• The default mode is `Surface Cache`, which reuses surface cache data for reflection. - -• `Hit Lighting` mode is available when using hardware ray tracing; it evaluates direct lighting instead of using the surface cache. - -• `Hit Lighting` mode offers higher quality but at a higher GPU cost. - -• `Hit Lighting` mode can reflect lighting of skinned meshes, which `Surface Cache` mode cannot. - -• `Hit Lighting` mode is not supported yet on mobile device. - -• `Surface Cache` mode is recommended for mobile games. - -### Max Reflection Bounces -• Controls the number of reflection bounces; higher values increase GPU cost. - -• **1** is recommended for mobile games. \ No newline at end of file diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/4-add-object.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/4-add-object.md index d7bbdc4cd..df1bbac87 100644 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/4-add-object.md +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/4-add-object.md @@ -1,14 +1,14 @@ --- title: Only Add Important Objects into Ray Tracing -weight: 5 +weight: 4 ### FIXED, DO NOT MODIFY layout: learningpathall --- -The acceleration structure stores all geometry data for ray traversal. This means that ray traversal can be faster if there is less geometry data in the acceleration structure. The first optimization is to remove unnecessary geometry data from the acceleration structure. +As explained in the previous section, the acceleration structure stores all geometry data for ray traversal. This means that ray traversal can be faster if there is less geometry data in the acceleration structure. The first optimization is to remove unnecessary geometry data from the acceleration structure. -Exclude actors that do not contribute to lighting from ray tracing. 
Additionally, exclude small actors from ray tracing since they contribute very little to the final lighting and may cause noise in indirect lighting. In the actor details panel, uncheck `Visible in Ray Tracing` to exclude the actor from ray tracing. +It's important to exclude actors that do not contribute to lighting from ray tracing. Additionally, exclude small actors from ray tracing since they contribute very little to the final lighting and may cause noise in indirect lighting. To do this, in the actor details panel, uncheck **Visible in Ray Tracing** to exclude these actors from ray tracing. ![](images/add_object.png) diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/5-instancing.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/5-instancing.md index 520736164..79b62019b 100644 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/5-instancing.md +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/5-instancing.md @@ -1,21 +1,24 @@ --- title: Take Full Advantage of Instancing -weight: 6 +weight: 5 ### FIXED, DO NOT MODIFY layout: learningpathall --- -Instanced actors can share the same geometry data in the BLAS of the acceleration structure, thereby saving memory usage and increasing cache hits. To achieve the best performance and memory efficiency when using hardware ray tracing, you should use object instancing as much as possible. +The next optimization concerns instanced actors. Instanced actors can share the same geometry data in the BLAS of the acceleration structure, thereby saving memory usage and increasing cache hits. To achieve the best performance and memory efficiency when using hardware ray tracing, you should use object instancing as much as possible. -You can also use the `Picker` view under the `Ray Tracing Debug` options in the Unreal Editor to check the instancing status of the acceleration structure. Here are the steps for checking the instancing status in Unreal Editor: +You can use the **Picker** view under the **Ray Tracing Debug** options in the Unreal Editor to check the instancing status of the acceleration structure. Here are the steps: + +1. Use the command `r.RayTracing.Debug.PickerDomain 1` to select instance mode for the picker: -1. Use command `r.RayTracing.Debug.PickerDomain 1` to select instance mode for picker. ![](images/picker-command.png) -2. Select `Picker` view under `Ray Tracing Debug` on viewport of Unreal editor. +2. In the Unreal editor, under **Ray Tracing Debug**, select **Picker**: + ![](images/picker-view.png) 3. Use the mouse cursor to select the instance you want to check. The acceleration structure information for this instance will be displayed on the screen.
Use the detailed information under **[BLAS]** to verify if two instances share the same BLAS data: + ![](images/blas.png) diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/6-optimize-as.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/6-optimize-as.md index 5dce3247a..6de4a6ade 100644 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/6-optimize-as.md +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/6-optimize-as.md @@ -1,6 +1,6 @@ --- title: Optimize Acceleration Structure -weight: 7 +weight: 6 ### FIXED, DO NOT MODIFY layout: learningpathall @@ -8,14 +8,14 @@ layout: learningpathall To achieve the best performance in ray traversal, it is essential to minimize the overlap of meshes. Overlapping meshes increase the cost of ray traversal because the hardware needs to check more meshes, many of which may not intersect with the ray. Therefore, it is crucial to ensure that the bounding box of each actor covers the least amount of empty space. -You can use the `Instance Overlap` view under the `Ray Tracing Debug` options in the Unreal Editor to check the overlap in your level. +You can use the **Instance Overlap** view under the **Ray Tracing Debug** options in the Unreal Editor to check the overlap in your level, as shown below: ![Instance Overlap View](images/instance-overlap.png) -The color in the view indicates the degree of overlap. The closer the color is to yellow, the more overlap there is. In the image, you can see a large yellow area in the middle of the screen, indicating significant overlap. This is caused by the mesh that combines all four pillars of the boxing ring and covers the empty area over the stage. +The color in the image shown below indicates the degree of overlap. The closer the color is to yellow, the more overlap there is. In the image, you can see a large yellow area in the middle of the screen, indicating significant overlap. This is caused by the mesh that combines all four pillars of the boxing ring and covers the empty area over the stage. ![Figure 1. Before acceleration structure optimization.](images/before_opt.png) -After reorganizing the scene objects, we split the pillars into four separate meshes. As shown in the next image, the yellow area has been eliminated, resulting in less overlap in the same scene. +After reorganizing the scene objects, we have split the pillars into four separate meshes. As shown in the next image, the yellow area has been eliminated, resulting in less overlap in the same scene. ![Figure 2. After acceleration structure optimization.](images/after_opt.png) diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/7-run-time-update.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/7-run-time-update.md index 2bbc76c01..940c064b0 100644 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/7-run-time-update.md +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/7-run-time-update.md @@ -1,15 +1,15 @@ --- title: Reduce Acceleration Structure Run-time Update Cost -weight: 8 +weight: 7 ### FIXED, DO NOT MODIFY layout: learningpathall --- -The acceleration structure for static meshes is built offline and does not require updates at run-time.
However, the acceleration structure for skinned meshes needs to be updated at run-time, which incurs a performance cost due to the need to update geometry data on the GPU and rebuild the acceleration structure. When the polygon count of a skinned mesh is high, the update cost can be too high for real-time applications. To reduce this cost, you can use a higher LOD (Level of Detail) level for skinned meshes in ray tracing, resulting in fewer polygons needing updates at run-time. +We now turn to run-time update costs. The acceleration structure for static meshes is built offline and does not require updates at run-time. However, the acceleration structure for skinned meshes needs to be updated at run-time, which incurs a performance cost due to the need to update geometry data on the GPU and rebuild the acceleration structure. When the polygon count of a skinned mesh is high, the update cost can be too high for real-time applications. To reduce this cost, you can use a higher LOD (Level of Detail) level for skinned meshes in ray tracing, resulting in fewer polygons needing updates at run-time, as shown below: ![Figure 1. Select higher LOD for ray tracing in Unreal editor.](images/skin-lod.png) -It is important to note that using a higher LOD level for skinned meshes in ray tracing may cause self-shadowing artifacts if ray tracing shadows are enabled. These artifacts are caused by differences between the rendering mesh and the ray tracing mesh. +Note that using a higher LOD level for skinned meshes in ray tracing may cause self-shadowing artifacts if ray tracing shadows are enabled. These artifacts are caused by differences between the rendering mesh and the ray tracing mesh, as illustrated below: ![Figure 2. The black areas are the self-shadowing artifacts generated by using different LODs for rendering and ray tracing shadows.](images/skin-lod-error.png) diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/8-lumen-general.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/8-lumen-general.md new file mode 100644 index 000000000..b16b19a49 --- /dev/null +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/8-lumen-general.md @@ -0,0 +1,58 @@ +--- +title: Lumen General Setting Optimizations +weight: 8 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +We now look at how best to optimize the Global Illumination settings. To use Lumen in your game scene, you can add a **Post Process Volume** actor (found under Place Actors > Visual Effects). Within the details panel of **Post Process Volume**, there are several options that you can adjust. During the development of the Lumen content, **“Steel Arms”**, we found a few recommended values for these options that are suitable for Android devices. The following images show all of the option values that were used in **“Steel Arms”** with some additional explanation provided. + +## Global Illumination Settings +![](images/gl-setting.png "Figure 1. These global illumination parameters are used in our Lumen content - Steel Arms.") + +### Lumen Scene Detail +• Higher values ensure that smaller objects contribute to Lumen lighting but they do increase the GPU cost. + +### Final Gather Quality +• This controls the density of the screen probes but note that higher values will increase the GPU cost. + +• **1.0** strikes a good balance between performance and quality for mobile games.
+ +### Max Trace Distance +• This controls how far the ray tracing will go; keeping the number small decreases the GPU cost. + +• Do not set it to be larger than the size of the scene. + +• Smaller values can also reduce ray incoherence. + +### Scene Capture Cache Resolution Scale +• This controls the surface cache resolution; smaller values save memory. + +### Lumen Scene Lighting Update Speed +• This can be kept low if lighting changes are slow to save GPU cost. + +• **0.5 ~ 1.0** strikes a good balance between performance and quality for mobile games. + +### Final Gather Lighting Update Speed +• This can be kept low if slow lighting propagation is acceptable. + +• **0.5 ~ 1.0** strikes a good balance between performance and quality for mobile games. + + ## Lumen Reflection Settings +![](images/reflection-setting.png "Figure 2. These reflection parameters are used in our Lumen content - Steel Arms.") + +### Reflection Quality +• This controls the reflection tracing quality, essentially the resolution of the reflection. + +### Ray Lighting Mode +• The default (and recommended for mobile games) mode is **Surface Cache**, which reuses surface cache data for reflection. + +• The **Hit Lighting** mode is available when using hardware ray tracing; it evaluates direct lighting instead of using the surface cache. It offers better quality but at a higher GPU cost and can also reflect the lighting of skinned meshes, which **Surface Cache** mode cannot. **Hit Lighting** is not yet supported on mobile devices. + +### Max Reflection Bounces +• This controls the number of reflection bounces; higher values increase the GPU cost. + +• **1** is recommended for mobile games. + +This learning path has taken you through some key areas and best practices for using and optimizing hardware ray tracing with Lumen. diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/_index.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/_index.md index ae92e11b8..87e06d7a9 100644 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/_index.md +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/_index.md @@ -1,21 +1,21 @@ --- -title: Best Practices for Hardware Ray Tracing Lumen Performance on Android Devices -draft: true +title: Best Practices for Hardware Ray Tracing with Lumen on Android Devices + minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for Unreal developers interested in optimizing hardware ray tracing with Lumen on Arm devices. +who_is_this_for: This is an introductory topic for Unreal Engine developers interested in optimizing hardware ray tracing with Lumen on Android devices. learning_objectives: - Learn about ray tracing. - Understand what an acceleration structure is. - - Learn about best practices for getting the maximum performance of hardware ray tracing on Lumen for Arm devices. + - Learn about the best practices for getting the maximum performance of hardware ray tracing on Lumen for Arm devices. prerequisites: - A computer capable of running [Unreal Engine 5.3 or later version](https://www.unrealengine.com/en-US/download). - An Android mobile device that has a Mali GPU with hardware ray tracing support. - A USB cable to connect the mobile device to your computer.
-author_primary: Arm +author_primary: Owen Wu, Arm ### Tags skilllevels: Introductory diff --git a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/_review.md b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/_review.md index 2bd84f3ce..cbcd7843d 100644 --- a/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/_review.md +++ b/content/learning-paths/smartphones-and-mobile/best-practices-for-hwrt-lumen-performance/_review.md @@ -9,7 +9,7 @@ review: - 4 correct_answer: 2 explanation: > - Acceleration structure has 2 levels, TLAS and BLAS. + Acceleration structure has 2 levels: TLAS and BLAS. - questions: question: > @@ -19,7 +19,7 @@ review: - "No" correct_answer: 1 explanation: > - Acceleration structure can be updated at run-time but remember that it has performance cost. + The acceleration structure can be updated at run-time but remember that there is a performance cost. - questions: question: > diff --git a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/1-what-is-lumen.md b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/1-what-is-lumen.md index 4dd87f423..93a800621 100644 --- a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/1-what-is-lumen.md +++ b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/1-what-is-lumen.md @@ -1,18 +1,16 @@ --- -title: What is Lumen +title: What is Lumen? weight: 2 ### FIXED, DO NOT MODIFY layout: learningpathall --- -Lumen is the latest dynamic global illumination solution in the Unreal Engine, which also supports hardware ray tracing. Lighting indoor scenes which rely solely on direct lighting do not produce high-quality rendering results. To achieve superior indoor lighting quality, you also need indirect lighting. +Lumen is a dynamic global illumination and reflections system in Unreal Engine 5, which also supports hardware ray tracing. Lighting indoor scenes using only direct lighting does not produce high-quality rendering results. To achieve a much better quality of indoor lighting, you also need to use indirect lighting, but rendering dynamic indirect lighting is computationally expensive. -Rendering dynamic indirect lighting used to be computationally expensive, but Lumen introduces a new ray-tracing based solution that allows developers to render both dynamic direct lighting and indirect lighting in real-time. +Lumen introduces a new ray-tracing-based solution that allows developers to render both dynamic direct lighting and indirect lighting in real-time. -You can observe the improvements in rendering quality by comparing two images. The first image considers only direct lighting, and details in areas beyond the direct lighting range (such as the background) are not visible. - -In contrast, the second image utilizes Lumen lighting, incorporating both direct and indirect lighting. Now, you can discern many details in the background that were previously hidden, as Lumen takes light bounces into account. +You can see the improvements in the rendering quality by comparing the following two images of the same scene. The first image uses only direct lighting, where details in the areas beyond the direct lighting range (such as the background) are not visible. In contrast, the second image utilizes Lumen lighting, incorporating both direct and indirect lighting.
Now, you can discern many more details in the background that were previously hidden, as Lumen takes light bounces into account. ![](images/no_lumen.png "Figure 1. The scene without Lumen has only direct lighting.") diff --git a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/2-what-is-gi.md b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/2-what-is-gi.md index a642a032d..ac1a80b6e 100644 --- a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/2-what-is-gi.md +++ b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/2-what-is-gi.md @@ -1,12 +1,12 @@ --- -title: What is Global Illumination +title: What is Global Illumination? weight: 3 ### FIXED, DO NOT MODIFY layout: learningpathall --- -Global illumination (GI) is a sophisticated lighting technique that considers the entire scene when calculating illumination. It meticulously accounts for how photons bounce between surfaces, resulting in realistic lighting effects. In GI-rendered images, four critical attributes stand out: +Global Illumination (GI) is a sophisticated lighting technique that considers the entire scene when calculating illumination. It meticulously accounts for how photons bounce between surfaces, resulting in realistic lighting effects. In GI-rendered images, four critical attributes stand out: 1. **Direct Lighting**: These are the well-defined areas directly illuminated by light sources. Think of sunlight streaming through a window or a spotlight on a stage. @@ -16,7 +16,7 @@ Global illumination (GI) is a sophisticated lighting technique that considers th 4. **Color Bleeding**: Imagine a red wall casting a warm glow onto nearby objects. Color bleeding occurs when bounced photons carry the color of the reflecting surface, subtly tinting adjacent elements. -For a practical example, consider the Cornell box scene rendered using Unreal Engine's Lumen lighting system. In this image, you'll easily observe how all four attributes have been impeccably captured. +For a practical example, consider the following Cornell box scene, which has been rendered using Unreal Engine's Lumen lighting system. In this image, you can easily observe how all four attributes have been successfully captured. ![](images/cornell_box.png "Figure 1. Cornell box scene generated by Unreal Lumen system.") diff --git a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/3-how-to-enable-lumen.md b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/3-how-to-enable-lumen.md index 07e1d866d..86d5b7772 100644 --- a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/3-how-to-enable-lumen.md +++ b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/3-how-to-enable-lumen.md @@ -6,12 +6,12 @@ layout: learningpathall --- -To enable Lumen in your Unreal project, open the Project Settings window and select `Lumen` in the `Global Illumination` section under `Engine - Rendering`, as shown in Figure 1. +To enable Lumen in your Unreal project, open the Project Settings window, select **Engine - Rendering** and then select **Lumen** in the **Global Illumination** section, as shown below.
-The reflection setting is separate from Global Illumination, but you can also select `Lumen` as the reflection method in the `Reflections` section under `Engine - Rendering`, as shown in Figure 1. +The reflection setting is separate from Global Illumination. Next, select **Lumen** as the reflection method in the **Reflections** section, as shown below. ![](images/enable_lumen.png "Figure 1. Select Lumen as Global Illumination and Reflections method.") -Another way to enable Lumen is by creating a **Post Process Volume** actor in the scene and selecting `Lumen` in the `Global Illumination` sections in the Post Process Volume details panel, as shown in Figure 2. You can also select `Lumen` as the reflection method here. +Alternatively, you can enable Lumen by creating a **Post Process Volume** actor in the scene and selecting **Lumen** in the **Global Illumination** section of the Post Process Volume details panel, as shown in the following image. You can also select **Lumen** as the reflection method here. ![](images/postprocess.png "Figure 2. Select Lumen as Global Illumination and Reflections method in Post Process Volume details panel.") diff --git a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/4-how-to-enable-hw-rt.md b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/4-how-to-enable-hw-rt.md index b162379e0..5b6dc1531 100644 --- a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/4-how-to-enable-hw-rt.md +++ b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/4-how-to-enable-hw-rt.md @@ -9,30 +9,30 @@ layout: learningpathall To harness the power of hardware ray tracing with Lumen in your application, follow these steps to configure your project settings: 1. **Enable SM5 Shader Format**: - - Lumen operates only when the SM5 shader format is active. Navigate to your project settings under `Platforms - Android`. - - Enable the option labeled `Support Vulkan Desktop [Experimental]` to activate SM5 shader format support. + - Lumen operates only when the SM5 shader format is active. Navigate to your project settings under **Platforms - Android**. + - Enable the option labeled **Support Vulkan Desktop [Experimental]** to activate SM5 shader format support, as shown below. ![SM5 Shader Format](images/sm5.png) 2. **Select Deferred Shading Mode**: - - Currently, Lumen exclusively supports deferred shading mode. Head to your project settings under `Engine - Rendering`. - - Uncheck `Forward Shading` under the `Forward Renderer` option. + - Currently, Lumen exclusively supports deferred shading mode. Go to your project settings under **Engine - Rendering**. + - Uncheck **Forward Shading** under the **Forward Renderer** option, as shown below. ![Deferred Shading](images/deferred.png) 3. **Enable Hardware Ray Tracing Support**: - - First, enable hardware ray tracing for the entire engine. In your project settings under `Engine - Rendering`, check the box for `Support Hardware Ray Tracing`. + - Next, enable hardware ray tracing for the entire engine. In your project settings under **Engine - Rendering**, check the box for **Support Hardware Ray Tracing**, as shown below. ![Hardware Ray Tracing](images/hwrt.png) 4. **Use Hardware Ray Tracing When Available**: - - Lumen can utilize both software and hardware ray tracing. To prioritize hardware acceleration, enable `Use Hardware Ray Tracing when available` in the project settings under `Engine - Rendering`.
+ - As previously mentioned, Lumen can utilize both software and hardware ray tracing. To prioritize hardware acceleration, check **Use Hardware Ray Tracing when available** in the project settings under **Engine - Rendering**, as shown below. - This setting ensures that your application leverages hardware ray tracing first, falling back to software ray tracing if necessary. ![Use Hardware Ray Tracing](images/hwrt_lumen.png) 5. **Configure Console Variables**: - - Edit your engine configuration file to set up two essential console variables: + - Edit your engine configuration file to set up the following two essential console variables: - Set `r.Android.DisableVulkanSM5Support=0` to remove any restrictions related to SM5 shader format usage. - Enable Ray Query shader support for Vulkan by setting `r.RayTracing.AllowInline=1`. @@ -41,4 +41,4 @@ r.RayTracing.AllowInline=1 ``` -Now your Lumen powered application is ready to dazzle with hardware-accelerated ray tracing! +Your Lumen-powered application is now ready to dazzle with hardware-accelerated ray tracing! diff --git a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/_index.md b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/_index.md index 72110cf45..a7b8714af 100644 --- a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/_index.md +++ b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/_index.md @@ -1,9 +1,9 @@ --- title: How to Enable Hardware Ray Tracing on Lumen for Android Devices -draft: true + minutes_to_complete: 10 -who_is_this_for: This is an introductory topics for Unreal developers interested in using hardware ray tracing with Lumen on Arm devices. +who_is_this_for: This is an introductory topic for Unreal Engine developers interested in using hardware ray tracing with Lumen on Arm devices. learning_objectives: - Learn about Lumen and global illumination. @@ -14,7 +14,7 @@ prerequisites: - An Android mobile device that has a Mali GPU with hardware ray tracing support. - A USB cable to connect the mobile device to your computer. -author_primary: Arm +author_primary: Owen Wu, Arm ### Tags skilllevels: Introductory diff --git a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/_review.md b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/_review.md index 925a30511..9839f1d74 100644 --- a/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/_review.md +++ b/content/learning-paths/smartphones-and-mobile/how-to-enable-hwrt-on-lumen-for-android-devices/_review.md @@ -13,17 +13,17 @@ review: - questions: question: > - Lumen only supports software ray tracing. + Does Lumen only support software ray tracing? answers: - "True" - "False" correct_answer: 2 explanation: > - Lumen can support hardware ray tracing if the hardware has ray tracing feature. + Lumen can support hardware ray tracing if the hardware has the ray tracing feature. - questions: question: > - Lumen can work with forward shading mode. + Can Lumen work with forward shading mode?
answers: - "True" - "False" diff --git a/content/learning-paths/smartphones-and-mobile/libgpuinfo/_index.md b/content/learning-paths/smartphones-and-mobile/libgpuinfo/_index.md index a3f1d2a37..29bbb6aa5 100644 --- a/content/learning-paths/smartphones-and-mobile/libgpuinfo/_index.md +++ b/content/learning-paths/smartphones-and-mobile/libgpuinfo/_index.md @@ -1,7 +1,7 @@ --- title: Query Arm GPU configuration information -minutes_to_complete: 10 +minutes_to_complete: 15 who_is_this_for: This is an introductory topic for Android developers who want to adjust application complexity to match device performance. diff --git a/contributors.csv b/contributors.csv index df2237b63..a251190de 100644 --- a/contributors.csv +++ b/contributors.csv @@ -34,3 +34,4 @@ Johanna Skinnider,Arm,,,, Varun Chari,Arm,,,, Adnan AlSinan,Arm,,,, Graham Woodward,Arm,,,, +Basma El Gaabouri,Arm,,,, diff --git a/data/stats_current_test_info.yml b/data/stats_current_test_info.yml index 8ed27d327..92f9fa6dc 100644 --- a/data/stats_current_test_info.yml +++ b/data/stats_current_test_info.yml @@ -1,5 +1,5 @@ summary: - content_total: 274 + content_total: 273 content_with_all_tests_passing: 32 content_with_tests_enabled: 33 sw_categories: diff --git a/data/stats_weekly_data.yml b/data/stats_weekly_data.yml index ab8fcced3..91d740cba 100644 --- a/data/stats_weekly_data.yml +++ b/data/stats_weekly_data.yml @@ -3030,3 +3030,70 @@ avg_close_time_hrs: 0 num_issues: 12 percent_closed_vs_total: 0.0 +- a_date: '2024-08-05' + content: + cross-platform: 22 + embedded-systems: 17 + install-guides: 84 + laptops-and-desktops: 30 + microcontrollers: 24 + servers-and-cloud-computing: 76 + smartphones-and-mobile: 20 + total: 273 + contributions: + external: 35 + internal: 315 + github_engagement: + num_forks: 30 + num_prs: 8 + individual_authors: + arm: 3 + arnaud-de-grandmaison: 1 + bolt-liu: 2 + brenda-strech: 1 + christopher-seidl: 7 + daniel-gubay: 1 + daniel-nguyen: 1 + david-spickett: 2 + dawid-borycki: 25 + diego-russo: 1 + diego-russo-and-leandro-nunes: 1 + elham-harirpoush: 2 + florent-lebeau: 5 + "fr\xE9d\xE9ric--lefred--descamps": 2 + gabriel-peterson: 5 + gayathri-narayana-yegna-narayanan: 1 + graham-woodward: 1 + james-whitaker,-arm: 1 + jason-andrews: 83 + johanna-skinnider: 2 + jonathan-davies: 2 + jose-emilio-munoz-lopez,-arm: 1 + julie-gaskin: 4 + julio-suarez: 5 + kasper-mecklenburg: 1 + konstantinos-margaritis: 7 + kristof-beyls: 1 + liliya-wu: 1 + mathias-brossard: 1 + michael-hall: 5 + pareena-verma: 35 + pareena-verma,-jason-andrews,-and-zach-lasiuk: 1 + pareena-verma,-joe-stech,-adnan-alsinan: 1 + pranay-bakre: 2 + przemyslaw-wirkus: 1 + roberto-lopez-mendez: 2 + ronan-synnott: 46 + thirdai: 1 + tom-pilar: 1 + uma-ramalingam: 1 + varun-chari: 1 + visualsilicon: 1 + ying-yu: 1 + ying-yu,-arm: 1 + zach-lasiuk: 1 + zhengjun-xing: 2 + issues: + avg_close_time_hrs: 0 + num_issues: 11 + percent_closed_vs_total: 0.0 diff --git a/tools/report.py b/tools/report.py index 2052b91b6..a913f7b4b 100644 --- a/tools/report.py +++ b/tools/report.py @@ -1,13 +1,11 @@ #/usr/bin/env python3 -import argparse import logging import os import subprocess import csv import json import re -from pathlib import Path from datetime import datetime, timedelta # List of directories to parse for learning paths @@ -20,57 +18,70 @@ "content/learning-paths/servers-and-cloud-computing"] + + +''' +Returns the date (yyyy-mm-dd) which a file in the given directory was last updated. +If Learning Path, changes in any file in the directory will count. 
+'''
+def get_latest_updated(directory, is_lp, item):
+    article_path = directory if is_lp else f"{directory}/{item}"
+    date = subprocess.run(["git", "log", "-1", "--format=%cs", article_path], stdout=subprocess.PIPE)
+    return date
+
 '''
-Recursive content search in d.
-Returns:
-- list of articles older than period in d
-- count of articles found in d
-- list of primary authors in d
+Recursive content search in a given directory.
+Returns:
+- list of articles older than a given period found
+- count of articles found
+- list of primary authors found
 '''
-def content_parser(d, period):
+def content_parser(directory, period):
     count = 0
     art_list = {}
     auth_list = []
 
-    l = os.listdir(d)
-    for i in l:
+    directory_list = os.listdir(directory)
+    for i in directory_list:
         item = i
+        is_lp = False
         if item.endswith(".md") and not item.startswith("_"):
             count = count + 1
-            if "learning-paths" in d:
+            if "learning-paths" in directory:
                 item = "_index.md"
+                is_lp = True
 
-            logging.debug("Checking {}...".format(d+"/"+item))
+            logging.debug(f"Checking {directory}/{item}")
 
-            date = subprocess.run(["git", "log", "-1" ,"--format=%cs", d +"/" + item], stdout=subprocess.PIPE)
+            date = get_latest_updated(directory, is_lp, item)
             # strip out '\n' and decode byte to string
             date = date.stdout.rstrip().decode("utf-8")
-            logging.debug("Last updated on: " + date)
+            logging.debug(f"Last updated: {date}")
 
             author = "None"
-            for l in open(d +"/" + item):
-                if re.search("author_primary", l):
+            for line in open(directory + "/" + item):
+                if re.search("author_primary", line):
                     # split and strip out '\n'
-                    author = l.split(": ")[1].rstrip()
-                    logging.debug("Primary author: " + author)
+                    author = line.split(": ")[1].rstrip()
+                    logging.debug(f"Primary author: {author}")
 
             if not author in auth_list:
                 auth_list.append(author)
 
-            # if empty, this is a temporary file which is not part of the repo
-            if(date != ""):
+            # if date is empty, this is a temporary file which is not part of the repo
+            if date:
                 date = datetime.strptime(date, "%Y-%m-%d")
                 # check if article is older than the period
                 if date < datetime.now() - timedelta(days = period):
-                    if item == "_index.md":
-                        art_list[d + "/"] = "{} days ago".format((datetime.now() - date).days)
+                    if is_lp:
+                        art_list[directory + "/"] = "{} days ago".format((datetime.now() - date).days)
                     else:
-                        art_list[d + "/" + item] = "{} days ago".format((datetime.now() - date).days)
+                        art_list[directory + "/" + item] = "{} days ago".format((datetime.now() - date).days)
 
-            if "learning-paths" in d:
+            if "learning-paths" in directory:
                 # no need to iterate further
                 break
 
         # if this is a folder, let's get down one level deeper
-        elif os.path.isdir(d + "/" + item):
-            res, c, a_l = content_parser(d + "/" + item, period)
+        elif os.path.isdir(directory + "/" + item):
+            res, c, a_l = content_parser(directory + "/" + item, period)
             art_list.update(res)
             count = count + c
             for a in a_l:
@@ -82,12 +93,12 @@
 
 '''
 Initialize Plotly data structure for stats
-1 graph on the left with data for install tool guides 
+1 graph on the left with data for install tool guides
 1 graph on the right with data for learning paths
 Input: title for the graph
 '''
 def init_graph(title):
-    data = { 
+    data = {
         "data": [
             {
                 "x": [],
@@ -139,29 +150,29 @@
                 "xaxis": "x2"
             }
         ],
-    "layout": 
+    "layout":
     {
-        "title": title, 
-        "xaxis": 
+        "title": title,
+        "xaxis":
         {
             "tickangle": -45,
             "domain": [0, 0.45],
             "anchor": "x1"
-        }, 
-        "xaxis2": 
+        },
+        "xaxis2":
         {
             "tickangle": -45,
            "domain": [0.55, 1],
            "anchor": "x2"
-        }, 
-    "barmode": "stack", 
+        },
+    "barmode": "stack",
         "paper_bgcolor": "rgba(0,0,0,0)",
         "plot_bgcolor": "rgba(0,0,0,0)",
-    "font": 
+    "font":
         {
             "color": "rgba(130,130,130,1)"
         },
-    "legend": 
+    "legend":
         {
             "bgcolor": "rgba(0,0,0,0)"
         }
@@ -177,6 +188,7 @@
 def stats():
     global dname
 
+    # get working directory
     orig = os.path.abspath(os.getcwd())
 
     # If file exists, load data. Create structure otherwise
@@ -222,25 +234,25 @@
             contrib_data["data"][d_idx]["y"].append(len(authors))
 
         if "learning-paths" in d:
-            logging.info("{} Learning Paths found in {} and {} contributor(s).".format(count, d, len(authors)))
+            logging.info(f"{count} Learning Paths found in {d} and {len(authors)} contributor(s)")
             total += count
         else:
-            logging.info("{} articles found in {} and {} contributor(s).".format(count, d, len(authors)))
+            logging.info(f"{count} articles found in {d} and {len(authors)} contributor(s)")
 
-    logging.info("Total number of Learning Paths is {}.".format(total))
+    logging.info(f"Total number of Learning Paths is {total}")
 
-    fn_lp='content/stats/lp_data.json'
-    fn_contrib='content/stats/contrib_data.json'
+    lp_data_file = 'content/stats/lp_data.json'
+    contrib_data_file = 'content/stats/contrib_data.json'
 
     os.chdir(orig)
-    logging.info("Learning Path data written in " + orig + "/" + fn_lp)
-    logging.info("Contributors data written in " + orig + "/" + fn_contrib)
+    logging.info(f"Learning Path data written to {orig}/{lp_data_file}")
+    logging.info(f"Contributors data written to {orig}/{contrib_data_file}")
 
     # Save data in json file
-    f_lp = open(fn_lp, 'w')
-    f_contrib = open(fn_contrib, 'w')
+    f_lp = open(lp_data_file, 'w')
+    f_contrib = open(contrib_data_file, 'w')
 
     # Write results to file
     json.dump(lp_data, f_lp)
-    json.dump(contrib_data, f_contrib) 
+    json.dump(contrib_data, f_contrib)
 
     # Closing JSON file
     f_lp.close()
     f_contrib.close()
@@ -253,33 +265,35 @@
 def report(period):
     global dname
 
-    orig = os.path.abspath(os.getcwd())
-    # chdir to the root folder
+    # remember the working directory we started in
+    function_start_directory = os.path.abspath(os.getcwd())
+    # change directory to the repository root
    os.chdir(os.path.dirname(os.path.abspath(__file__)) + "/..")
 
     result = {}
-    total=0
-    for d_idx, d in enumerate(dname):
-        res, count, authors = content_parser(d, period)
+    total = 0
+    for directory in dname:
+        res, count, _ = content_parser(directory, period)
         result.update(res)
-        if "learning-paths" in d:
-            logging.info("Found {} Learning Paths in {}. {} of them are outdated.".format(count, d, len(res)))
+        if "learning-paths" in directory:
             total += count
-        else:
-            logging.info("Found {} articles in {}. {} of them are outdated.".format(count, d, len(res)))
-    logging.info("Total number of Learning Paths is {}.".format(total))
+            logging.info(f"Found {count} Learning Paths in {directory}. {len(res)} of them are outdated")
+
 
-    fn="outdated_files.csv"
+    logging.info(f"Total number of Learning Paths is {total}")
+
+    outdated_files_csv = "outdated_files.csv"
     fields=["File", "Last updated"]
 
-    os.chdir(orig)
-    logging.info("Results written in " + orig + "/" + fn)
+    # Move back to the directory we started in
+    os.chdir(function_start_directory)
 
-    with open(fn, 'w') as csvfile:
+    with open(outdated_files_csv, 'w') as csvfile:
         writer = csv.DictWriter(csvfile, fieldnames = fields)
         writer.writeheader()
         for key in result.keys():
             csvfile.write("%s, %s\n" % (key, result[key]))
+    logging.info(f"Results written to {function_start_directory}/{outdated_files_csv}")
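
For reviewers, here is a minimal standalone sketch of the outdated-article check that `get_latest_updated` and `report` in `tools/report.py` perform together. It is not part of this diff; the article path and the 365-day threshold are illustrative assumptions (the real script takes the period as a parameter and walks the content directories).

```python
#!/usr/bin/env python3
# Standalone sketch of the last-updated check used in tools/report.py.
# Assumes it runs inside a git checkout; the path and the 365-day period
# below are example values, not taken from the script.
import subprocess
from datetime import datetime, timedelta

def last_updated(path):
    # %cs prints the committer date as yyyy-mm-dd; stdout is empty if the
    # file has no commit history (for example, a temporary untracked file)
    result = subprocess.run(["git", "log", "-1", "--format=%cs", path],
                            stdout=subprocess.PIPE)
    return result.stdout.rstrip().decode("utf-8")

date = last_updated("content/install-guides/vivaldi.md")  # hypothetical article
if date:
    updated = datetime.strptime(date, "%Y-%m-%d")
    age = (datetime.now() - updated).days
    if updated < datetime.now() - timedelta(days=365):  # example period
        print(f"Outdated: last updated {age} days ago")
    else:
        print(f"Up to date: last updated {age} days ago")
else:
    print("No git history for this file")
```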