Merge pull request #1159 from ArmDeveloperEcosystem/main
Merge to production
pareenaverma committed Aug 9, 2024
2 parents 682c66d + 5d73d09 commit c5e3745
Showing 67 changed files with 455 additions and 337 deletions.
2 changes: 1 addition & 1 deletion content/install-guides/browsers/_index.md
Original file line number Diff line number Diff line change
Expand Up @@ -43,7 +43,7 @@ Here is a quick summary to get you started:
| Brave | native | yes |
| Chrome | native | no |
| Edge | native | no |
| Vivaldi | emulation | yes |
| Vivaldi | native (snapshot) | yes |

Windows on Arm runs native ARM64 applications, but can also emulate 32-bit x86 and 64-bit x64 applications. Emulation is slower than native and shortens battery life, but may provide functionality you need.

Expand Down
10 changes: 7 additions & 3 deletions content/install-guides/browsers/vivaldi.md
Original file line number Diff line number Diff line change
Expand Up @@ -25,10 +25,12 @@ layout: installtoolsall # DO NOT MODIFY. Always true for tool install ar

## Installing Vivaldi

The Vivaldi browser runs on Windows using emulation, but Vivaldi is available for Arm Linux.
Vivaldi is available for Arm Linux and Windows on Arm.

Visit [Download Vivaldi](https://vivaldi.com/download/) to obtain packages for various operating systems.



### Linux

Vivaldi is available for Arm Linux.
Expand Down Expand Up @@ -64,7 +66,9 @@ sudo dnf install vivaldi-stable -y

### Windows

Vivaldi runs on Windows on Arm using emulation.
The stable release of Vivaldi runs on Windows on Arm using emulation, but a native ARM64 snapshot release is available.

Emulation is slower than native and shortens battery life, but may provide required functionality.

Expand All @@ -75,6 +79,6 @@ Emulation is slower than native and shortens battery life, but may provide requi
3. Find and start Vivaldi from the applications menu



The [snapshot release](https://downloads.vivaldi.com/snapshot/Vivaldi.6.9.3425.3.arm64.exe) is available as a native ARM64 application. Install the snapshot release using the same 3 steps above.


Original file line number Diff line number Diff line change
@@ -1,32 +1,32 @@
---
title: "Create Google Axion based GitLab self-hosted runner"
title: "Create a Google Axion-based GitLab self-hosted runner"

weight: 2

layout: "learningpathall"
---

## What is a GitLab runner?
A GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline. It acts as an agent that executes the jobs defined in your GitLab CI/CD configuration. Some key points to note about GitLab Runner:
A GitLab Runner works with GitLab CI/CD to run jobs in a pipeline. It acts as an agent and executes the jobs you define in your GitLab CI/CD configuration. Some key points to note about GitLab Runner:

1. GitLab offers multiple types of runners - You can use GitLab-hosted runners, self-managed runners, or a combination of both. GitLab-hosted runners are managed by GitLab, while self-managed runners are installed and managed on your own infrastructure.
1. GitLab offers multiple types of runners - You can use GitLab-hosted runners, self-managed runners, or a combination of both. GitLab manages GitLab-hosted runners, while you install and manage self-managed runners on your own infrastructure.

2. Each runner is configured as an Executor - When you register a runner, you choose an executor, which determines the environment in which the job runs. Executors can be Docker, Shell, Kubernetes, etc.

3. Multi-architecture support: GitLab runners support multiple architectures, including `x86/amd64` and `arm64`.
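Because runners can be any architecture, a job script sometimes needs to discover which one it is running on. A minimal sketch, using `uname -m` (the tag names chosen here are illustrative, not a GitLab convention):

```bash
# Map the machine architecture reported by uname to an image tag suffix.
arch="$(uname -m)"
case "$arch" in
  aarch64|arm64) tag="arm64" ;;
  x86_64|amd64)  tag="amd64" ;;
  *) echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
echo "image tag suffix: $tag"
```

A pipeline could use `$tag` to push architecture-specific images such as `myimage:$tag`, which are later combined into one multi-architecture image.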

## What is Google Axion?
Google Axion is the latest generation of Arm Neoverse based CPU offered by Google Cloud. The VM instances are part of the `C4A` family of compute instances. To learn more about Google Axion refer to this [blog](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu).
Axion is Google’s first Arm-based server processor, built using the Armv9 Neoverse V2 CPU. The VM instances are part of the `C4A` family of compute instances. To learn more about Google Axion refer to this [blog](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu).

Note: These `C4A` VM instances are in public preview and require a signup to be enabled in your Google Cloud account/project.

## Install GitLab runner on a Google Axion VM

Create a repository in your GitLab account by clicking on "+" sign on top-left corner. Click on `New project/repository` and select a blank project, provide a name and initiate your project/repository.
Create a repository in your GitLab account by clicking the "+" sign in the top-left corner. Click `New project/repository`, select a blank project, provide a name, and initialize your project/repository.

![repository #center](_images/repository.png)

Once the repository gets created, in the left-hand pane, navigate to `Settings->CI/CD`. Expand the `Runners` section and under `Project Runners`, select `New Project Runner`.
After you create the repository, navigate to `Settings->CI/CD` in the left-hand pane. Expand the `Runners` section and under `Project Runners`, select `New Project Runner`.

![arm64-runner #center](_images/create-gitlab-runner.png)

Expand Down Expand Up @@ -57,7 +57,7 @@ Install the runner and run it as a service
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start
```
In the GitLab console, on the `Register Runner` page, copy the command from `Step 1` and run it on the `C4A` VM. It should look like below
In the GitLab console, on the `Register Runner` page, copy the command from `Step 1` and run it on the `C4A` VM. The command should look like this:
```console
sudo gitlab-runner register --url https://gitlab.com --token <your-runner-token>
```
Expand All @@ -69,7 +69,7 @@ After answering the prompts, the runner should be registered and you should see
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
```
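You can optionally confirm the registration from the VM itself; `gitlab-runner verify` contacts GitLab and reports whether the stored runner token is still valid:

```console
sudo gitlab-runner verify
```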

You should be able to see the newly registered runner in the `Runners` section of GitLab console like below
You should see the newly registered runner in the Runners section of the GitLab console as shown below.

![registered-runner #center](_images/registered-runner.png)

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -14,7 +14,7 @@ A multi-architecture application is designed to run on multiple architectures, t
In this learning path you will use the `docker manifest` way to build a multi-architecture image.

## Create a Docker repository in Google Artifact Registry
You can create a docker repository in the Google Artifact Registry from the Google Cloud UI or with the following command:
You can create a Docker repository in the Google Artifact Registry from the Google Cloud UI or with the following command:

```console
gcloud artifacts repositories create quickstart-docker-repo --repository-format=docker \
Expand All @@ -31,7 +31,7 @@ gcloud auth configure-docker us-central1-docker.pkg.dev

## Create project variables

Create the following variables in your GitLab repository by navigating to `CI/CD->Variables`. Expand the section and click on `Add Variable`
Create the following variables in your GitLab repository by navigating to `CI/CD->Variables`. Expand the section and click on `Add Variable`.

- `GCP_PROJECT` - Your Google Cloud project ID that also hosts the Google Artifact registry
- `GKE_ZONE` - Zone for your GKE cluster, for example `us-central1-c`
Expand Down Expand Up @@ -103,7 +103,7 @@ RUN apk add --update --no-cache file
CMD [ "/hello" ]
```

The `deployment.yaml` file references the multi-architecture docker image from the registry. Create the file using the contents below
The `deployment.yaml` file references the multi-architecture docker image from the registry. Create the file using the contents below.
```yml
apiVersion: apps/v1
kind: Deployment
Expand Down Expand Up @@ -142,7 +142,7 @@ spec:
requests:
cpu: 300m
```
Create a Kubernetes Service of type `LoadBalancer` with the following `hello-service.yaml` contents
Create a Kubernetes Service of type `LoadBalancer` with the following `hello-service.yaml` contents:

```yml
apiVersion: v1
Expand All @@ -164,7 +164,7 @@ spec:

## Automate the GitLab CI/CD build process

In this section, the individual docker images for each architecture - `amd64` and `arm64`- are built natively on their respective runner. For e.g. the `arm64` build is run on Google Axion runner and vice versa. Each runner is differentiated with `tags` in the pipeline. To build a CI/CD pipeline in GitLab, you'll need to create a `.gitlab-ci.yml` file in your repository. Use the following contents to populate the file
In this section, the individual Docker images for each architecture - `amd64` and `arm64` - are built natively on their respective runners. For example, the `arm64` build runs on the Google Axion runner and the `amd64` build on the x86 runner. Each runner is differentiated with `tags` in the pipeline. To build a CI/CD pipeline in GitLab, you'll need to create a `.gitlab-ci.yml` file in your repository. Use the following contents to populate the file:

```yml
workflow:
Expand Down Expand Up @@ -220,7 +220,7 @@ deploy:
- kubectl apply -f deployment.yaml
```
This file has three `stages`. A `stage` is a set of commands that are executed in a sequence to achieve the desired result. In the `build` stage of the file there are two jobs that run in parallel. The `arm64-build` job gets executed on the Google Axion based C4A runner and the `amd64-build` job gets executed on an AMD64 based E2 runner. Both of these jobs push a docker image to the registry.
This file has three `stages`. A `stage` is a set of commands executed in sequence to achieve the desired result. In the `build` stage there are two jobs that run in parallel. The `arm64-build` job executes on the Google Axion-based C4A runner and the `amd64-build` job executes on an AMD64-based E2 runner. Both jobs push a Docker image to the registry.

In the `manifest` stage, the `docker manifest` command joins these two images into a single multi-architecture image and pushes it to the registry.
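The manifest step can be sketched with the `docker manifest` subcommands. The image path below is an assumed placeholder following the earlier registry example; substitute your own project, repository, and image names:

```console
docker manifest create us-central1-docker.pkg.dev/<GCP_PROJECT>/quickstart-docker-repo/hello:latest \
  us-central1-docker.pkg.dev/<GCP_PROJECT>/quickstart-docker-repo/hello:arm64 \
  us-central1-docker.pkg.dev/<GCP_PROJECT>/quickstart-docker-repo/hello:amd64

docker manifest push us-central1-docker.pkg.dev/<GCP_PROJECT>/quickstart-docker-repo/hello:latest
```

When a client pulls the `latest` tag, the registry serves the image matching the client's architecture.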

Expand Down
2 changes: 1 addition & 1 deletion content/learning-paths/cross-platform/gitlab/_index.md
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
---
title: How to build a CI/CD pipeline with GitLab on Google Axion
title: Build a CI/CD pipeline with GitLab on Google Axion
draft: true
minutes_to_complete: 30

Expand Down
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
---
next_step_guidance: You can continue learning about building applications with Arm. The learning path covering deployment of a multi-architecture application with GKE should be the next one in your list.
next_step_guidance: Continue learning about building applications with Arm by reviewing the learning path covering deployment of a multi-architecture application with GKE.

recommended_path: "/learning-paths/servers-and-cloud-computing/gke-multi-arch"

Expand Down
12 changes: 12 additions & 0 deletions content/learning-paths/cross-platform/gitlab/_review.md
Original file line number Diff line number Diff line change
Expand Up @@ -20,6 +20,18 @@ review:
explanation: >
GitLab allows executing jobs in parallel on two different self-hosted runners with tags
- questions:
question: >
What is the primary role of a GitLab Runner in GitLab CI/CD?
answers:
- To manage the GitLab repository.
- To execute jobs defined in the GitLab CI/CD configuration.
- To provide multi-architecture support for different processors.
- To create virtual machines in Google Cloud.
correct_answer: 2
explanation: >
A GitLab Runner acts as an agent that executes the jobs defined in the GitLab CI/CD configuration.
# ================================================================================
# FIXED, DO NOT MODIFY
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -128,7 +128,7 @@ Shown below is the disassembly output for the `sad_neon` function:

You will notice the use of `uabd` and `udot` assembly instructions that correspond to the `vabdq_u8`/`vdotq_u32` intrinsics.

Now create an equivalent Rust program using `std::arch` `dotprod` intrinsics. Save the contents shown below in a file named `dotprod2.rs`:
Now create an equivalent Rust program using `std::arch` `neon` intrinsics. Save the contents shown below in a file named `dotprod2.rs`:

```Rust
#![feature(stdarch_neon_dotprod)]
Expand Down Expand Up @@ -166,7 +166,7 @@ fn sad_vec(a: &[u8], b: &[u8], w: usize, h: usize) -> u32 {
#[cfg(target_arch = "aarch64")]
{
use std::arch::is_aarch64_feature_detected;
if is_aarch64_feature_detected!("dotprod") {
if is_aarch64_feature_detected!("neon") {
return unsafe { sad_vec_asimd(a, b, w, h) };
}
}
Expand All @@ -175,7 +175,7 @@ fn sad_vec(a: &[u8], b: &[u8], w: usize, h: usize) -> u32 {
}

#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "dotprod")]
#[target_feature(enable = "neon")]
unsafe fn sad_vec_asimd(a: &[u8], b: &[u8], w: usize, h: usize) -> u32 {
use std::arch::aarch64::*;

Expand Down Expand Up @@ -291,13 +291,13 @@ As you have seen, Rust has a very particular way to enable target features. In t
unsafe fn sad_vec_asimd(a: &[u8], b: &[u8], w: usize, h: usize) -> u32 {
```

Remember that `neon` support is implied with `dotprod` so there is no need to add it as well. However, this may not be the case for all extensions so you may have to add both, like this:
Add support for both `neon` and `dotprod` target features as shown:

```Rust
#[target_feature(enable = "neon", enable = "dotprod")]
```

Next, you will also need to add the `#!feature` for the module's code generation at the top of the file:
Next, check that you have added the `#!feature` for the module's code generation at the top of the file:

```Rust
#![feature(stdarch_neon_dotprod)]
Expand Down
Original file line number Diff line number Diff line change
@@ -1,12 +1,12 @@
---
title: Deploy firmware on hybrid edge systems using containers
draft: true

minutes_to_complete: 20

who_is_this_for: This learning path is dedicated to developers interested in learning how to deploy software (embedded applications and firmware) onto other processors in the system, using Linux running on the application core.
who_is_this_for: This learning path is for developers interested in learning how to deploy software (embedded applications and firmware) onto other processors in the system, using Linux running on the application core.

learning_objectives:
- Deploy a containerized embedded application onto an Arm Cortex-M core from an Arm Cortex-A core using containerd and K3s.
- Deploy a containerized embedded application onto an Arm Cortex-M core from an Arm Cortex-A core using *containerd* and *K3s*.
- Build a firmware container image.
- Build the hybrid-runtime components.

Expand Down
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
---
next_step_guidance: If you are interested in learning more about deploying IoT applications using Arm Virtual Hardware, please continue the learning path below
next_step_guidance: If you are interested in learning more about deploying IoT applications using Arm Virtual Hardware, please continue to the learning path below

recommended_path: /learning-paths/embedded-systems/avh_balena/

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -8,7 +8,7 @@ review:
- "False"
correct_answer: 2
explanation: >
The firmware runs on a Cortex-M, from containerd's perspective it appears to runs as a normal Linux container.
The firmware runs on a Cortex-M; from `containerd`'s perspective, it appears to run as a normal Linux container.
- questions:
question: >
Expand All @@ -19,9 +19,10 @@ review:
correct_answer: 2
explanation: >
Each embedded core can only run a single container at once.
- questions:
question: >
Which command is used to interact with the hybrid-runtime from containerd?
Which command is used to interact with the hybrid-runtime from `containerd`?
answers:
- ctr run --runtime io.containerd.hybrid hybrid_app_imx8mp:latest test-container
- ctr run --runtime runc hybrid_app_imx8mp:latest test-container
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -10,24 +10,26 @@ layout: learningpathall

AVH offers a 30-day free trial to use.
- Create an account in [Arm Virtual Hardware](https://app.avh.arm.com/login)
- Once logged in, you should see a similar screen as shown below.
- Click on create device.
- Once logged in, you should see a similar screen as shown in the image below. Click on **Create device**:
![create device alt-text#center](avh_images/avh1.png "Figure 1. Create device")
- Click on Default Project.
- Next, click on **Default Project**:

![select project alt-text#center](avh_images/avh2.png "Figure 2. Select project")
- Select the i.MX 8M Plus device. The platform runs four Cortex-A53 processors.
- Select the **i.MX 8M Plus** device. The platform runs four Cortex-A53 processors:

![Select the i.MX 8M Plus device alt-text#center](avh_images/avh3.png "Figure 3. Select device")
- Select the Yocto Linux (full) (2.2.1) image.

- Select the Yocto Linux (full) (2.2.1) image and click **Select**:

![Select the Yocto Linux (full) (2.2.1) alt-text#center](avh_images/avh4.png "Figure 4. Select the Yocto Linux (full) (2.2.1) image")
- Click Select
![confirm create device alt-text#center](avh_images/avh5.png "Figure 5. Confirm create device")
- Click on **Create device** (note that this could take a few minutes):

- Click on Create device, this could take few minutes.
![confirm create device alt-text#center](avh_images/avh5.png "Figure 5. Confirm create device")

- A console to Linux running on the Cortex-A should appear. Use `root` to log in.

- Find the IP address for the board model by running the command. This will be needed access the device using SSH.
- Find the IP address for the board model by running the following command (this will be needed to access the device using SSH):
```bash
ip addr
```
Expand All @@ -39,22 +41,23 @@ The GUI on the right side may not work. You can safely ignore the error you see

### Useful AVH tips

The **Connect** pane shows the different ways that you can connect to the simulated board. The IP address specified should be the same as that visible in the output of the `ip addr` command.

![AVH connect interface alt-text#center](avh_images/avh7.png "Figure 7. AVH connect interface")

The “Connect” pane shows the different ways that you can connect to the simulated board. The IP address specified should be the same as that visible in the output of the `ip addr` command.
**Quick Connect** lets you connect over SSH to the AVH model without having to use a VPN configuration. Similarly, you can replace `ssh` with `scp` to copy files to and from the virtual device. To use Quick Connect, you must add your public key via the **Manage SSH keys here** link.

“Quick Connect” lets you SSH to the AVH model without having to use a VPN configuration. Similarly, you can replace `ssh` for `scp` to copy files from and to the virtual device. In order to use Quick Connect, it is necessary to add your public key via the `Manage SSH keys here` link.
![Generate SSH key alt-text#center](avh_images/avh8.png "Figure 8. Generate SSH key")

To generate an SSH key, you can run the following command on your machine.
To generate an SSH key, you can run the following command on your machine:
```bash
ssh-keygen -t ed25519
```
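With a key registered, files can be copied to the model over Quick Connect. The host below is a placeholder for the address shown in the **Connect** pane of your device:

```console
scp -i ~/.ssh/id_ed25519 hybrid.tar.gz root@<quick-connect-host>:/root/
```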


## Download the pre-built hybrid-runtime

Once your AVH model is set up, you can download the pre-built hybrid-runtime. This GitHub package contains the runtime and some necessary scripts.
Once your AVH model is set up, you can download the pre-built hybrid-runtime. This GitHub package contains the runtime and some necessary scripts:

```console
wget https://github.com/smarter-project/hybrid-runtime/releases/download/v1.5/hybrid.tar.gz
Expand All @@ -72,12 +75,12 @@ If you want to build the hybrid-runtime on your own, instructions can be found i

For this learning path, there is also a pre-built lightweight Docker container image available on GitHub. You can use it for the `i.MX8M-PLUS-EVK` board. The container image contains a simple FreeRTOS hello-world application built using the NXP SDK.

You can pull the pre-built image onto the AVH model by running the command.
You can pull the pre-built image onto the AVH model by running the following command:

```console
ctr image pull ghcr.io/smarter-project/hybrid-runtime/hello_world_imx8mp:latest
```
Make sure the container image was pulled successfully. An image with the name *ghcr.io/smarter-project/hybrid-runtime/hello_world_imx8mp:latest* should appear as an output.
Make sure the container image was pulled successfully. An image named *ghcr.io/smarter-project/hybrid-runtime/hello_world_imx8mp:latest* should appear in the output of the following command:

```console
ctr image ls
Expand Down