This repository has been archived by the owner on Nov 7, 2024. It is now read-only.

Commit

update verification formatting
RestlessWanderer committed Aug 22, 2024
1 parent 959788f commit d3e82a5
Showing 4 changed files with 24 additions and 17 deletions.
6 changes: 6 additions & 0 deletions .github/styles/Vocab/Workshop/accept.txt
@@ -16,6 +16,7 @@ devcontainer
Dockerfile
eos
github
global_dc_vars
group_vars
host_vars
hostname\w*\b
@@ -28,9 +29,11 @@ makefile
mkdocs
mlag
mlagpeer
Multicast
netcommon
node_group\w*
node_type_key\w*
peerings
port_profile\w*
(?i)reachability
repo
@@ -41,8 +44,11 @@ syslog
toolchain
uncomment
untracked
Unicast
vlan
VRF\w*\b
VNIs
VTEPs
yaml
yml
Yu
5 changes: 3 additions & 2 deletions workshops/index.md
@@ -9,8 +9,9 @@ The Arista CI Workshops are intended for engineers looking to learn the fundamen
The workshops are meant to be leveraged within an Arista Test Drive (ATD) lab. You may follow along using a personal environment; additional setup may apply.

- **Workshop #1** - Automation Fundamentals 101
- - **Workshop #2** - Arista CI - AVD
- - **Workshop #3** - Arista CI - AVD with CI/CD
+ - **Workshop #2 (L2LS)** - Arista CI / AVD - L2LS
+ - **Workshop #2 (L3LS)** - Arista CI / AVD - L3LS EVPN/VXLAN
+ - **Workshop #3** - Arista CI / AVD with CI/CD

## Fundamentals

24 changes: 12 additions & 12 deletions workshops/l3ls/l3ls-lab-guide.md
Expand Up @@ -144,9 +144,9 @@ Now, deploy the configurations to Site 1 switches.
```shell
make deploy-site-1
```

- ### **Verification**
+ #### Verification

Now, let's log in to some switches to verify that the current configs (`show run`) match the ones generated in the `intended/configs` folder. We can also check the current state of MLAG, interfaces, BGP peerings for the IPv4 underlay, and BGP EVPN overlay peerings.

These outputs were taken from `s1-leaf1`:

@@ -277,7 +277,7 @@

```shell
make build-site-1
make deploy-site-1
```

- ### **Verification**
+ #### Verification

Now let's go back to node `s1-leaf1` and verify that the new SVIs and their IP addresses exist, along with any changes to the EVPN overlay and the corresponding VXLAN configurations, as well as the EVPN control plane now that we have some layer 3 data interfaces.

@@ -311,7 +311,7 @@ Now lets go back to node `s1-leaf1` and verify the new SVIs exist, their IP addr

???+ abstract "Where did those VLANs come from?"
You should notice some VLAN SVIs, not related to **MLAG**, that we didn't define anywhere in the `_NETWORK_SERVICES.yml` data model: specifically, ***Vlan1199*** and ***Vlan3009***.

***Vlan1199*** is dynamically created and assigned for the **OVERLAY** VRF-to-VNI mapping under the VXLAN interface. You can verify this by looking at the **show interface vxlan 1** output. Remember, we defined **VNI 10** as the `vrf_vni` in our data model.
```text
Dynamic VLAN to VNI mapping for 'evpn' is
```

@@ -584,7 +584,7 @@ make build-site-1 build-site-2 deploy-site-1 deploy-site-2
???+ tip
Daisy chaining "Makesies" is a great way to run a series of tasks with a single CLI command :grinning:

- ### **Verification**
+ #### Verification

Now that we have built and deployed the configurations for our DCI IPv4 underlay connectivity, let's see what was done. Looking at the data model above, we only defined a pool of IP addresses with a **/24** mask, from which AVD will auto-allocate a subnet per connection. Additionally, we can see that `s1-brdr1` connects to its peer `s2-brdr1` via interface `Ethernet4`, and `s1-brdr2` connects to its peer `s2-brdr2` via interface `Ethernet5`. Using that data model, here is what we expect to see configured. You can verify this by logging into each border leaf and checking with **show ip interface brief**.
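For reference, a sketch of what such a DCI underlay definition can look like in AVD's `core_interfaces` data model is below. The pool name, subnet, and link IDs are illustrative assumptions; the workshop's actual fabric vars files (collapsed in this view) may differ.

```yaml
# Hypothetical sketch only -- pool name, subnet, and ids are assumptions.
core_interfaces:
  p2p_links_ip_pools:
    - name: DCI_POOL
      ipv4_pool: 172.16.100.0/24   # /24 pool; AVD auto-allocates a subnet per link
  p2p_links:
    - id: 1
      ip_pool: DCI_POOL
      nodes: [s1-brdr1, s2-brdr1]
      interfaces: [Ethernet4, Ethernet4]
      include_in_underlay_protocol: true
    - id: 2
      ip_pool: DCI_POOL
      nodes: [s1-brdr2, s2-brdr2]
      interfaces: [Ethernet5, Ethernet5]
      include_in_underlay_protocol: true
```

With a /24 pool, AVD carves a small point-to-point subnet (typically a /31) out of the pool for each link, which is what you should see in the **show ip interface brief** output on the border leafs.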

@@ -822,7 +822,7 @@ Below you will see the data model snippets from `sites/site_1/group_vars/SITE1_F
```shell
make build-site-1 build-site-2 deploy-site-1 deploy-site-2
```

- ### **Verification**
+ #### Verification

Now let's check that the correct configurations were built and applied, and that the EVPN gateways are functioning.

@@ -1012,7 +1012,7 @@ From nodes `s1-brdr1` and `s1-brdr2`, we can check the following show commands.

## **Final Fabric Test**

At this point, your full Layer 3 Leaf Spine with EVPN VXLAN and EVPN gateway functionality should be ready to go. Let's perform some final tests to verify everything is working.

From `s1-host1`, ping both `s2-host1` and `s2-host2`.

@@ -1215,9 +1215,9 @@ should also have our newly defined syslog servers.

## **Adding additional VLANs**

One of the many benefits of AVD is the ability to deploy new services quickly and efficiently by modifying a small amount of the data model. Let's add some new VLANs to our fabric.

For this, we will need to modify the two `_NETWORK_SERVICES.yml` data model vars files. To keep things simple, we will add two new VLANs, **30** and **40**.

Copy the following pieces of the data model and paste them right below the last VLAN entry in both `SITE1_NETWORK_SERVICES.yml` and `SITE2_NETWORK_SERVICES.yml`. Ensure the `- id:` entries all line up.
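The snippet itself is collapsed in this view. As a sketch only, the entries for the two new VLANs would look something like the following; the names and virtual IPs here are assumptions, not the workshop's actual values.

```yaml
# Illustrative only -- the real snippet lives in the workshop repo.
# Indentation must match the existing svis list under the tenant's VRF.
# With mac_vrf_vni_base: 10000, VLANs 30 and 40 map to VNIs 10030 and 10040.
        - id: 30
          name: 'Thirty'
          enabled: true
          ip_address_virtual: 10.30.30.1/24
        - id: 40
          name: 'Forty'
          enabled: true
          ip_address_virtual: 10.40.40.1/24
```

The VNI derivation is why the verification steps below filter on VNIs **10030** and **10040**.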

@@ -1267,7 +1267,7 @@ Finally, let's build out and deploy our configurations.
```shell
make build-site-1 build-site-2 deploy-site-1 deploy-site-2
```
- ### **Verification**
+ #### Verification
Now let's jump into one of the nodes, `s1-leaf1`, and check that our new VLAN SVIs were configured, as well as what appears in the VXLAN interface and EVPN table for both local and remote VTEPs.
@@ -1360,7 +1360,7 @@ Now lets jump into one of the nodes, `s1-leaf1`, and check that our new VLAN SVI
```text
MLAG Shared Router MAC is 021c.73c0.c612
```
3. Now, let's check the EVPN table. We can filter the routes to only the new VLANs by specifying the new VNIs, **10030** and **10040**.
^^Command^^
@@ -1755,4 +1755,4 @@ git branch -D add-leafs
Finally, we can go out to our forked copy of the repository and delete the **add-leafs** branch.
???+ success "Great Success!"
Congratulations! You have now successfully completed initial fabric builds and day 2 operational changes without interacting with any switch CLI!
6 changes: 3 additions & 3 deletions workshops/l3ls/overview.md
@@ -87,7 +87,7 @@ To apply AVD variables to the nodes in the fabric, we make use of Ansible group_
Each group_vars file is listed in the following tabs.

=== "SITE1_FABRIC"
At the Fabric level (SITE1_FABRIC), the following variables are defined in **group_vars/SITE1_FABRIC.yml**: the fabric name, design type (l3ls-evpn), node type defaults, interface links, and EVPN gateway functionality. In a Layer 3 Leaf Spine topology, the leaf nodes require more variables than the spines. The variables needed for the spines include:

- loopback_ipv4_pool
- bgp_as
@@ -218,7 +218,7 @@ Each group_vars file is listed in the following tabs.
```

=== "SITE1_NETWORK_SERVICES"
You add VLANs, VRFs, and EVPN-specific parameters to the Fabric by updating **group_vars/SITE1_NETWORK_SERVICES.yml**. Within the main tenant, we will supply a **mac_vrf_vni_base** value, which is used for the VLAN-to-VNI mapping under the VXLAN interface. We will then define a VRF that our VLANs will be part of and give it a VNI value for the VRF-to-VNI mapping. Finally, each VLAN SVI will be configured with a name and a single virtual IP address, which will end up configured on all `l3leaf` nodes.

``` yaml
---
```

@@ -556,7 +556,7 @@ The following diagram shows the P2P links between the four border leafs. The DCI

### Network Services

Fabric Services, such as VLANs, SVIs, and VRFs, are defined in this section. The following Site 1 example defines VLANs and SVIs for VLANs `10` and `20` in the OVERLAY VRF. We also specify a MAC VRF VNI base of 10000; this base is added to the VLAN ID to derive the VNI for the VLAN-to-VNI mapping under the VXLAN interface. Since the same VLANs are stretched across to Site 2, the network services data model is exactly the same:

``` yaml
---
```
