Important
The decryption password and the IP address of the BigIQ License Server will be announced at the start of the lab.
For this lab you will only need a laptop or workstation with a working installation of Docker (any operating system), an Internet connection, and a web browser compatible with the AWS web console. Any of the AWS-supported browsers will do:
https://aws.amazon.com/premiumsupport/knowledge-center/browsers-management-console/
- Install Docker:
- Confirm your source IP address:
- From a Linux terminal, macOS terminal, or Windows PowerShell, launch the f5-super-netops Docker container. Replace [Decryption Password] with the Decryption Password provided at the start of the lab.
docker run -p 8080:80 -p 2222:22 -it -e decryptPassword=[Decryption Password] f5devcentral/f5-super-netops-container:base
For example, if the Decryption Password is "SuperSecretPass" then the command would be:
docker run -p 8080:80 -p 2222:22 -it -e decryptPassword=SuperSecretPass f5devcentral/f5-super-netops-container:base
Wait until the f5-super-netops container has finished launching. All of the following steps are performed within it. There are two ways to access the container:
- Use the Bash shell that is already open in the container
- SSH into the Docker Container:
ssh -p 2222 snops@localhost
You will be prompted for a password. The password is: "default". After you have logged in to the Docker container via SSH, switch to the root user:
su -
The password is again "default".
- Clone the git repository for this lab, change to the working directory (marfil-f5-terraform), and run the f5-super-netops-install.sh script.
git clone https://github.com/TonyMarfil/marfil-f5-terraform
cd ./marfil-f5-terraform/
source ./scripts/f5-super-netops-install.sh
- The script will create a new AWS console login with a password. When prompted, enter an email address of your choice to use as the AWS console login, and a new AWS console password. The email address will also be used to tag all of your lab components.
- Invoke terraform.
Attention!
If the Big-IQ License Manager has already been configured, use the provided address in place of "null". Otherwise, copy and paste the commands below.
terraform plan -var bigiqLicenseManager="null"
terraform apply -var bigiqLicenseManager="null"
When 'terraform apply' completes, note the **aws_alias** and vpc-id values. Open your **aws_alias** link in a browser and log in to the AWS console with the email address and password you created during the install. You can always retrieve these values by invoking terraform output with the variable name:
terraform output aws_alias
terraform output vpc-id
Note
"But what if I forgot my password?" It is stored in the passwd file in the working directory:
cat ./passwd
- Use the **aws_alias** console link and the email address and password you created earlier to log in to the AWS console. Navigate to Services => Networking & Content Delivery => VPC and click the # VPCs count. In the search field, type your email address or the last three digits of your vpc-id. You should see your VPC details.
- Services => Compute => EC2 => Resources => # Running Instances. In the search field enter your email address. You should see your newly created instance running.
- Once your instances are green and the ELB is up and running, you can test with the command:
curl `terraform output elb_dns_name`
...and see a reply 'Hello, World'
Note
Students do not need to complete this task. The Big-IQ License Manager only needs to be created once, with enough pool licenses to accommodate the class.
Important
This version of the lab will only work on the shared Field Sales Engineers account while we test. For authenticating Big-IP virtual instances to Big-IQ License Manager, the CloudFormation templates rely on a passwd file in an S3 bucket. The buckets are not public and not accessible outside of our shared AWS account. If you want to edit this to work on a different AWS account:
- Create a passwd text file (no extension) and upload to your own S3 bucket.
- Edit f5-cloudformation-cross-az-ha-bigiq.tf.dormant. Look for:
bigiqPasswordS3Arn =
...and change the ARN to point at your own passwd file.
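The ARN edit can also be scripted. A minimal sketch, demonstrated here on a stand-in file so it is safe to try anywhere; with the real template, point sed at f5-cloudformation-cross-az-ha-bigiq.tf.dormant instead. "my-bigiq-bucket" is a hypothetical bucket name, so substitute your own:

```shell
# Create a stand-in file containing a line shaped like the one in the
# template, then rewrite its ARN in place.
printf 'bigiqPasswordS3Arn = "arn:aws:s3:::old-bucket/passwd"\n' > arn-demo.tf

# Replace whatever ARN is present with the ARN of your own passwd object.
# "my-bigiq-bucket" is a placeholder; use your actual bucket name.
sed -i 's|bigiqPasswordS3Arn = .*|bigiqPasswordS3Arn = "arn:aws:s3:::my-bigiq-bucket/passwd"|' arn-demo.tf

cat arn-demo.tf
```

The `|` delimiter avoids having to escape the slashes in the ARN.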
- SSH into the Big-IQ License Manager.
ssh -i ./MyKeyPair-[email address].pem admin@`terraform output aws_instance.bigiq.public_ip`
...so if you created an account with t.marfil@f5.io:
ssh -i ./MyKeyPair-t.marfil@f5.io.pem admin@`terraform output aws_instance.bigiq.public_ip`
...autocomplete should be even quicker: ssh -i ./M <Tab> will autocomplete with the correct key name.
- From Big-IQ tmsh, create an admin password so we can later log in to the configuration utility, and use the SOAP client to license Big-IQ with the F5-BIQ-VE-MAX-LIC license.
modify auth user admin password mylabpass
save sys config
/usr/local/bin/SOAPLicenseClient --verbose --basekey XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
- Note the terraform output value for aws_instance.bigiq.public_ip. HTTPS to this IP address from the browser and apply one or more F5-BIG-VEP3-25M-4V13-LIC pool licenses.
terraform output aws_instance.bigiq.public_ip
When you log in to Big-IQ via the configuration utility (web ui), you will have to rename bigiq1 => bigiq1.local to get past the Management Address screen, and make sure to configure NTP with pool.ntp.org. Click Next past the password screen without making any changes. Aside from the above, click Next, Next, Next... and accept all defaults.
Navigate to Big-IQ Device Manager => Operations => License Management => Licenses. Click on New License. Apply the F5-BIG-VEP3-25M-4V13-LIC pool license registration key.
License Name: bigiqLicPool
Activation Method: Automatic
Click Activate.
Accept the EULA.
Wait for Status: * Active.
We are going to launch two cloud formation templates simultaneously.
- Auto scaling the BIG-IP VE Web Application Firewall in AWS:
https://github.com/F5Networks/f5-aws-cloudformation/tree/master/supported/solutions/autoscale/waf/
- ...and the experimental version of "Deploying the BIG-IP in AWS - Clustered 2-NIC across Availability Zones" which supports automatic Big-IQ Licensing:
https://github.com/F5Networks/f5-aws-cloudformation/tree/master/supported/cluster/2nic/across-az-ha
- Let's wake up the F5 CloudFormation templates that have been lying dormant! From the f5-super-netops container shell:
mv f5-cloudformation-autoscale-waf.tf.dormant f5-cloudformation-autoscale-waf.tf
mv f5-cloudformation-cross-az-ha-bigiq.tf.dormant f5-cloudformation-cross-az-ha-bigiq.tf
terraform plan -var bigiqLicenseManager=`terraform output aws_instance.bigiq.public_ip`
terraform apply -var bigiqLicenseManager=`terraform output aws_instance.bigiq.public_ip`
- Track progress in the AWS Management Console: Services => Management Tools => CloudFormation. When done, both of your deployed CloudFormation stacks will show Status: CREATE_COMPLETE. We still have to wait ~20 minutes for the environment to be ready.
- In the f5-super-netops terminal, when Terraform finishes you should see output like the example below.
Outputs:
bigipExternalSecurityGroup = sg-xxxxxxxx
bigipManagementSecurityGroup = sg-xxxxxxxx
elb_dns_name = terraform-asg-example-xxxxxxxxx.us-east-1.elb.amazonaws.com
...
...
Terraform has successfully done its job, but we still must wait for the instances to spin up. Log back in to the AWS console to track the status of the new instances. This can take up to 20 minutes.
20 minutes later...
- Find the public management IP addresses of the three BigIP instances we created, and confirm they are up.
ssh -i ./MyKeyPair-[email address].pem admin@[public ip address or DNS name of autoscale waf bigip]
- Verify the auto-scale WAF is up and the virtual server is up.
modify auth user admin password [mylabpass]
save sys config
show ltm virtual-address
- Log in to the AWS Console and find the DNS name of the WAF autoscale load balancer: Services => EC2 => Load Balancers. Filter with your email address. Under the Description tab, look for the DNS name.
- From the f5-super-netops container, test that the HTTPS service is up:
curl -k https://waf-x-x.us-east-1.elb.amazonaws.com
...where waf-x-x.us-east-1.elb.amazonaws.com is the DNS name you noted in the AWS console.
Hello, World
ssh -i ./MyKeyPair-[email address].pem admin@[public ip address of primary cross-az ha bigip]
modify auth user admin password mylabpass
save sys config
- Navigate to AWS Console -> Services -> EC2 -> Running Instances. Note the IPv4 Public IP addresses for the two instances named: "Big-IP: f5-cluster"
- Highlight the primary Big-IP : f5-cluster instance. In the Description tab, note the first assigned Elastic IP; this is the public management IP address. Also note the Secondary private IP; this is the IP that will be assigned to the virtual server we will soon configure.
- Highlight the second Big-IP : f5-cluster instance. In the Description tab, note the first assigned Elastic IP; this is the public management IP address. Also note the Secondary private IP; this is the IP that will be assigned to the virtual server we will soon configure.
- Use the MyKeyPair-[email address].pem file generated previously to SSH to the management IP addresses of the BigIPs noted in the two steps above.
- Create an admin password and enable the bash shell so you can log in to the configuration utility (web ui).
modify auth user admin password mylabpass shell bash
save sys config
Log in to the active BigIP configuration utility (web ui).
The "HA_Across_AZs" iApp will already be deployed in the Common partition.
Download the latest iApp package from https://downloads.f5.com. I tested with iapps-1.0.0.455.0.zip.
Extract iapps-1.0.0.455.0/TCP/Release_Candidates/f5.tcp.v1.0.0rc2.tmpl. This is the tested version of the iApp.
Import f5.tcp.v1.0.0rc2.tmpl to the primary BigIP. The secondary BigIP should pick up the configuration change automatically.
Deploy an iApp using the f5.tcp.v1.0.0rc2.tmpl template.
Configure iApp:
Traffic Group: UNCHECK "Inherit traffic group from current partition / path"
Name: vs1
High Availability. What IP address do you want to use for the virtual server? Secondary private IP address of the first BigIP.
Note
The preconfigured HA_Across_AZs iApp has both IP addresses for the virtual servers prepopulated. The virtual server IP address configured here must match the virtual server IP address configured in the HA_Across_AZs iApp.
What is the associated service port? HTTP(80)
What IP address do you wish to use for the TCP virtual server in the other data center or availability zone? Secondary private IP address of the second BigIP.
Note
The preconfigured HA_Across_AZs iApp has both IP addresses for the virtual servers prepopulated. The virtual server IP address configured here must match the virtual server IP address configured in the HA_Across_AZs iApp.
Which servers are part of this pool? Private IP address of web-az1.0 and web-az2.0. Port: 80
Finished!
- Log in to the standby BigIP configuration utility (web ui) and confirm the changes are in sync.
- Confirm the virtual server is up!
curl http://52.205.85.86
StatusCode : 200
StatusDescription : OK
Content : Hello, World
...
Stop the active BigIP instance in AZ1 via the AWS console and the elastic IP will 'float' over to the second BigIP.
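One way to watch the failover happen is to poll the virtual server while you stop the instance. A minimal sketch (52.205.85.86 is the example address from the curl test above, not your lab's address, so substitute your own elastic IP):

```shell
# Poll the virtual server for a short window. During failover you should
# see a few "unreachable" lines, then responses resume once the elastic
# IP has floated over to the second BigIP.
# Substitute your own virtual server address for the example below.
for i in $(seq 1 5); do
  curl -s --max-time 2 http://52.205.85.86/ || echo "attempt $i: unreachable"
  sleep 1
done
```

Lengthen the loop (or raise the sleep) to cover the full failover window, which can take longer than a few seconds.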
Task 7 - Application Services iApp, Service Discovery iApp, and Ansible! Deploy an HTTP virtual server with an iRule to mitigate a 0-day attack.
- coming soon
- Deploy the Service Discovery iApp and use tags to automatically create and populate F5 BigIP pools.
- Deploy the previous task's iApp programmatically via Ansible.
- Deploy an HTTP virtual server with an iRule for a 0-day attack using the Application Services iApp.
- coming soon
- AWS Console -> Services -> Storage -> S3. Filter for your S3 buckets. My test email is t.marfil@f5.io, so I filter on 'marfil'. Delete your two S3 buckets prefixed with ha- and waf-.
- AWS Console => Services => Compute => EC2. Auto Scaling Groups. Filter on your email address. Same style filter as S3, no special characters. I filter on 'marfil'.
- Click on 'Instances' tab below. Select your Instance. Actions => Instance Protection => Remove Scale In Protection.
- From the f5-super-netops terminal:
terraform destroy
- After destroy completes, remove MyKeyPair-[email address] from the AWS Console: Services -> NETWORK & SECURITY -> Key Pairs -> Delete MyKeyPair-[email address].
- Remove User. From the AWS Console -> Services -> Security, Identity & Compliance -> IAM -> Users. Filter by email address. Delete user.
Note
Many thanks to Yevgeniy Brikman for his excellent book Terraform: Up and Running: Writing Infrastructure as Code (1st Edition), which helped me get started: http://shop.oreilly.com/product/0636920061939.do