Commit
For the minikube build, why bother with docker build at this rate and not just rely on DockerHub, as with my AWS deployment? Also, I want to use labels to make poking around this stuff easier: rather than getting the client by name, just leverage the tag.
Andrew Quijano committed Jun 25, 2023
1 parent d717d0d commit 9b05181
Showing 41 changed files with 72 additions and 638 deletions.
51 changes: 6 additions & 45 deletions README.md
@@ -91,12 +91,6 @@ but feel free to modify the arguments that fit your computer's specs.
eval $(minikube docker-env)
#### Running Kubernetes Commands
After starting minikube, you will need to build the necessary Docker image using
the docker build command. The resulting image must have a specific tag,
ppdt:experiment. You can build this image using the following command.
docker build -t ppdt:experiment .
The next step is to deploy the level sites, which must be deployed before any
other part of the system. This can be done with the following command.
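Given the commit's switch to relying on DockerHub instead of a local build, a minimal sketch of loading the published image into minikube instead; `<DOCKERHUB-IMAGE>` is a placeholder, since the published image name is not stated here:

```shell
# Sketch: with the image published to DockerHub, pull it directly into
# minikube instead of building it locally.
IMAGE="<DOCKERHUB-IMAGE>"        # placeholder for the published repository:tag
minikube image pull "$IMAGE"     # fetch straight into the minikube node
minikube image ls                # confirm the image is visible to pods
```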
@@ -182,43 +176,8 @@ If you want to re-build everything in the experiment, run the following
#### Installation
- First install [eksctl](https://eksctl.io/introduction/#installation)

- Create a user. Using IAM Access Analyzer, the customer inline policy needed is listed here:
* still undergoing more testing
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"ec2:AuthorizeSecurityGroupIngress",
"iam:CreateRole",
"iam:DeleteRole",
"cloudformation:*",
"ec2:RunInstances",
"iam:AttachRolePolicy",
"iam:PutRolePolicy",
"ec2:DescribeSecurityGroups",
"ec2:AssociateRouteTable",
"iam:DetachRolePolicy",
"ec2:CreateLaunchTemplate",
"ec2:DescribeInstanceTypeOfferings",
"iam:DeleteRolePolicy",
"iam:ListAttachedRolePolicies",
"ec2:DescribeVpcs",
"ec2:CreateRoute",
"iam:GetOpenIDConnectProvider",
"ec2:DescribeSubnets",
"ec2:DescribeKeyPairs",
"iam:GetRolePolicy"
],
"Resource": "*"
}
]
}
```
- Create a user with sufficient permissions

- Obtain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the user account. [See the documentation provided here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)

- Run `aws configure` to enter the access key ID and secret access key.
@@ -243,10 +202,12 @@ aws eks update-kubeconfig --name ppdt --region us-east-2
```bash
# Make sure you aren't running these too early!
kubectl apply -f eks-config/k8/level_sites
kubectl apply -f eks-config/k8/server -l role=server
kubectl apply -f eks-config/k8/server

kubectl apply -f eks-config/k8/client -l role=client
kubectl apply -f eks-config/k8/client
kubectl exec <CLIENT-SITE-POD> -- bash -c "gradle run -PchooseRole=weka.finito.client --args <VALUES-FILE>"

kubectl exec ppdt-client-deploy-5795dcd946-bctkd -- bash -c "gradle run -PchooseRole=weka.finito.client --args /data/hypothyroid.values"
```
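The commit message mentions leveraging labels rather than pod names; a hedged sketch of that idea using kubectl's label selectors, assuming the client pods carry a `role=client` label matching the labels used in eks-config/config.yaml:

```shell
# Look up the client pod by its role label instead of hard-coding the
# generated pod name (assumes the client deployment is labelled role=client).
CLIENT_POD=$(kubectl get pods -l role=client -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$CLIENT_POD" -- bash -c \
  "gradle run -PchooseRole=weka.finito.client --args /data/hypothyroid.values"
```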
- Obtain the results of the classification by running `kubectl logs` on the pods deployed on EKS.

2 changes: 1 addition & 1 deletion config.properties
@@ -1,5 +1,5 @@
level-site-ports = "9000,9001,9002,9003,9004,9005,9006,9007,9008,9009"
key_size = 1024
key_size = 2048
precision = 2
data_directory = data
server-port=10000
37 changes: 33 additions & 4 deletions eks-config/config.yaml
@@ -6,14 +6,43 @@ metadata:
region: us-east-2
version: "1.27"

# Managed Node Groups show up on AWS console
# Node Groups show up on AWS console
# Label is necessary so I can target where the pods go with kubectl apply
# For best performance, I am isolating each level-site pod to its own node.
managedNodeGroups:
# Currently I have 10 level-sites
- name: level-sites
instanceType: t2.large
labels: { role: level-site }
instanceType: t2.medium
# Create 10 EC2 Instances, 1 pod per instance
minSize: 12
maxSize: 15
desiredCapacity: 12
maxPodsPerNode: 1

# Allow communication with other node groups.
# If you have multiple node groups, this needs to be true
privateNetworking: true

# Information to tag this specific node group for tasks
labels: { role: level-site }
tags:
nodegroup-role: level-site

# You should only need 1 client to run evaluations
#- name: client
# labels: { role: client }
# instanceType: t2.medium
# minSize: 1
# maxSize: 1
# desiredCapacity: 1
# maxPodsPerNode: 1
# privateNetworking: true

# You should only need 1 server to run training job
#- name: server
# labels: { role: server }
# instanceType: t2.medium
# minSize: 1
# maxSize: 1
# desiredCapacity: 1
# maxPodsPerNode: 1
# privateNetworking: true
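Since the `role` node-group label above is what lets pods be pinned to specific nodes, a deployment meant for the level-site nodes would carry a matching nodeSelector. A sketch of the relevant fragment, assuming the deployment manifests are defined elsewhere:

```yaml
# Hypothetical deployment fragment: schedule pods onto the nodes that the
# level-sites node group labels with role=level-site.
spec:
  template:
    spec:
      nodeSelector:
        role: level-site
```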
42 changes: 0 additions & 42 deletions eks-config/k8/client/client_deployment.yaml

This file was deleted.

12 changes: 0 additions & 12 deletions eks-config/k8/client/client_service.yaml

This file was deleted.

33 changes: 0 additions & 33 deletions eks-config/k8/level_sites/level_site_01_deployment.yaml

This file was deleted.

12 changes: 0 additions & 12 deletions eks-config/k8/level_sites/level_site_01_service.yaml

This file was deleted.

33 changes: 0 additions & 33 deletions eks-config/k8/level_sites/level_site_02_deployment.yaml

This file was deleted.

12 changes: 0 additions & 12 deletions eks-config/k8/level_sites/level_site_02_service.yaml

This file was deleted.

33 changes: 0 additions & 33 deletions eks-config/k8/level_sites/level_site_03_deployment.yaml

This file was deleted.

12 changes: 0 additions & 12 deletions eks-config/k8/level_sites/level_site_03_service.yaml

This file was deleted.

33 changes: 0 additions & 33 deletions eks-config/k8/level_sites/level_site_04_deployment.yaml

This file was deleted.

12 changes: 0 additions & 12 deletions eks-config/k8/level_sites/level_site_04_service.yaml

This file was deleted.
