Merge pull request #4 from AndrewQuijano/k8-jobs
Upgraded workflow
AndrewQuijano authored Jun 21, 2023
2 parents d16a469 + a1a1496 commit 9926d32
Showing 31 changed files with 587 additions and 314 deletions.
18 changes: 13 additions & 5 deletions .github/workflows/build-gradle-project.yml
@@ -8,12 +8,20 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout project sources
-        uses: actions/checkout@v2
-      - name: Setup Gradle
-        uses: gradle/gradle-build-action@v2
+        uses: actions/checkout@v3
+
+      - name: Install packages
+        run: sudo apt-get install -y graphviz
-      - name: Run check with Gradle Wrapper
-        run: bash gradlew build
+
+      - name: Setup Gradle
+        uses: actions/setup-java@v3
+        with:
+          distribution: 'oracle'
+          java-version: '17'
+          cache: 'gradle'
+      - run: sh gradlew build
+
       - name: Upload coverage reports to Codecov
         uses: codecov/codecov-action@v3
         with:
           token: ${{ secrets.CODECOV_TOKEN }}
2 changes: 1 addition & 1 deletion Dockerfile
@@ -9,7 +9,7 @@ RUN mkdir /data
 ADD . /code/

 RUN mv /code/scripts/* /scripts/
-RUN mv /code/data/* /data/
+RUN mv /code/data/* /data/
 RUN chmod +x /scripts/*
 WORKDIR /code

67 changes: 39 additions & 28 deletions README.md
@@ -1,5 +1,6 @@
 # MPC-PPDT
 [![Build Gradle project](https://github.com/AndrewQuijano/MPC-PPDT/actions/workflows/build-gradle-project.yml/badge.svg)](https://github.com/AndrewQuijano/MPC-PPDT/actions/workflows/build-gradle-project.yml)
+[![codecov](https://codecov.io/gh/AndrewQuijano/MPC-PPDT/branch/main/graph/badge.svg?token=eEtEvBZYu9)](https://codecov.io/gh/AndrewQuijano/MPC-PPDT)
 Implementation of the PPDT in the paper "Privacy Preserving Decision Trees in a Multi-Party Setting: a Level-Based Approach"

## Libraries
@@ -61,12 +62,9 @@ drawing of what the DT looks like.
 To make it easier for deploying on the cloud, we also provided a method to export our system into Kubernetes.
 This would assume one execution rather than multiple executions.

-#### Set Training and testing files
-First, you need to edit the environment variables:
-1. In the `client_deployment.yaml` file, you need to change the value of `VALUES` to point to the input vector to evaluate
-2. In the `server_site_deployment.yaml` file, you need to change the value of the `TRAINING` to point to the file with the training data.
+#### Set Training data set
+In the `server_site_training_job.yaml` file, you need to change the first argument to point to the right ARFF file.

-*To be updated with converting to jobs*
 #### Creating a Kubernetes Secret
 You should set up a Kubernetes secret file, called `ppdt-secrets.yaml` in the `k8/level-sites` folder.
 In the yaml file, you will need to replace <SECRET_VALUE> with a random string encoded in Base64.
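The secret-creation step above can be sketched in shell. Note this is only a sketch: the Secret's `metadata.name` and the `signing-key` field below are assumptions for illustration, not taken from the repository's template — match them to the file shipped in `k8/level-sites`.

```shell
# Generate a random 32-byte value and Base64-encode it, as the README asks.
SECRET_VALUE=$(head -c 32 /dev/urandom | base64 | tr -d '\n')

# Write the manifest; "ppdt-secrets" and "signing-key" are hypothetical names.
cat > ppdt-secrets.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ppdt-secrets
type: Opaque
data:
  signing-key: ${SECRET_VALUE}
EOF
echo "wrote ppdt-secrets.yaml"
```

The manifest would then be applied with `kubectl apply -f ppdt-secrets.yaml` alongside the level-site resources.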
Expand Down Expand Up @@ -126,7 +124,7 @@ ppdt-level-site-10-deploy-67b7c5689b-rkl6r 1/1 Running 1 (2m39s ago)
 ```
 It does take time for the level-site to be able to accept connections. Run the following command on a level-site,
-and wait for an output in standard output saying `Ready to accept connections`. Set `<LEVEL-SITE-POD-NAME>`
+and wait for an output in standard output saying `Ready to accept connections at: 9000`. Set `<LEVEL-SITE-POD-NAME>`
 to one of the pod names from the output, e.g. `ppdt-level-site-01-deploy-7dbf5b4cdd-wz6q7`.

 kubectl logs -f <LEVEL-SITE-POD-NAME>
@@ -136,27 +134,42 @@ start the server site. To do this, run the following command.
 kubectl apply -f k8/server_site

-To verify that the server site is finished running, use the following commands to confirm the server_site is _running_
-and check the logs to confirm we see `Training Successful` for all the level-sites.
+To verify that the server site is ready, use the following commands to confirm the server_site is _running_
+and check the logs to confirm we see `Server-site ready to get public keys from client-site` so we can run the client.

 kubectl get pods
 kubectl logs -f <SERVER-SITE-POD-NAME>

-After the server site has completed successfully we are ready to run the client.
+After the server site is ready, we are ready to run the client.

-To run the client, simply run the following command.
+To run a classification, you need to pass a command to the client too.

 kubectl apply -f k8/client
+kubectl exec <CLIENT-SITE-POD> -- bash -c "gradle run -PchooseRole=weka.finito.client --args <VALUES-FILE>"

-To get results, all you need to do is print the stdout of each of the level_sites
-and from the client. To do this, first get all the pods.
-
-kubectl get pods
-
-Then, for all level_sites and clients you can get the printout of stdout by
-using the logs command for each pod.
-
-kubectl logs <POD-NAME>
+To get the results, access the logs as described in the previous steps for both the client and level-sites.
+
+#### Re-running with different experiments
+- *Case 1: Re-run with a different testing set*
+  As the job created the pod, connect to the pod and run the modified gradle command with the other VALUES file.
+  ```bash
+  kubectl exec <CLIENT-SITE-POD> -- bash -c "gradle run -PchooseRole=weka.finito.client --args <NEW-VALUES-FILE>"
+  ```
+- *Case 2: Train level-sites with a new DT and a new testing set*
+  You need to edit the `server_site_training_job.yaml` file to point to a new ARFF file.
+  ```bash
+  # Delete the jobs
+  kubectl delete -f k8/server-site
+  kubectl delete -f k8/client
+
+  # Re-apply the jobs
+  kubectl apply -f k8/server-site
+  # Wait a few seconds for the server-site to be ready to get the client key,
+  # or just check that the server-site is ready as shown in the previous section
+  kubectl apply -f k8/client
+  kubectl exec <CLIENT-SITE-POD> -- bash -c "gradle run -PchooseRole=weka.finito.client --args <VALUES-FILE>"
+  ```
#### Clean up

If you want to re-build everything in the experiment, run the following
@@ -167,9 +180,9 @@ If you want to re-build everything in the experiment, run the following
 ### Running it on an EKS Cluster

 #### Installation
-1. First install [eksctl](https://eksctl.io/introduction/#installation)
+- First install [eksctl](https://eksctl.io/introduction/#installation)

-2. Create a user. Using Access analyzer, the customer inline policy needed is listed here:
+- Create a user. Using Access analyzer, the customer inline policy needed is listed here:
   * still undergoing more testing
 ```json
 {
@@ -206,30 +219,28 @@
]
}
```
+- Obtain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the user account. [See the documentation provided here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
-3. Obtain AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the user account. [See the documentation provided here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
+- Run `aws configure` to input the access id and credential.
-4. run `aws configure` to input the access id and credential.

-5. Run the following command to create the cluster
+- Run the following command to create the cluster
 ```bash
 eksctl create cluster --config-file eks-config/config.yaml
 ```

-5. Confirm the EKS cluster exists using the following
+- Confirm the EKS cluster exists using the following
 ```bash
 eksctl get clusters --region us-east-2
 ```

 #### Running the experiment
-1. Once you confirm the cluster is created, you need to register the cluster with kubectl:
+- Once you confirm the cluster is created, you need to register the cluster with kubectl:
 ```bash
 aws eks update-kubeconfig --name ppdt --region us-east-2
 ```

-2. Run the same commands as shown in [here](#running-kubernetes-commands)
-
-3. Obtain the results of the classification using `kubectl logs` to the pods deployed on EKS.
+- Run the same commands as shown [here](#running-kubernetes-commands)
+- Obtain the results of the classification using `kubectl logs` on the pods deployed on EKS.

#### Clean up
Destroy the EKS cluster using the following:
9 changes: 7 additions & 2 deletions build.gradle
@@ -11,6 +11,11 @@ repositories {
     mavenCentral()
 }

+java {
+    sourceCompatibility = JavaVersion.VERSION_17
+    targetCompatibility = JavaVersion.VERSION_17
+}
+
 dependencies {
     implementation 'junit:junit:4.13.1'
     testImplementation 'org.junit.jupiter:junit-jupiter-api:5.8.1'
@@ -42,7 +47,7 @@ gradle.projectsEvaluated {

 jacocoTestReport {
     reports {
-        xml.required=false
+        xml.required=true
         html.required=true
     }
 }
@@ -51,4 +56,4 @@ check.dependsOn jacocoTestReport

 application {
     mainClass.set(project.findProperty("chooseRole").toString())
-}
+}
2 changes: 2 additions & 0 deletions config.properties
@@ -2,3 +2,5 @@ level-site-ports = "9000,9001,9002,9003,9004,9005,9006,9007,9008,9009"
 key_size = 1024
 precision = 2
 data_directory = data
+server-port=10000
+server-ip=127.0.0.1
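Since the new `server-port` lives alongside the existing level-site port range, a quick shell check can confirm the values do not collide. This is only a sketch; the two values are copied from the `config.properties` fragment above.

```shell
# Values from the config.properties fragment above.
LEVEL_SITE_PORTS="9000,9001,9002,9003,9004,9005,9006,9007,9008,9009"
SERVER_PORT=10000

# Fail loudly if the server port duplicates a level-site port.
if echo "$LEVEL_SITE_PORTS" | tr ',' '\n' | grep -qx "$SERVER_PORT"; then
  echo "collision: server-port ${SERVER_PORT} is already a level-site port"
else
  echo "ok: server-port ${SERVER_PORT} is free"
fi
```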
8 changes: 2 additions & 6 deletions env.sh
@@ -1,10 +1,6 @@
 #!/bin/bash
 export PRECISION=2
 export PPDT_DATA_DIR="./data/"
 export PPDT_KEY_SIZE=1024
-export PORT_NUM=9000
-export LEVEL_SITE_DOMAINS="levelsite01,levelsite02,levelsite03,levelsite04,levelsite05,levelsite06,levelsite07,levelsite08,levelsite09,levelsite10"
-export VALUES="hypothyroid.values"
-export TRAINING="hypothyroid.arff"
-export TREE_ROLE="SERVER"
-export TIME_METHODS="false"
+export LEVEL_SITE_DOMAINS="ppdt-level-site-01-service,ppdt-level-site-02-service,ppdt-level-site-03-service,ppdt-level-site-04-service,ppdt-level-site-05-service,ppdt-level-site-06-service,ppdt-level-site-07-service,ppdt-level-site-08-service,ppdt-level-site-09-service,ppdt-level-site-10-service"
+export TREE_ROLE="SERVER"
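The rewritten `LEVEL_SITE_DOMAINS` value must name one Kubernetes service per level-site. A small sanity check, sketched below using the value from the `env.sh` diff above, counts the comma-separated entries:

```shell
# The ten level-site services, exactly as exported in the new env.sh.
export LEVEL_SITE_DOMAINS="ppdt-level-site-01-service,ppdt-level-site-02-service,ppdt-level-site-03-service,ppdt-level-site-04-service,ppdt-level-site-05-service,ppdt-level-site-06-service,ppdt-level-site-07-service,ppdt-level-site-08-service,ppdt-level-site-09-service,ppdt-level-site-10-service"

# Count the comma-separated entries; the deployments expect exactly ten.
SITE_COUNT=$(echo "$LEVEL_SITE_DOMAINS" | tr ',' '\n' | wc -l)
echo "level sites: $SITE_COUNT"
```

Each name must also match the `metadata.name` of a service in `k8/level_sites`, so a typo here shows up as a connection failure at classification time.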
20 changes: 10 additions & 10 deletions k8/client/client_deployment.yaml
@@ -18,25 +18,25 @@ spec:
       - name: ppdt-client-deploy
         image: ppdt:experiment
         ports:
-          - containerPort: 8000
+          - containerPort: 9000
         env:
           - name: TREE_ROLE
-            value: "CLIENT"
+            value: "CLIENT"

           - name: PRECISION
             value: "2"

           - name: PPDT_DATA_DIR
             value: "/data/"

-          - name: PPDT_KEY_SIZE
-            value: "1024"
-
-          - name: PORT_NUM
-            value: "9000"
-
           - name: LEVEL_SITE_DOMAINS
             value: "ppdt-level-site-01-service,ppdt-level-site-02-service,ppdt-level-site-03-service,ppdt-level-site-04-service,ppdt-level-site-05-service,ppdt-level-site-06-service,ppdt-level-site-07-service,ppdt-level-site-08-service,ppdt-level-site-09-service,ppdt-level-site-10-service"

-          - name: VALUES
-            value: "hypothyroid.values"
+          - name: PPDT_KEY_SIZE
+            value: "1024"

+          - name: SERVER
+            value: "ppdt-server-site-service"
+
+          - name: GRADLE_USER_HOME
+            value: "gradle_user_home"
7 changes: 3 additions & 4 deletions k8/client/client_service.yaml
@@ -4,10 +4,9 @@ metadata:
   name: ppdt-client-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-client-deploy
   ports:
-  - protocol: TCP
-    port: 9000
-    targetPort: 9000
+    - protocol: TCP
+      port: 9000
+      targetPort: 9000
   type: NodePort
1 change: 0 additions & 1 deletion k8/level_sites/level_site_01_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-01-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-01-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_02_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-02-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-02-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_03_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-03-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-03-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_04_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-04-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-04-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_05_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-05-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-05-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_06_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-06-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-06-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_07_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-07-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-07-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_08_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-08-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-08-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_09_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-09-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-09-deploy
   ports:
     - protocol: TCP
1 change: 0 additions & 1 deletion k8/level_sites/level_site_10_service.yaml
@@ -4,7 +4,6 @@ metadata:
   name: ppdt-level-site-10-service
 spec:
   selector:
-    #app: assignment3-django-deploy
     pod: ppdt-level-site-10-deploy
   ports:
     - protocol: TCP
36 changes: 0 additions & 36 deletions k8/server_site/server_site_deployment.yaml

This file was deleted.
