Merge pull request #95 from spcl/dev
Full OpenWhisk support.
mcopik authored May 30, 2022
2 parents 9dcbcc9 + 4c02784 commit 3ac761a
Showing 113 changed files with 3,446 additions and 882 deletions.
12 changes: 8 additions & 4 deletions .circleci/config.yml
@@ -1,7 +1,7 @@
version: 2.1

orbs:
python: circleci/python@0.2.1
python: circleci/python@1.4.0

jobs:
linting:
@@ -12,7 +12,11 @@ jobs:
key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
- run:
command: |
python3 install.py --aws --azure --gcp --dont-rebuild-docker-images --no-local
sudo apt update && sudo apt install libcurl4-openssl-dev
name: Install curl-config from Ubuntu APT
- run:
command: |
python3 install.py --aws --azure --gcp --no-local
name: Install pip dependencies
- run:
command: |
@@ -40,8 +44,8 @@ jobs:
then
ls $HOME/docker/*.tar.gz | xargs -I {file} sh -c "zcat {file} | docker load";
else
docker pull mcopik/serverless-benchmarks:build.aws.python.3.6
docker pull mcopik/serverless-benchmarks:build.aws.nodejs.10.x
docker pull mcopik/serverless-benchmarks:build.aws.python.3.7
docker pull mcopik/serverless-benchmarks:build.aws.nodejs.12.x
fi
name: Load Docker images
- run:
1 change: 1 addition & 0 deletions .dockerignore
@@ -6,3 +6,4 @@ config
cache
python-venv
regression-*
*_code
4 changes: 4 additions & 0 deletions .gitignore
@@ -170,3 +170,7 @@ dmypy.json
sebs-*
# cache
cache

# IntelliJ IDEA files
.idea
*.iml
6 changes: 6 additions & 0 deletions .mypy.ini
@@ -30,5 +30,11 @@ ignore_missing_imports = True
[mypy-google.api_core]
ignore_missing_imports = True

[mypy-googleapiclient.discovery]
ignore_missing_imports = True

[mypy-googleapiclient.errors]
ignore_missing_imports = True

[mypy-testtools]
ignore_missing_imports = True
227 changes: 50 additions & 177 deletions README.md
@@ -1,32 +1,56 @@
# SeBS: Serverless Benchmark Suite

**FaaS benchmarking suite for serverless functions with automatic build, deployment, and measurements.**

[![CircleCI](https://circleci.com/gh/spcl/serverless-benchmarks.svg?style=shield)](https://circleci.com/gh/spcl/serverless-benchmarks)
![Release](https://img.shields.io/github/v/release/spcl/serverless-benchmarks)
![License](https://img.shields.io/github/license/spcl/serverless-benchmarks)
![GitHub issues](https://img.shields.io/github/issues/spcl/serverless-benchmarks)
![GitHub pull requests](https://img.shields.io/github/issues-pr/spcl/serverless-benchmarks)

SeBS is a diverse suite of FaaS benchmarks that allows an automatic performance analysis of
# SeBS: Serverless Benchmark Suite

**FaaS benchmarking suite for serverless functions with automatic build, deployment, and measurements.**

![Overview of SeBS features and components.](docs/overview.png)

SeBS is a diverse suite of FaaS benchmarks that allows automatic performance analysis of
commercial and open-source serverless platforms. We provide a suite of
[benchmark applications](#benchmark-applications) and [experiments](#experiments),
[benchmark applications](#benchmark-applications) and [experiments](#experiments)
and use them to test and evaluate different components of FaaS systems.
See the [installation instructions](#installation) to learn how to configure SeBS to use selected
cloud services and [usage instructions](#usage) to automatically launch experiments in the cloud!

SeBS provides support for automatic deployment and invocation of benchmarks on
AWS Lambda, Azure Functions, Google Cloud Functions, and a custom, Docker-based local
evaluation platform. See the [documentation on cloud providers](docs/platforms.md)
to learn how to provide SeBS with cloud credentials.

SeBS provides support for **automatic deployment** and invocation of benchmarks on
commercial and black-box platforms
[AWS Lambda](https://aws.amazon.com/lambda/),
[Azure Functions](https://azure.microsoft.com/en-us/services/functions/),
and [Google Cloud Functions](https://cloud.google.com/functions).
Furthermore, we support the open-source platform [OpenWhisk](https://openwhisk.apache.org/)
and offer a custom, Docker-based local evaluation platform.
See the [documentation on cloud providers](docs/platforms.md)
for details on configuring each platform in SeBS.
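For instance, on AWS, credentials can typically be supplied through the standard environment variables before running SeBS; this is a minimal sketch assuming environment-variable configuration (docs/platforms.md describes the mechanism each platform actually supports):

```
# Sketch: standard AWS credential variables; see docs/platforms.md for
# the configuration mechanism each platform actually supports.
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
```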
The documentation describes in detail [the design and implementation of our
tool](docs/design.md); see the [modularity](docs/modularity.md)
section to learn how SeBS can be extended with new platforms, benchmarks, and experiments.
Find out more about our project in [a paper summary](https://mcopik.github.io/projects/sebs/).

Do you have further questions not answered by our documentation?
Did you run into trouble installing or using SeBS?
Or do you want to use SeBS in your work and need new features?
Feel free to reach out to us through GitHub issues or by writing to <marcin.copik@inf.ethz.ch>.

SeBS can be used with our Docker image `spcleth/serverless-benchmarks:latest`, or the tool
can be [installed locally](#installation).
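A minimal sketch of running that image, assuming an interactive entrypoint (the exact invocation and any required volume mounts are assumptions; consult the installation docs):

```
# Sketch only: start an interactive container with the SeBS image.
# The entrypoint and required mounts are assumptions, not the documented interface.
docker run -it spcleth/serverless-benchmarks:latest
```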

### Paper
For more information on how to configure, use, and extend SeBS, see our documentation:

* [How to use SeBS?](docs/usage.md)
* [Which benchmark applications are offered?](docs/benchmarks.md)
* [Which experiments can be launched to evaluate FaaS platforms?](docs/experiment.md)
* [How to configure serverless platforms?](docs/platforms.md)
* [How does SeBS build and deploy functions?](docs/build.md)
* [How is the SeBS package designed?](docs/design.md)
* [How to extend SeBS with new benchmarks, experiments, and platforms?](docs/modularity.md)

### Publication

When using SeBS, please cite our [Middleware '21 paper](https://dl.acm.org/doi/abs/10.1145/3464298.3476133).
An extended version of our paper is [available on arXiv](https://arxiv.org/abs/2012.14132), and you can
@@ -35,39 +59,28 @@ You can cite our software repository as well, using the citation button on the right.

```
@inproceedings{copik2021sebs,
author={Marcin Copik and Grzegorz Kwasniewski and Maciej Besta and Michal Podstawski and Torsten Hoefler},
title={SeBS: A Serverless Benchmark Suite for Function-as-a-Service Computing},
author = {Copik, Marcin and Kwasniewski, Grzegorz and Besta, Maciej and Podstawski, Michal and Hoefler, Torsten},
title = {SeBS: A Serverless Benchmark Suite for Function-as-a-Service Computing},
year = {2021},
isbn = {9781450385343},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3464298.3476133},
doi = {10.1145/3464298.3476133},
booktitle = {Proceedings of the 22nd International Middleware Conference},
pages = {64–78},
numpages = {15},
keywords = {benchmark, serverless, FaaS, function-as-a-service},
location = {Qu\'{e}bec city, Canada},
series = {Middleware '21}
}
```

## Benchmark Applications

For details on benchmark selection and their characterization, please refer to [our paper](#paper).

| Type | Benchmark | Languages | Description |
| :--- | :---: | :---: | :---: |
| Webapps | 110.dynamic-html | Python, Node.js | Generate dynamic HTML from a template. |
| Webapps | 120.uploader | Python, Node.js | Uploads a file from a provided URL to cloud storage. |
| Multimedia | 210.thumbnailer | Python, Node.js | Generate a thumbnail of an image. |
| Multimedia | 220.video-processing | Python | Add a watermark and generate a GIF of a video file. |
| Utilities | 311.compression | Python | Create a .zip file for a group of files in storage and return it to the user for download. |
| Utilities | 504.dna-visualization | Python | Creates visualization data for a DNA sequence. |
| Inference | 411.image-recognition | Python | Image recognition with ResNet and PyTorch. |
| Scientific | 501.graph-pagerank | Python | PageRank implementation with igraph. |
| Scientific | 501.graph-mst | Python | Minimum spanning tree (MST) implementation with igraph. |
| Scientific | 501.graph-bfs | Python | Breadth-first search (BFS) implementation with igraph. |

## Installation

Requirements:
- Docker (at least version 19)
- Python 3.6+ with:
- Python 3.7+ with:
- pip
- venv
- `libcurl` and its headers must be available on your system to install `pycurl`
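On Debian-based systems, the headers can be installed via APT (the same step this commit adds to the CircleCI config):

```
sudo apt update && sudo apt install libcurl4-openssl-dev
```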
@@ -78,7 +91,7 @@ Requirements:
To install the benchmarks with support for all platforms, use:

```
./install.py --aws --azure --gcp --local
./install.py --aws --azure --gcp --openwhisk --local
```

It will create a virtual environment in `python-virtualenv`, install necessary Python
@@ -92,153 +105,12 @@ virtual environment:
Now you can deploy serverless experiments :-)
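A minimal sketch of activating that environment, assuming the default `python-virtualenv` location and a POSIX shell:

```
# Assumes install.py created the environment in ./python-virtualenv
source python-virtualenv/bin/activate
```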

The installation of additional platforms is controlled with the `--platform` and `--no-platform`
switches. Currently, the default behavior for `install.py` is to install only the local
environment.
switches. Currently, the default behavior for `install.py` is to install only the
local environment.
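For example, a sketch of installing only AWS support while skipping the local environment, using flags that also appear in the CI configuration:

```
./install.py --aws --no-local
```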

**Make sure** that your Docker daemon is running and your user has sufficient permissions to use it. Otherwise, you might see many "Connection refused" and "Permission denied" errors when using SeBS.

To verify the correctness of installation, you can use [our regression testing](#regression).

## Usage

SeBS has three basic commands: `benchmark`, `experiment`, and `local`.
For each command, you can pass the `--verbose` flag to increase the verbosity of the output.
By default, all scripts create a cache in the `cache` directory to store code with
dependencies and information on allocated cloud resources.
Benchmarks are rebuilt after a change in the source code is detected.
To enforce redeployment of code and benchmark input, use the flags `--update-code`
and `--update-storage`, respectively.
**Note:** the cache does not support updating the cloud region. If you want to deploy benchmarks
to a new cloud region, use a new cache directory.
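For example, to force both updates on a single benchmark run (a sketch combining these flags with the `invoke` command shown below):

```
./sebs.py benchmark invoke 110.dynamic-html test --config config/example.json --deployment aws --update-code --update-storage
```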

### Benchmark

This command is used to build, deploy, and execute a serverless benchmark in the cloud.
The example below invokes the benchmark `110.dynamic-html` on AWS via the standard HTTP trigger.

```
./sebs.py benchmark invoke 110.dynamic-html test --config config/example.json --deployment aws --verbose
```

To configure your benchmark, change settings in the config file or use command-line options.
The full list is available by running `./sebs.py benchmark invoke --help`.

### Regression

Additionally, we provide a regression option to execute all benchmarks on a given platform.
The example below demonstrates how to run the regression suite with `test` input size on AWS.

```
./sebs.py benchmark regression test --config config/example.json --deployment aws
```

The regression can be executed on a single benchmark as well:

```
./sebs.py benchmark regression test --config config/example.json --deployment aws --benchmark-name 120.uploader
```

### Experiment

This command is used to execute benchmarks described in the paper. The example below runs the experiment **perf-cost**:

```
./sebs.py experiment invoke perf-cost --config config/example.json --deployment aws
```

The configuration specifies that benchmark **110.dynamic-html** is executed 50 times, with 50 concurrent invocations, and both cold and warm invocations are recorded.

```json
"perf-cost": {
"benchmark": "110.dynamic-html",
"experiments": ["cold", "warm"],
"input-size": "test",
"repetitions": 50,
"concurrent-invocations": 50,
"memory-sizes": [128, 256]
}
```

To download cloud metrics and process the invocations into a .csv file with data, run the `process` subcommand:

```
./sebs.py experiment process perf-cost --config config/example.json --deployment aws
```

### Local

In addition to the cloud deployment, we offer the ability to launch benchmarks locally with the help of [minio](https://min.io/) storage.
This makes it possible to debug benchmarks and characterize their performance locally.

To launch Docker containers, use the following command; this example launches benchmark `110.dynamic-html` with size `test`:

```
./sebs.py local start 110.dynamic-html test out.json --config config/example.json --deployments 1
```

The output file `out.json` will contain information on the deployed containers and the endpoints that can be used to invoke functions:

```
{
  "functions": [
    {
      "benchmark": "110.dynamic-html",
      "hash": "5ff0657337d17b0cf6156f712f697610",
      "instance_id": "e4797ae01c52ac54bfc22aece1e413130806165eea58c544b2a15c740ec7d75f",
      "name": "110.dynamic-html-python-128",
      "port": 9000,
      "triggers": [],
      "url": "172.17.0.3:9000"
    }
  ],
  "inputs": [
    {
      "random_len": 10,
      "username": "testname"
    }
  ]
}
```

In our example, we can use `curl` to invoke the function with the provided input:

```
curl 172.17.0.3:9000 --request POST --data '{"random_len": 10,"username": "testname"}' --header 'Content-Type: application/json'
```

To stop containers, you can use the following command:

```
./sebs.py local stop out.json
```

The stopped containers won't be removed automatically unless the option `--remove-containers` is passed to the `start` command.

## Experiments

For details on experiments and methodology, please refer to [our paper](#paper).

#### Performance & cost

Invokes the given benchmark a selected number of times, measuring the time and cost of invocations.
Supports `cold` and `warm` invocations with a selected number of concurrent invocations.
In addition, to accurately measure the overheads of Azure Function Apps, we offer `burst` and `sequential` invocation types that do not distinguish
between cold and warm startups.
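A hedged configuration sketch, mirroring the perf-cost example above; treating `burst` and `sequential` as values of the `experiments` list is an assumption based on the description:

```json
"perf-cost": {
    "benchmark": "110.dynamic-html",
    "experiments": ["burst", "sequential"],
    "input-size": "test",
    "repetitions": 50,
    "concurrent-invocations": 50,
    "memory-sizes": [128, 256]
}
```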

#### Network ping-pong

Measures the distribution of network latency between benchmark driver and function instance.

#### Invocation overhead

The experiment performs a clock drift synchronization protocol to accurately measure the startup time of a function by comparing
benchmark driver and function timestamps.

#### Eviction model

Executes test functions multiple times, with varying size, memory, and runtime configurations, to check how long function instances stay alive.
The results help to estimate analytical models describing cold startups.
Currently supported only on AWS.
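Each of these experiments is launched through the same `experiment invoke` interface as perf-cost; the experiment identifiers below are assumptions inferred from the section titles, not confirmed names:

```
# Identifier spellings are assumptions; consult the experiments documentation
# for the exact names accepted by ./sebs.py.
./sebs.py experiment invoke network-ping-pong --config config/example.json --deployment aws
./sebs.py experiment invoke invocation-overhead --config config/example.json --deployment aws
./sebs.py experiment invoke eviction-model --config config/example.json --deployment aws
```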
To verify the correctness of installation, you can use [our regression testing](docs/usage.md#regression).

## Authors

@@ -247,4 +119,5 @@ Currently supported only on AWS.
* [Nico Graf (ETH Zurich)](https://github.com/ncograf/) - contributed implementation of regression tests, bugfixes, and helped with testing and documentation.
* [Kacper Janda](https://github.com/Kacpro), [Mateusz Knapik](https://github.com/maknapik), [JmmCz](https://github.com/JmmCz), AGH University of Science and Technology - contributed together Google Cloud support.
* [Grzegorz Kwaśniewski (ETH Zurich)](https://github.com/gkwasniewski) - worked on the modeling experiments.
* [Paweł Żuk (University of Warsaw)](https://github.com/pmzuk) - contributed OpenWhisk support.

9 changes: 9 additions & 0 deletions benchmarks/000.microbenchmarks/010.sleep/nodejs/package.json
@@ -0,0 +1,9 @@
{
  "name": "",
  "version": "1.0.0",
  "description": "",
  "author": "",
  "license": "",
  "dependencies": {
  }
}
@@ -1 +1 @@
jinja2==2.10.3
jinja2>=2.10.3
3 changes: 1 addition & 2 deletions benchmarks/100.webapps/120.uploader/nodejs/package.json
@@ -4,8 +4,7 @@
"description": "",
"author": "",
"license": "",
"dependencies": {},
"devDependencies": {
"dependencies": {
"request": "^2.88.0"
}
}
@@ -5,6 +5,6 @@
"author": "",
"license": "",
"dependencies": {
"sharp": "^0.23.4"
"sharp": "^0.25"
}
}
@@ -0,0 +1 @@
Pillow==9.0.0
1 change: 1 addition & 0 deletions benchmarks/200.multimedia/220.video-processing/init.sh
@@ -8,6 +8,7 @@ pushd ${DIR} > /dev/null
tar -xf ffmpeg-release-amd64-static.tar.xz
rm *.tar.xz
mv ffmpeg-* ffmpeg
rm ffmpeg/ffprobe
popd > /dev/null

# copy watermark
