
Commit
[NEW] NST v1.1.0 - Documentation Website
boromir674 committed Nov 15, 2023
2 parents fb6ec26 + f1d9e22 commit 42bd268
Showing 20 changed files with 1,846 additions and 756 deletions.
403 changes: 111 additions & 292 deletions .github/workflows/test.yaml

Large diffs are not rendered by default.

49 changes: 49 additions & 0 deletions .readthedocs.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,49 @@
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.8"
# You can also specify other tool versions:
# nodejs: "19"
# rust: "1.64"
# golang: "1.19"

# ALL JOBS implied: https://docs.readthedocs.io/en/stable/builds.html
jobs:
# post_system_dependencies:
# - python3 -m pip install --user poetry
pre_install:
- python --version
# generate compatible and pinned dependencies in pip format, for python3.8
- python -m pip install poetry
- python -m poetry export -o req-docs.txt -E docs
post_install:
- python -m pip install -e .
pre_build:
- python ./scripts/visualize-ga-workflow.py > ./docs/cicd_mermaid.md
- python ./scripts/visualize-dockerfile.py > ./docs/dockerfile_mermaid.md


# Build documentation in the "docs/" directory with mkdocs
mkdocs:
configuration: mkdocs.yml

# Extra formats, only supported by Sphinx
# formats:
# - epub
# - pdf


# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: req-docs.txt
21 changes: 21 additions & 0 deletions CHANGELOG.md
@@ -1,6 +1,27 @@
# Changelog


## 1.1.0 (2023-11-15)

### Changes

##### feature
- include wheel into the python distribution for PyPI

##### documentation
- add Doc Pages content and use material theme
- document cicd pipeline, by visualizing the GitHub Actions Workflow as a graph
- automatically create nav tree of API refs, from discovered docstrings in *.py
- update README

##### build
- default docker build stage includes vgg and demo cmd

##### ci
- call reusable workflow to handle PyPI Publish Job
- run Docker Job from reusable workflow


## 1.0.1 (2023-11-05)

**CI Docker Behaviour**
11 changes: 8 additions & 3 deletions Dockerfile
@@ -119,6 +119,7 @@ ENV PATH="/root/.local/bin:$PATH"
# END of POC

## Build Time Args ##
# unlike ENV vars, ARGs are not available at run time

ARG VGG_NAME="imagenet-vgg-verydeep-19.mat"

@@ -157,8 +158,8 @@ FROM prod_ready AS prod_demo

ARG REPO_DEMO_IMAGES_LOCATION="tests/data"

ARG DEMO_CONTENT_IMAGE="${REPO_DEMO_IMAGES_LOCATION}/blue-red_w300-h225.jpg"
ARG DEMO_STYLE_IMAGE="${REPO_DEMO_IMAGES_LOCATION}/canoe_water_w300-h225.jpg"
ARG DEMO_CONTENT_IMAGE="${REPO_DEMO_IMAGES_LOCATION}/canoe_water_w300-h225.jpg"
ARG DEMO_STYLE_IMAGE="${REPO_DEMO_IMAGES_LOCATION}/blue-red_w300-h225.jpg"

WORKDIR /app

@@ -178,10 +179,14 @@ ENV STYLE_IMAGE_DEMO="/app/${DEMO_STYLE_IMAGE}"
CMD ["nst", "demo"]


FROM prod_ready as default

CMD [ "nst" ]

### Stage: Default Target (for Production)
# Just to allow `docker build` to use this as target, if none is specified

FROM prod_ready as default
FROM prod_demo as default_with_demo

# Define ENTRYPOINT, so that this is the default
# runs the NST Algorithm on the Demo Content and Style Images for a few iterations
106 changes: 31 additions & 75 deletions README.rst
@@ -22,8 +22,8 @@ This Python package runs a Neural Style Transfer algorithm on input `content` and
.. list-table::
:stub-columns: 1

* - tests
- | |ci_pipeline| |codecov|
* - build
- | |ci_pipeline| |docs| |codecov|

* - package
- | |pypi| |wheel| |py_versions| |commits_since|
@@ -34,7 +34,7 @@ This Python package runs a Neural Style Transfer algorithm on input `content` and
* - code quality
- |codacy| |code_climate| |maintainability| |scrutinizer|


| **Docs**: https://boromir674.github.io/neural-style-transfer/
Overview
========
@@ -50,112 +50,67 @@ Key features of the package:
* Persisting of generated images


Quick-start
-----------

Installation
------------
| The Neural Style Transfer - CLI heavily depends on Tensorflow (tf) and therefore it is crucial that tf is installed correctly in your Python environment.
Sample commands to install the NST CLI from source, using a terminal:
| Run a demo NST, on sample `Content` and `Style` Images:
::

# Get the Code
git clone https://github.com/boromir674/neural-style-transfer.git
cd neural-style-transfer

# Activate a python virtual environment
virtualenv env --python=python3
source env/bin/activate

# Install dependencies
pip install -r requirements/dex.txt
mkdir art
export NST_HOST_MOUNT="$PWD/art"

# Install NST CLI (in virtual environment)
pip install -e .
docker-compose up

# Process runs, in containerized environment, and exits.

Alternative command to install the NST CLI by downloading the `artificial_artwork` python package from pypi:

::

pip install artificial_artwork


Make the cli available for your host system:
| Check out your **Generated Image**!
| Artificial Artwork: **art/canoe_water_w300-h225.jpg+blue-red_w300-h225.jpg-100.png**
::

# Setup a symbolic link (in your host system) in a location in your PATH
# Assuming ~/.local/bin is in your PATH
ln -s $PWD/env/bin/neural-style-transfer ~/.local/bin/neural-style-transfer

# Deactivate environment since the symbolic link is available in "global scope" by now
deactivate
xdg-open art/canoe_water_w300-h225.jpg+blue-red_w300-h225.jpg-100.png


Usage
-----

Download the Vgg-Verydeep-19 pretrained `model` from https://drive.protonmail.com/urls/7RXGN23ZRR#hsw4STil0Hgc.

Extract the model (weights and layer architecture).

For example use `tar -xvf imagenet-vgg-verydeep-19.tar` to extract in the current directory.

Indicate to the program where to find the model:
Run the `nst` CLI with the `--help` option to see the available options.

::

export AA_VGG_19=$PWD/imagenet-vgg-verydeep-19.mat
docker run boromir674/neural-style-transfer:1.0.2 --help

We have included one 'content' and one 'style' image in the source repository, to facilitate testing.
You can use these images to quickly try running the program.

For example, you can get the code with `git clone git@github.com:boromir674/neural-style-transfer.git`,
then `cd neural-style-transfer`.
Development
-----------

Assuming you have installed using a symbolic link in your PATH (as shown above), or if you are still
operating within your virtual environment, then you can create artificial artwork with the following command.
Installation
""""""""""""

The algorithm will apply the style to the content iteratively.
It will iterate 100 times.
Install the `nst` CLI and the `artificial_artwork` python package from PyPI:

::

# Create a directory where to store the artificial artwork
mkdir nst_output

# Run a Neural Style Algorithm for 100 iterations and store output to nst_output directory
neural-style-transfer tests/data/canoe_water.jpg tests/data/blue-red-w400-h300.jpg --location nst_output


Note that as 'content' and 'style' images we are using jpg files included in the distribution (artificial-artwork package):
a photo of a canoe on water and an abstract painting with a prevalence of blue and red color shades.

Also note that, for a quicker demonstration, both images have already been resized to 400 pixels in width and 300 in height.

Navigating to `nst_output` you can find multiple image files generated while running the algorithm. Each file corresponds to the
image generated at a different iteration; the higher the iteration, the more "style" has been applied.
pip install artificial_artwork

Check out your artificial artwork!

Only the python3.8 wheel is included at the moment.

Docker image
------------

We have included a docker file that we use to build an image where both the `artificial_artwork` package (source code)
and the pretrained model are present. That way you can immediately start creating artwork!
Sample commands to install the NST CLI from source, using a terminal:

::

docker pull boromir674/neural-style-transfer
git clone https://github.com/boromir674/neural-style-transfer.git
pip install ./neural-style-transfer

export NST_OUTPUT=/home/$USER/nst-output

CONTENT=/path/to/content-image.jpg
STYLE=/path/to/style-image.jpg
| The Neural Style Transfer - CLI heavily depends on Tensorflow (tf) and therefore it is crucial that tf is installed correctly in your Python environment.

docker run -it --rm -v $NST_OUTPUT:/nst-output boromir674/neural-style-transfer $STYLE $CONTENT --iterations 200 --location /nst-output

.. |ci_pipeline| image:: https://img.shields.io/github/actions/workflow/status/boromir674/neural-style-transfer/test.yaml?branch=master&label=build&logo=github-actions&logoColor=233392FF
:alt: CI Pipeline Status
@@ -214,8 +169,9 @@ and the pretrained model are present. That way you can immediately start creating





.. |docs| image:: https://readthedocs.org/projects/neural-style-transfer/badge/?version=latest
:alt: Documentation Status
:target: https://neural-style-transfer.readthedocs.io/en/latest/?badge=latest


.. |docker| image:: https://img.shields.io/docker/v/boromir674/neural-style-transfer/latest?logo=docker&logoColor=%23849ED9
22 changes: 16 additions & 6 deletions docker-compose.yml
@@ -5,15 +5,13 @@ version: '3'

services:

# mkdir art
# NST_HOST_MOUNT="$PWD/art" docker-compose up
nst_demo:
build:
context: ./
target: prod_demo
dockerfile: Dockerfile

# NST_HOST_MOUNT=/custom/nst-output docker-compose up
image: boromir674/neural-style-transfer:1.0.2
volumes:
- ${NST_HOST_MOUNT:-./nst-algo-output}:/app/demo-output
command: [ "demo" ]


## NST Service, where the Pretrained Model Weights are made available to it (so that
@@ -61,3 +59,15 @@ services:
# AA_VGG_19: pretrained_model_bundle/imagenet-vgg-verydeep-19.mat
# volumes:
# - ./pretrained_model_bundle:/app/pretrained_model_bundle


# mkdir art && NST_HOST_MOUNT="$PWD/art" docker-compose up
# nst_demo_dev:
# build:
# context: ./
# # target: default_with_demo
# dockerfile: Dockerfile
# # OR just build: .

# volumes:
# - ${NST_HOST_MOUNT:-./nst-algo-output}:/app/demo-output
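The `${NST_HOST_MOUNT:-./nst-algo-output}` volume source above relies on shell-style default substitution, which Compose supports for its files; a quick sketch of the behaviour in a plain shell:

```shell
# Sketch of how `${NST_HOST_MOUNT:-./nst-algo-output}` resolves.
# Standard parameter expansion: use the variable if set and non-empty,
# otherwise fall back to the default path.
unset NST_HOST_MOUNT
echo "${NST_HOST_MOUNT:-./nst-algo-output}"   # → ./nst-algo-output

NST_HOST_MOUNT="$PWD/art"
echo "${NST_HOST_MOUNT:-./nst-algo-output}"   # prints the custom $PWD/art path
```

This is why `mkdir art && NST_HOST_MOUNT="$PWD/art" docker-compose up` redirects the demo output to `./art`, while plain `docker-compose up` falls back to `./nst-algo-output`.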
16 changes: 2 additions & 14 deletions docs/build-process_DAG.md
@@ -4,20 +4,8 @@

Flow chart of how execution navigates the docker stages (see --target of docker build).

If you run `docker build .` the `target` used by default is the `default` Stage in the Graph.
If you run `docker build .` the `target` used by default is the `default_with_demo` Stage in the Graph.

**Dockerfile: ./Dockerfile**

Note: the graph below represents only `FROM first_stage AS second_stage`.
We will add representation for `COPY --from=other_stage .` later.

```mermaid
graph TB;
python:3.8.12-slim-bullseye --> base
base --> source
source --> prod
base --> prod_install
prod_install --> prod_ready
prod_ready --> prod_demo
prod_ready --> default
```
{% include 'dockerfile_mermaid.md' %}
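The included `dockerfile_mermaid.md` is generated by `scripts/visualize-dockerfile.py`. As a rough shell sketch (not the actual script) of the idea, each `FROM <parent> AS <child>` line of a Dockerfile can be turned into one mermaid edge; the sample stage names below are taken from the graph this page used to inline:

```shell
# Hypothetical sketch; the real generator is scripts/visualize-dockerfile.py.
# Emit one mermaid edge per `FROM <parent> AS <child>` line of a Dockerfile.
cat > /tmp/Dockerfile.sample <<'EOF'
FROM python:3.8.12-slim-bullseye AS base
FROM base AS source
FROM base AS prod_install
FROM prod_install AS prod_ready
FROM prod_ready AS prod_demo
FROM prod_ready AS default
EOF

echo 'graph TB;'
awk 'toupper($1) == "FROM" && toupper($3) == "AS" { print "    " $2 " --> " $4 }' \
    /tmp/Dockerfile.sample
```

The first emitted edge is `python:3.8.12-slim-bullseye --> base`; note this only covers `FROM ... AS ...` lineage, not `COPY --from=` dependencies.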
12 changes: 12 additions & 0 deletions docs/cicd.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
---
tags:
- CICD
---

## CICD Pipeline, as single Github Action Workflow

Flow chart of the Job dependencies in the Pipeline.

**config: ./.github/workflows/test.yaml**

{% include 'cicd_mermaid.md' %}
7 changes: 7 additions & 0 deletions docs/cli.md
@@ -0,0 +1,7 @@
# CLI Reference

This page provides documentation for our command line tools.

::: mkdocs-click
:module: artificial_artwork.cli
:command: entry_point
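For the `::: mkdocs-click` block above to render, the mkdocs-click package must be installed and enabled as a Markdown extension; a minimal `mkdocs.yml` fragment (assumed setup per the mkdocs-click documentation, not part of this diff):

```yaml
# mkdocs.yml (fragment) - assumed configuration, not shown in this commit
markdown_extensions:
  - attr_list      # recommended alongside mkdocs-click for styling
  - mkdocs-click
```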