Project started from this template
```
├── app
│   ├── database.py
│   ├── devtools.py
│   ├── html_text_variables.py
│   ├── __init__.py
│   ├── main.py
│   ├── ml_models
│   │   ├── bert_sentiment.py
│   │   ├── __init__.py
│   │   ├── nlp_utils.py
│   │   └── roberta_emotion.py
│   ├── models.py
│   ├── routers
│   │   ├── admin.py
│   │   ├── auth.py
│   │   ├── bert_sentiment.py
│   │   ├── roberta_emotion.py
│   │   └── users.py
│   ├── schemas.py
│   └── templates
│       └── home.html
├── __init__.py
├── LICENSE
├── readme
│   └── image_xx.png
├── README.md
├── requirements.txt
├── run.sh
├── run_tests.sh
├── sqlite3
│   ├── dev_db.db
│   └── test_db.db
└── test
    ├── __init__.py
    ├── test_admin.py
    ├── test_auth.py
    ├── test_main.py
    ├── test_ml_service_v1.py
    ├── test_ml_service_v2.py
    ├── test_users.py
    └── utils.py
```
```bash
git clone https://github.com/MatthieuLeNozach/api_basemodel_for_machine_learning_with_fastapi
```
**[CAUTION]**: This project uses PyTorch-based models, which rely on GPU computing with the CUDA toolkit and Nvidia drivers. Since some package installations are CUDA-specific, incompatible hardware or drivers may raise issues at install or import time. You may have to build your own Python / torch environment.

CUDA drivers are also referenced in the Docker image; you may want to remove this line from the Dockerfile:

```dockerfile
RUN conda install -c conda-forge cudatoolkit=11.7
```
Python version: 3.10.14
```bash
cd /path/to/repository/root
python3 -m venv venv

# Virtual environment activation
source venv/bin/activate

# Install required packages from the requirements file
pip install -r requirements.txt
```
```bash
conda create --name fastapi_nlp python=3.10.14

# Virtual environment activation
conda activate fastapi_nlp

# Install required packages from the requirements file
conda install --file requirements.txt
```
```bash
docker build -t nlp_api_image:latest .
```
#TODO
```bash
chmod +x run.sh
```
- Add the `.environment` folder to the `.gitignore` file to stop exposing sensitive information in git commits.
- Access the `.environment` folder from the repository root (the folder may be hidden):
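The `.gitignore` addition can be scripted. A minimal sketch, run from the repository root; the `grep` guard keeps the command idempotent so repeated runs don't duplicate the entry:

```bash
# Append the .environment folder to .gitignore only if it is not already listed
grep -qxF '.environment/' .gitignore 2>/dev/null || echo '.environment/' >> .gitignore
```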
```bash
nano .environment/env.dev
nano .environment/env.test
```
- Replace the dummy secret keys with your own. You can generate base64 secrets with this command:

```bash
openssl rand -base64 32
```
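For example, the secret can be captured in a variable before pasting it into the env file (the variable name `SECRET` is illustrative; the key names expected inside `env.dev` are defined by the project, not shown here):

```bash
# 32 random bytes, base64-encoded, always come out as 44 characters
SECRET=$(openssl rand -base64 32)
echo "${SECRET}"
```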
- Change the `superuser` password. Modify the file `app/devtools.py` with the credentials of your choice:
```python
# from app/devtools.py
def create_superuser(db: Session):
    superuser = User(
        username='superuser@example.com',
        first_name='Super',
        last_name='User',
        hashed_password=pwd_context.hash('8888'),
        ...
```
Only an admin can register another admin. A permanent admin can be created once logged in as the superuser, by sending a POST request to this address: `http://localhost:8080/admin/create`
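A sketch of what the request body might look like. The exact field names are defined in `app/schemas.py`; the ones below are guessed from the `User` model shown elsewhere in this README and may differ:

```python
import json

# Hypothetical payload for POST http://localhost:8080/admin/create.
# Field names are an assumption; check app/schemas.py for the real schema.
payload = {
    "username": "admin2@example.com",
    "first_name": "Second",
    "last_name": "Admin",
    "password": "change-me",
    "role": "admin",
}

# Serialized body to send with Content-Type: application/json
body = json.dumps(payload)
print(body)
```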
The `superuser` argument initializes an admin at startup and deletes it at shutdown:
```bash
./run.sh dev superuser
```
Superuser credentials (from `app/devtools.py`):
```python
def create_superuser(db: Session):
    superuser = User(
        username='superuser@example.com',
        first_name='Super',
        last_name='User',
        hashed_password=pwd_context.hash('8888'),
        is_active=True,
        role='admin',
        has_access_sentiment=True,
        has_access_emotion=True
    )
    db.add(superuser)
    db.commit()
```
```bash
curl -X 'POST' \
  'http://localhost:8080/auth/token' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=&username=superuser%40example.com&password=8888&scope=&client_id=&client_secret='
```
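The same token request can be built in Python with only the standard library. A minimal sketch; the `access_token` response key follows the usual FastAPI OAuth2 convention and is an assumption here:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Same OAuth2 password-grant form as the curl command above
form = {
    "grant_type": "",
    "username": "superuser@example.com",
    "password": "8888",
    "scope": "",
    "client_id": "",
    "client_secret": "",
}
data = urlencode(form).encode()

request = Request(
    "http://localhost:8080/auth/token",
    data=data,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# With the server running:
# import json
# from urllib.request import urlopen
# with urlopen(request) as response:
#     token = json.loads(response.read())["access_token"]  # key name assumed
```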
From `http://localhost:8080/docs`:

1. **Authenticate**:
   - Click on the lock
   - Fill in the username (e.g. `superuser@example.com`)
   - Fill in the password (e.g. `8888`)
2. **Send input text**:
   - Sentiment route: `localhost:8080/mlservice/sentiment/predict/interpreted`
   - Emotion route: `localhost:8080/mlservice/emotion/predict`
3. **Get output**
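Outside the Swagger UI, the same calls can be made programmatically by attaching the token as a Bearer header. A sketch that only builds the request; the `text` field of the body is an assumption, as the real schema lives in `app/schemas.py`:

```python
import json
from urllib.request import Request

# Token obtained from the /auth/token endpoint
token = "<access token obtained from /auth/token>"

# Hypothetical request to the sentiment service; body schema is assumed
request = Request(
    "http://localhost:8080/mlservice/sentiment/predict/interpreted",
    data=json.dumps({"text": "I love this API"}).encode(),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
)
print(request.get_header("Authorization"))
```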
To achieve separation of concerns and improve code organization, endpoints and helper functions are grouped into logical modules with their own namespaces (e.g. `/admin/...`):

- `auth`: router file for registration / security-related helpers and endpoint functions (see below)
- `admin`: router file for admin-specific actions, like granting/revoking access rights or deleting a user
- `users`: router file for user-specific actions (change password, #TODO get service call history)
- `bert_sentiment`
- `roberta_emotion`
API endpoint documentation (see below) and HTTP request templates are available at this address: `http://localhost:8080/docs`
This multilanguage pretrained model returns sentiment probabilities for 5 target labels (0 = very negative to 4 = very positive).
More information on the model here
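The "interpreted" route presumably maps the most probable of those 5 classes to a readable label. A sketch under that assumption; the label wording below is illustrative, not the service's actual output:

```python
# Illustrative label names for classes 0..4; the service's wording may differ
LABELS = ["very negative", "negative", "neutral", "positive", "very positive"]

def interpret(probabilities: list[float]) -> str:
    """Map the 5 class probabilities (index 0 to 4) to a readable label."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return LABELS[best]

print(interpret([0.02, 0.03, 0.10, 0.25, 0.60]))  # → very positive
```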
This pretrained model returns emotion probabilities for a total of 28 different emotions; the service selects the 5 most significant labels and returns their names and probabilities.
More information on the model here
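The top-5 selection might look like the sketch below; this is an illustration of the idea, not the service's actual implementation:

```python
def top_emotions(probabilities: dict[str, float], k: int = 5) -> list[tuple[str, float]]:
    """Return the k most probable (label, probability) pairs, highest first."""
    return sorted(probabilities.items(), key=lambda item: item[1], reverse=True)[:k]

# Abridged example scores (the real model emits 28 labels)
scores = {"joy": 0.41, "love": 0.22, "optimism": 0.12, "neutral": 0.09,
          "surprise": 0.08, "fear": 0.04, "sadness": 0.03, "anger": 0.01}
print(top_emotions(scores))
```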