The purpose of the service is to find questions that are as similar as possible to the user's request.
Some requirements and design notes are collected in `approaches.md`.
A subproject for data downloading, EDA, embedding clustering, and computing cluster centers.

To download the data, run:

```shell
cd clustering
python src/data/get_data.py --config_path params.yaml
```
To compute the embeddings, run:

```shell
python src/data/create_embeddings.py --config_path params.yaml
```
To create an Elasticsearch index from the precomputed embeddings (a running Elasticsearch instance is required for this step), run:

```shell
python src/index/indexer_elastic.py --config_path params.yaml
```
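The actual index schema is defined by `indexer_elastic.py`. For context, an Elasticsearch index for dense embeddings typically uses a mapping along these lines (a sketch only — the field names are assumptions, and `dims: 512` reflects the standard Universal Sentence Encoder output size, not necessarily the project's exact mapping):

```json
{
  "mappings": {
    "properties": {
      "title":     { "type": "text" },
      "embedding": { "type": "dense_vector", "dims": 512 }
    }
  }
}
```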
If needed, you can adjust the configuration in `params.yaml`.
A service that creates text embeddings with the TensorFlow Universal Sentence Encoder.

The Swagger docs are available at http://localhost:5000/apidocs.
A Streamlit application for finding the nearest StackOverflow question.

The application needs an Elasticsearch index to search against. If needed, you can modify the application parameters in `search_app/params.yaml`.
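Under the hood, "nearest question" means ranking indexed questions by the similarity of their embeddings to the query embedding. A minimal sketch of that ranking step in plain Python (with made-up 3-dimensional vectors; in the real service the embeddings are 512-dimensional USE vectors and the similarity search runs inside Elasticsearch):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_question(query_vec, indexed):
    """Return the indexed question whose embedding is most similar to the query."""
    return max(indexed, key=lambda item: cosine_similarity(query_vec, item["embedding"]))

# Toy index; in the real pipeline the embeddings come from create_embeddings.py.
index = [
    {"title": "How to sort a list in Python?", "embedding": [0.9, 0.1, 0.0]},
    {"title": "How to join strings?",          "embedding": [0.0, 0.8, 0.6]},
]

best = nearest_question([0.8, 0.2, 0.1], index)
print(best["title"])  # → How to sort a list in Python?
```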
You can also run each service on its own; see the reference above.
To run everything with docker-compose:

- Specify the Elasticsearch user credentials in the `clustering/.env` and `es.env` files.
- Start all the containers:

  ```shell
  docker-compose up
  ```

- Create the Elasticsearch index:

  ```shell
  python src/index/indexer_elastic.py --config_path params.yaml
  ```

Finally, the search_app service will be available at localhost:8501.
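For reference, the two env files mentioned above hold the Elasticsearch credentials. Their contents might look like this (a sketch — the variable names are assumptions and must match what `docker-compose.yml` and the clustering scripts actually read):

```
ELASTIC_USERNAME=elastic
ELASTIC_PASSWORD=changeme
```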
There is a load test written with Locust in `tests/locustfile.py`.

To run the test, follow these steps:

- Install Locust:

  ```shell
  pip install locust
  ```

- Go to the `tests` folder.
- Start the Locust web UI:

  ```shell
  locust
  ```

- Open http://localhost:8089/ and specify the test parameters (number of users, spawn rate, and the host of the running search server).
- Start swarming.

You can also run load tests without the web UI; see the Locust docs.
The current load testing results were obtained with the following hardware configuration:
- CPU: Intel(R) Core(TM) i7-9750H @ 2.60GHz
- RAM: 16 GB
- System disk space: 20 GB