
# Quickstart


## Step 1: Download and Build the Repository using Cookiecutter

1. Install Cookiecutter:

   ```bash
   pip install cookiecutter
   ```

2. Generate the project:

   ```bash
   cookiecutter gh:viniciusnvcosta/ai-dat
   ```

3. Choose the template branch: either `main` (default), `text`, or `vision`.

   ```bash
   cookiecutter gh:viniciusnvcosta/ai-dat --checkout main
   ```
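If you prefer to script project generation rather than use the CLI, Cookiecutter also exposes a Python API. A minimal sketch (the branch name is the same `main`/`text`/`vision` choice described above):

```python
from cookiecutter.main import cookiecutter

# Generate the project from the template, checking out the "main" branch;
# swap in "text" or "vision" for the other template variants.
cookiecutter("gh:viniciusnvcosta/ai-dat", checkout="main")
```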

## Step 2: Build a Virtual Environment and Install Packages

1. Navigate to the project directory:

   ```bash
   cd your_project_name
   ```

2. Install Poetry:

   ```bash
   pip install poetry
   ```

3. Build the virtual environment and install the dependencies using the Makefile command:

   ```bash
   make install
   ```

## Step 3: Modify Prediction Models

1. Edit `prediction.py` to include your variables (an illustrative usage sketch follows this list):

   ```python
   from typing import Dict

   import numpy as np
   from pydantic import BaseModel

   class MachineLearningDataInput(BaseModel):
       URL: str
       body: Dict[str, float]

       def get_np_array(self):
           return np.array([
               [
                   self.body.get("feature1"),
                   self.body.get("feature2"),
               ]
           ])
   ```

2. Edit `predictor.py` to import and use your prediction services (a rough sketch of a compatible handler also follows the list):

   ```python
   import joblib
   from services.predict import MyCustomModelHandler as model

   def get_prediction(data_point):
       return model.predict(data_point, load_wrapper=joblib.load, method="predict")
   ```
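For reference, `MachineLearningDataInput` expects a JSON body with a `URL` string and a `body` mapping of feature names to floats. The snippet below is a minimal usage sketch; the URL and feature values are made up for illustration, and it assumes the class shown in the first item above is in scope:

```python
# Illustrative values only; the feature names must match those used in
# MachineLearningDataInput.get_np_array().
data = MachineLearningDataInput(
    URL="https://example.com/resource",
    body={"feature1": 0.5, "feature2": 1.2},
)

print(data.get_np_array())  # array([[0.5, 1.2]])
```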

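The `MyCustomModelHandler` imported in `predictor.py` comes from the generated project's `services/predict` module, and its real implementation is provided by the template. Purely to illustrate the interface used by `get_prediction`, a handler compatible with that call might look roughly like this (the `MODEL_PATH` constant and caching logic are assumptions, not the template's code):

```python
import joblib

# Hypothetical location of a serialized model; the generated project defines
# its own path/configuration for this.
MODEL_PATH = "ml_model/model.joblib"

class MyCustomModelHandler:
    """Rough sketch of a handler matching predict(data, load_wrapper, method)."""

    _model = None

    @classmethod
    def predict(cls, data_point, load_wrapper=joblib.load, method="predict"):
        if cls._model is None:
            cls._model = load_wrapper(MODEL_PATH)  # e.g. joblib.load
        return getattr(cls._model, method)(data_point)
```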
## Step 4: Run the Uvicorn Server

1. Run the server:

   ```bash
   make run
   ```

2. Send the `example.json` file to the `/predict` route and check the results (a Python alternative to curl is sketched below):

   ```bash
   curl -X POST "http://127.0.0.1:8080/api/v1/predict" -H "accept: application/json" -H "Content-Type: application/json" -d @example.json
   ```
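If you would rather call the endpoint from Python than curl, here is a minimal sketch using `requests` (the payload shape is assumed from the `MachineLearningDataInput` model in Step 3; in practice, load your generated `example.json` instead):

```python
import requests

# Assumed payload shape; replace with the contents of example.json from the
# generated project.
payload = {
    "URL": "https://example.com/resource",
    "body": {"feature1": 0.5, "feature2": 1.2},
}

response = requests.post(
    "http://127.0.0.1:8080/api/v1/predict",
    json=payload,
    timeout=10,
)
print(response.status_code, response.json())
```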

You may also open http://127.0.0.1:8080/docs and use the Swagger UI provided by FastAPI to submit the `example.json` payload interactively.

And that's it! You've successfully set up your new API.

Next steps:

Explore the repo and repurpose the preprocessing, postprocessing, and "runner" modules to suit your inference task.