forked from arthurhenrique/cookiecutter-fastapi
# Quickstart
Vinícius Nunes da Costa edited this page Oct 22, 2024
- Install Cookiecutter:

  ```shell
  pip install cookiecutter
  ```
- Generate the project:

  ```shell
  cookiecutter gh:viniciusnvcosta/ai-dat
  ```
- Choose the template branch: either `main` (default), `text`, or `vision`.

  ```shell
  cookiecutter gh:viniciusnvcosta/ai-dat --checkout main
  ```
- Navigate to the project directory:

  ```shell
  cd your_project_name
  ```
- Install Poetry:

  ```shell
  pip install poetry
  ```
- Build the virtual environment and install the dependencies using the Makefile:

  ```shell
  make install
  ```
- Edit `prediction.py` to include your variables:

  ```python
  from typing import Dict

  import numpy as np
  from pydantic import BaseModel


  class MachineLearningDataInput(BaseModel):
      URL: str
      body: Dict[str, float]

      def get_np_array(self):
          # Shape (1, n_features): a single sample ready for the model
          return np.array([
              [
                  self.body.get("feature1"),
                  self.body.get("feature2"),
              ]
          ])
  ```
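To sanity-check the input model, you can instantiate it with a sample payload. The `feature1`/`feature2` keys and the URL value here are the placeholders from the snippet above, not names fixed by the template:

```python
from typing import Dict

import numpy as np
from pydantic import BaseModel


class MachineLearningDataInput(BaseModel):
    URL: str
    body: Dict[str, float]

    def get_np_array(self):
        return np.array([
            [
                self.body.get("feature1"),
                self.body.get("feature2"),
            ]
        ])


sample = MachineLearningDataInput(
    URL="http://example.com",
    body={"feature1": 0.5, "feature2": 1.5},
)
arr = sample.get_np_array()
print(arr.shape)  # one row with two features: (1, 2)
```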
- Edit `predictor.py` to import and use your prediction services:

  ```python
  import joblib

  from services.predict import MyCustomModelHandler as model


  def get_prediction(data_point):
      return model.predict(data_point, load_wrapper=joblib.load, method="predict")
  ```
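The `load_wrapper`/`method` signature suggests a handler that loads a serialized model lazily and dispatches by method name. A minimal sketch of such a handler follows; the class body, attribute names, and model path are assumptions for illustration, not the template's actual code:

```python
import os


class MyCustomModelHandler:
    """Hypothetical handler: loads the model once, then delegates to it."""

    model = None
    # Assumed default location of the serialized model
    model_path = os.getenv("MODEL_PATH", "ml/model/model.joblib")

    @classmethod
    def predict(cls, input_data, load_wrapper, method="predict"):
        # Lazily load the model with the caller-supplied loader (e.g. joblib.load)
        if cls.model is None:
            cls.model = load_wrapper(cls.model_path)
        # Dispatch to the requested model method, e.g. "predict" or "predict_proba"
        return getattr(cls.model, method)(input_data)
```

With this shape, `get_prediction` works for any scikit-learn-style model object that exposes a `predict` method.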
- Run the server:

  ```shell
  make run
  ```
- Send the `example.json` file to the `/predict` route and check the results:

  ```shell
  curl -X POST "http://127.0.0.1:8080/api/v1/predict" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d @example.json
  ```

  You can also open http://127.0.0.1:8080/docs and use the Swagger UI provided by FastAPI to submit `example.json`.
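An `example.json` payload matching the input model above might look like this (the `feature1`/`feature2` keys are the placeholders from the earlier snippet, so adjust them to your own variables):

```json
{
  "URL": "http://example.com",
  "body": {
    "feature1": 0.5,
    "feature2": 1.5
  }
}
```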
And that's it! You've successfully set up your new API.
Explore the repo and repurpose any preprocessing, postprocessing, and "runner" modules to enhance your inference task.