## New features
- Add `ir_optim` and `use_mkl` arguments (`use_mkl` applies to the CPU version only)
- Support custom DAGs for the prediction service
- HTTP service supports batch prediction
- HTTP service supports startup via uwsgi
- Support model file monitoring, remote pulling, and hot loading
- Support ABTest (A/B testing)
- Add image preprocessing, Chinese word segmentation preprocessing, and Chinese sentiment analysis preprocessing modules, as well as image segmentation and image detection postprocessing modules, to paddle-serving-app
- Add pre-trained model and sample code download to paddle-serving-app, with an integrated profiling function
- Release CentOS 6 Docker images for compiling Paddle Serving
- Optimize the time spent on input/output memory copies in numpy.array format: with a single concurrent client and batch size 1 on the ResNet-50 ImageNet classification task, QPS is 100.38% higher than in version 0.2.0
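As a sketch of the batch HTTP prediction mentioned above, a client can pack several feed instances into a single request body. The payload shape (`{"feed": [...], "fetch": [...]}`), the feed key `x`, and the endpoint path in the comment are assumptions modeled on Paddle Serving's typical HTTP interface, not details taken from these notes.

```python
import json


def build_batch_payload(instances, fetch):
    """Pack several feed dicts into one batch prediction request body.

    The {"feed": [...], "fetch": [...]} shape is an assumption mirroring
    Paddle Serving's usual HTTP request format.
    """
    return json.dumps({"feed": instances, "fetch": fetch})


# Two instances predicted in a single HTTP call (batch prediction).
body = build_batch_payload(
    [{"x": [0.1] * 13}, {"x": [0.2] * 13}],
    ["price"],
)

# To actually send it, a running prediction service is required, e.g.:
# import requests
# r = requests.post("http://127.0.0.1:9292/uci_housing/prediction",
#                   data=body, headers={"Content-Type": "application/json"})
```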
## Compatibility optimization
- The client side no longer depends on patchelf
- Release paddle-serving-client wheels for Python 2.7, 3.6, and 3.7
- Server and client can be deployed on CentOS 6/7 and Ubuntu 16/18
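For the uwsgi startup mentioned under new features, one way to host a WSGI-style web service is a uwsgi config file. This is a hypothetical sketch: the module name `web_service` and callable `app_instance` are assumptions about how the service exposes its WSGI application, not names taken from these notes.

```
# uwsgi.ini -- hypothetical config; module and callable names are assumptions
[uwsgi]
http = :9292
module = web_service
callable = app_instance
processes = 4
```

Started with `uwsgi uwsgi.ini`, this would serve the HTTP prediction service through uwsgi's worker processes instead of the built-in development server.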