Description
The main project page lists a set of key features; I am specifically interested in the performance claim:
Performance - DJL serving running multithreading inference in a single JVM. Our benchmark shows DJL serving has higher throughput than most C++ model servers on the market.
I could not find a link to the benchmark results in the project repository or on the website https://docs.djl.ai/. Would it be possible to add more information about the benchmark?
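In the meantime, for anyone who wants a rough, reproducible number of their own, the following is a minimal client-side throughput probe, not the official benchmark referenced above. It assumes DJL Serving is running locally on the default port 8080 with a model registered under the hypothetical name "resnet", and it drives the /predictions/{model_name} inference endpoint from several client threads:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThroughputProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical setup: DJL Serving listening on localhost:8080
        // with a model registered as "resnet"; kitten.jpg is a sample input.
        URI endpoint = URI.create("http://localhost:8080/predictions/resnet");
        Path payload = Path.of("kitten.jpg");

        int threads = 8;             // concurrent client threads
        int requestsPerThread = 50;  // requests issued by each thread

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(payload))
                .build();

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        CompletableFuture<?>[] futures = new CompletableFuture<?>[threads];
        for (int t = 0; t < threads; t++) {
            futures[t] = CompletableFuture.runAsync(() -> {
                for (int i = 0; i < requestsPerThread; i++) {
                    try {
                        // Fire a synchronous prediction request; the response
                        // body is discarded since only timing matters here.
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            }, pool);
        }
        CompletableFuture.allOf(futures).join();
        long elapsedNanos = System.nanoTime() - start;

        double seconds = elapsedNanos / 1e9;
        int total = threads * requestsPerThread;
        System.out.printf("%d requests in %.2f s -> %.1f req/s%n",
                total, seconds, total / seconds);
        pool.shutdown();
    }
}

This only measures end-to-end client throughput on one machine and is not comparable to the published claim, which is exactly why a documented benchmark setup would be valuable.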
Will this change the current api? How?
No, this is mainly about documentation.
Who will benefit from this enhancement?
Users and developers who are evaluating DJL Serving against other model-serving options.
References
N/A