
Add benchmark results or link to key features section #1234

Open
segundovolante opened this issue Oct 28, 2023 · 0 comments
Labels
enhancement New feature or request


@segundovolante

Description

The main project page has a list of key features; I am specifically interested in the performance claim:

  • Performance - DJL Serving runs multithreaded inference in a single JVM. Our benchmark shows DJL Serving has higher throughput than most C++ model servers on the market.

I could not find any link to the benchmark results in the project repository or on the website https://docs.djl.ai/. Would it be possible to add more information about the benchmark?
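For context, the "multithreaded inference in a single JVM" claim refers to sharing one loaded model across worker threads, each with its own lightweight predictor. The sketch below illustrates that pattern with hypothetical `SharedModel`/`Predictor` stand-ins (the real classes live in DJL's `ai.djl.inference` package and require the DJL dependencies); it is a minimal illustration, not DJL's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical stand-in for a model loaded once per JVM. In DJL the
// analogous object is a ZooModel, whose weights are shared by all threads.
class SharedModel {
    Predictor newPredictor() {
        return new Predictor();
    }
}

// Hypothetical stand-in for a per-thread predictor. Predictors are cheap
// to create but not thread-safe, so each worker thread gets its own.
class Predictor {
    int predict(int input) {
        return input * 2; // placeholder "inference"
    }
}

public class MultiThreadedServing {
    public static List<Integer> serve(List<Integer> requests, int workers) throws Exception {
        SharedModel model = new SharedModel();   // loaded once per JVM
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        // One predictor per worker thread, all backed by the same model.
        ThreadLocal<Predictor> predictors = ThreadLocal.withInitial(model::newPredictor);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (Integer r : requests) {
                futures.add(pool.submit(() -> predictors.get().predict(r)));
            }
            List<Integer> out = new ArrayList<>();
            for (Future<Integer> f : futures) {
                out.add(f.get()); // collect results in submission order
            }
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serve(List.of(1, 2, 3, 4), 2));
    }
}
```

A benchmark comparing this thread-per-request design against process-per-worker C++ servers is exactly what the feature bullet alludes to, which is why published numbers would be useful.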

Will this change the current API? How?
No, this is mainly a documentation change.

Who will benefit from this enhancement?
Users and developers who are evaluating DJL Serving against other model serving options.

References

N/A

@segundovolante segundovolante added the enhancement New feature or request label Oct 28, 2023