
# LLM Inference Speeds

This repository contains benchmark data for various large language models (LLMs), measured as inference speed in tokens per second. The benchmarks are run across different hardware configurations using the prompt "Give me 1 line phrase".

## About the Data

The data records the performance of several LLMs, detailing the tokens processed per second on specific hardware setups. Each entry includes the model name, the hardware used, and the measured speed.
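For illustration, a single entry can be thought of as holding these three fields. The sketch below is hypothetical; the field names, model name, hardware, and speed value are placeholders, and the actual data files in this repository define the real schema and values.

```python
# Hypothetical shape of one benchmark entry (illustrative only; see the
# data files in this repository for the actual schema and measured values).
entry = {
    "model": "example-model-7b",      # model name (placeholder)
    "hardware": "example-gpu",        # hardware configuration (placeholder)
    "tokens_per_second": 0.0,         # measured inference speed (placeholder)
}
```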

## Explore the Benchmarks

You can view and interact with the benchmark data through a searchable table on our GitHub Pages site. Use the search field to filter by model name and compare performance across hardware configurations.

View the Inference Speeds Table

## Contributing

Contributions to the benchmark data are welcome! Please refer to the contributing guidelines for information on how to contribute.

## License

This project is licensed under the MIT License - see the LICENSE file for details.