SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
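To make the quantization formats listed above concrete, here is a minimal, generic sketch of symmetric per-tensor INT8 quantization in NumPy. It is an illustration of the idea only, not this library's API; the function names and the single per-tensor scale are assumptions for the example.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map float weights into [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0                      # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print("max abs quantization error:", np.max(np.abs(w - w_hat)))
```

Lower-bit formats (INT4/FP4/NF4) follow the same quantize/dequantize pattern with fewer representable levels, usually applied per group or per channel to limit the error.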
Must-read research papers and links to tools and datasets related to using machine learning for compilers and systems optimisation
Kernel Tuner
Machine Learning Framework for Operating Systems - brings ML to the Linux kernel
Stretching GPU performance for GEMMs and tensor contractions.
CLTune: An automatic OpenCL & CUDA kernel tuner
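Several entries here (Kernel Tuner, CLTune) are kernel auto-tuners. As a generic illustration of what such a tuner does, the sketch below exhaustively searches a small tuning-parameter space and keeps the fastest configuration; the `benchmark` placeholder and parameter names are assumptions for the example, not CLTune's or Kernel Tuner's actual API, and a real tuner would compile and time an OpenCL/CUDA kernel for each configuration.

```python
import itertools
import time

def benchmark(block_size: int, tile_size: int) -> float:
    """Placeholder: pretend to compile and run a kernel, return its runtime in seconds."""
    start = time.perf_counter()
    time.sleep(0.0001 * ((block_size % 7) + (tile_size % 5)))  # stand-in workload
    return time.perf_counter() - start

# Tunable parameters and the values to try for each.
search_space = {
    "block_size": [32, 64, 128, 256],
    "tile_size": [1, 2, 4, 8],
}

best = None
for combo in itertools.product(*search_space.values()):
    params = dict(zip(search_space.keys(), combo))
    runtime = benchmark(**params)
    if best is None or runtime < best[1]:
        best = (params, runtime)

print("best configuration:", best[0], "runtime (s):", best[1])
```

Real auto-tuners replace this brute-force loop with smarter search strategies (random search, model-based optimization, evolutionary algorithms) when the parameter space is too large to enumerate.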
Alchemy Cat: 🔥 Config System for SOTA
Phoebe
Benchmark scripts for TVM
eBPF profiler for the JVM
Collective Knowledge crowd-tuning extension that lets users crowdsource experiments (using portable Collective Knowledge workflows) such as performance benchmarking, auto-tuning, and machine learning across diverse Linux, Windows, macOS, and Android platforms provided by volunteers. Demo of DNN crowd-benchmarking and crowd-tuning:
K2vTune: workload-aware configuration tuning for RocksDB
A Generic Distributed Auto-Tuning Infrastructure
A GPU benchmark suite for autotuners
Backoff uses an exponential backoff algorithm to back off between retries, with optional auto-tuning functionality.
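As a generic illustration of the exponential-backoff retry pattern this entry describes (not that package's actual API), a minimal sketch; the function name, parameters, and jitter scheme are assumptions for the example.

```python
import random
import time

def retry_with_backoff(op, max_retries=5, base_delay=0.5, factor=2.0, max_delay=30.0):
    """Retry `op` until it succeeds, waiting longer after each failure.

    The delay grows as base_delay * factor**attempt, capped at max_delay,
    with a small random jitter to avoid synchronized retries.
    """
    for attempt in range(max_retries):
        try:
            return op()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, propagate the last error
            delay = min(base_delay * (factor ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay * 0.1))

# Usage: wrap a flaky operation, e.g.
# result = retry_with_backoff(lambda: some_flaky_call())
```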
Autotuner for Spark applications
This software package accompanies the paper "A Methodology for Comparing Auto-Tuning Optimization Algorithms" (https://doi.org/10.1016/j.future.2024.05.021), making the guidelines in the methodology easy to apply.