Welcome to the vs-mlrt wiki!
The goal of the project is to provide highly optimized AI inference runtimes for VapourSynth. The following runtimes are available (see the sketch after the list for how to select one):
- vs-ov: OpenVINO based pure CPU AI inference runtime
- vs-ort: ONNX Runtime based CPU/CUDA AI inference runtime
- vs-trt: TensorRT based CUDA AI inference runtime
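All runtimes are exposed through the `vsmlrt.py` Python wrapper as backends. A minimal sketch of selecting a backend, assuming the wrapper's `Backend` helper; the parameters shown (such as `fp16`) are illustrative, so check the Runtimes page for the authoritative options:

```python
# Minimal sketch: selecting an inference backend via the vsmlrt.py wrapper.
# The Backend constructors and parameters below are assumptions; consult the
# Runtimes page for the full, authoritative list.
from vsmlrt import Backend

cpu_backend  = Backend.OV_CPU()        # vs-ov: OpenVINO, CPU only
cuda_backend = Backend.ORT_CUDA()      # vs-ort: ONNX Runtime, CUDA
trt_backend  = Backend.TRT(fp16=True)  # vs-trt: TensorRT, CUDA (fp16 assumed optional)
```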
The following models are available (a minimal invocation sketch follows the list):
- waifu2x: anime super-resolution / upscaling / denoising
- DPIR: denoising / deblocking
- RealESRGANv2: anime super-resolution / upscaling
- Real-CUGAN: anime super-resolution / upscaling / denoising
- RIFE: video frame interpolation
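Below is a minimal sketch of applying one of these models from a VapourSynth script through the `vsmlrt.py` wrapper; the source clip and the `noise`/`scale` values are placeholders, not a recommended configuration:

```python
# Minimal sketch: running waifu2x through vsmlrt.py inside a VapourSynth script.
# The clip and the noise/scale values are placeholders; the models generally
# expect 32-bit float RGB (RGBS) input.
import vapoursynth as vs
from vsmlrt import Waifu2x, Backend

core = vs.core

src = core.std.BlankClip(format=vs.RGBS, width=640, height=360)  # stand-in for a real source
up = Waifu2x(src, noise=1, scale=2, backend=Backend.OV_CPU())    # 2x upscale with light denoising
up.set_output()
```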
Wiki pages:
- Runtimes
- Models
- Device-specific benchmarks
  - NVIDIA GeForce RTX 4090
  - NVIDIA GeForce RTX 3090
  - NVIDIA GeForce RTX 2080 Ti
  - NVIDIA Quadro P6000
  - AMD Radeon RX 7900 XTX
  - AMD Radeon Pro V620
  - AMD Radeon Pro V520
  - AMD Radeon VII
  - AMD EPYC Zen4
  - Intel Core Ultra 7 155H
  - Intel Arc A380
  - Intel Arc A770
  - Intel Data Center GPU Flex 170
  - Intel Data Center GPU Max 1100
  - Intel Xeon Sapphire Rapids