kaldi-asr/kaldi is the official location of the Kaldi project.
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
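As a minimal sketch of that programming model (not tied to any particular project listed here), the classic vector-add kernel below launches one GPU thread per element; the array size and launch configuration are arbitrary choices for illustration.

// vector_add.cu — a minimal CUDA sketch: add two vectors on the GPU.
#include <cstdio>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                          // 1M elements (illustrative)
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);                   // unified memory visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);        // launch across many GPU threads
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}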
A high-throughput and memory-efficient inference and serving engine for LLMs
Open3D: A Modern Library for 3D Data Processing
Build and run Docker containers leveraging NVIDIA GPUs
Instant neural graphics primitives: lightning fast NeRF and more
Samples for CUDA developers that demonstrate features in the CUDA Toolkit
A flexible framework of neural networks for deep learning
A fast, scalable, high-performance gradient boosting on decision trees library for ranking, classification, regression, and other machine learning tasks in Python, R, Java, and C++. Supports computation on CPU and GPU.
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
CUDA Templates for Linear Algebra Subroutines
Go package for computer vision using OpenCV 4 and beyond. Includes support for DNN, CUDA, OpenCV Contrib, and OpenVINO.
[ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl
Computer vision projects | Fun AI projects related to computer vision (Python, C++, embedded systems)
OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
Created by NVIDIA
Released June 23, 2007