Stars

List: ai (11 repositories)

Get up and running with Llama 3.3, Phi 4, Gemma 2, and other large language models.

Go · 106,919 stars · 8,559 forks · Updated Jan 11, 2025
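As a rough illustration for this entry (the description matches Ollama), a minimal sketch of querying a locally running server over its REST API; port 11434 is the default, and the model name is an assumption, substitute anything already fetched with `ollama pull`:

```python
import requests

# Assumed model name; use any model previously pulled with `ollama pull <name>`.
payload = {
    "model": "llama3.3",
    "prompt": "Explain what a GGUF file is in one sentence.",
    "stream": False,  # request a single JSON response instead of a token stream
}

# Ollama's default local endpoint; adjust host/port if configured differently.
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["response"])
```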

LLM inference in C/C++

C++ · 70,516 stars · 10,185 forks · Updated Jan 11, 2025
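This C/C++ inference engine (llama.cpp, going by the description) also ships an HTTP server with an OpenAI-compatible chat endpoint. A hedged sketch, assuming the server was started with something like `llama-server -m model.gguf` on its default port 8080:

```python
import requests

payload = {
    "model": "default",  # placeholder; the server answers with whichever GGUF it loaded
    "messages": [{"role": "user", "content": "Say hello in C++."}],
    "max_tokens": 64,
}

resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```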

Distribute and run LLMs with a single file.

C++ · 21,175 stars · 1,090 forks · Updated Jan 5, 2025

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.

TypeScript · 21,219 stars · 1,941 forks · Updated Jan 11, 2025

Home of StarCoder2!

Python · 1,824 stars · 164 forks · Updated Mar 21, 2024
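StarCoder2 checkpoints are published on the Hugging Face Hub, so a short, hedged completion sketch with the standard transformers API; the checkpoint name `bigcode/starcoder2-3b` is an assumption about which size to load:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-3b"  # assumed checkpoint; larger variants also exist
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Code completion: the model continues the prompt left to right.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```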

Self-hosted AI coding assistant

Rust · 22,530 stars · 1,060 forks · Updated Jan 11, 2025

Use Codestral Mamba with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.

Python · 30 stars · 2 forks · Updated Jul 18, 2024

Artificial Intelligence Infrastructure-as-Code Generator.

Go · 3,568 stars · 277 forks · Updated Oct 29, 2024

The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.

TypeScript · 3,233 stars · 177 forks · Updated Dec 18, 2024

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU su…

Python · 6,897 stars · 1,282 forks · Updated Jan 10, 2025
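A hedged sketch of this library's drop-in transformers-style loader as I understand its documented 4-bit path; the import path, the `load_in_4bit` flag, and the model id are assumptions and should be checked against the project's README:

```python
from ipex_llm.transformers import AutoModelForCausalLM  # assumed drop-in replacement
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2-1.5B-Instruct"  # assumed model id; any supported causal LM
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What does an NPU accelerate?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```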

A high-throughput and memory-efficient inference and serving engine for LLMs

Python · 33,527 stars · 5,125 forks · Updated Jan 11, 2025
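For this serving engine (vLLM, by the description), a minimal offline-inference sketch using its Python API; the model id is an assumption, and any Hugging Face model that vLLM supports can be substituted:

```python
from vllm import LLM, SamplingParams

# Assumed model id; swap in any supported Hugging Face causal LM.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["Summarize what continuous batching is."], params)
for out in outputs:
    print(out.outputs[0].text)
```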