- 👋 Hi, I’m Konrad. I work at Habana (Intel) and make high-performance AI software.
- I'm currently building vLLM-fork for Gaudi. I like it a lot.
- That's my cat in the avatar.
- gotta go fast
- Gdańsk, Poland (UTC+01:00)
- in/kzawora
Pinned

- HabanaAI/vllm-fork (forked from vllm-project/vllm): a high-throughput and memory-efficient inference and serving engine for LLMs
- vllm-project/vllm: a high-throughput and memory-efficient inference and serving engine for LLMs