Automated installation script for vLLM on HPC systems with ROCm support. The installer targets AMD MI300X GPUs by default and can be adapted to similar architectures.

## Features
- Automated installation of vLLM and all dependencies
- ROCm support with PyTorch nightly builds
- Flash Attention integration
- Anaconda environment management
- Comprehensive logging and error handling
- HPC module management
- Support for AMD MI300X GPUs (customizable for other architectures)
## Requirements

- Access to an HPC system with ROCm support
- AMD GPU (default configuration for MI300X)
- Git
- Anaconda/Miniconda
- Module environment system
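
Before running the installer, it may help to confirm these prerequisites are visible on a login or compute node. This is a suggested check, not part of the installer; exact module names vary between clusters and are assumptions here:

```bash
# Suggested prerequisite check — adjust module names to what `module avail` shows
module avail rocm    # a ROCm module should be available on the system
git --version        # Git is required to clone the repository
conda --version      # Anaconda/Miniconda must be on PATH
rocminfo | grep gfx  # on a GPU node: expect gfx942 for MI300X
```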
## Installation

1. Clone the repository:
   ```bash
   git clone https://github.com/AI-DarwinLabs/vllm-hpc-installer.git
   cd vllm-hpc-installer
   ```

2. Make the script executable:
   ```bash
   chmod +x install.sh
   ```

3. Run the installer:
   ```bash
   ./install.sh
   ```
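
Once the script finishes, a quick sanity check can confirm that the ROCm build of PyTorch and vLLM were installed. This is a suggested verification, not part of the installer, and it assumes the default environment name `vllm`:

```bash
# Suggested post-install check (assumes the default conda env name "vllm")
conda activate vllm
# torch.version.hip is set on ROCm builds of PyTorch (it is None on CUDA builds)
python -c "import torch, vllm; print(torch.version.hip, vllm.__version__)"
```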
## Configuration

You can customize the installation by editing `config/default_config.sh`:

- `PYTHON_VERSION`: Python version (default: 3.11)
- `PYTORCH_ROCM_VERSION`: ROCm version for PyTorch (default: 6.2)
- `GPU_ARCH`: GPU architecture (default: gfx942 for MI300X)
- `CONDA_ENV_NAME`: Name of the conda environment (default: vllm)
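
As a sketch of what such overrides might look like — the variable names and defaults come from the list above, but the exact layout of `config/default_config.sh` may differ:

```bash
# config/default_config.sh — example values; layout is illustrative
PYTHON_VERSION="3.11"        # Python used for the conda environment
PYTORCH_ROCM_VERSION="6.2"   # ROCm version targeted by the PyTorch build
GPU_ARCH="gfx942"            # MI300X; use e.g. gfx90a for MI250X
CONDA_ENV_NAME="vllm"        # name of the conda environment to create
```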
## Project Structure

```
vllm-hpc-installer/
├── install.sh   # Main installation script
├── config/      # Configuration files
├── modules/     # Installation modules
└── docs/        # Documentation
```
## Support

For issues and feature requests, please use the GitHub Issues page.

## Contributing

Contributions are welcome! Please read our Contributing Guidelines before submitting pull requests.