ElasticAI KubeWatch is an AI-driven auto-scaling solution that uses Azure Functions and Azure Kubernetes Service (AKS) to adjust cluster resources dynamically based on real-time application performance metrics. 📈
- AI-powered auto-scaling of AKS clusters based on real-time performance metrics.
- Easily configurable scaling rules to handle varying workloads efficiently.
- Integrates with Azure Functions to enable dynamic resource adjustments.
- Includes a machine learning model for demand forecasting.
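As a rough sketch of what this kind of forecast-driven scaling can look like, the snippet below combines a naive moving-average demand forecast with a proportional replica calculation. The function names, thresholds, and the moving-average model are illustrative assumptions, not the actual logic shipped in this repository.

```python
# Hypothetical sketch of the decision logic an Azure Function could run
# on a timer trigger: forecast near-term demand from recent metric
# samples, then pick a replica count for the AKS deployment.
# All names and thresholds here are illustrative, not from the repo.

def forecast_demand(samples, window=3):
    """Naive moving-average forecast of the next metric value."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def target_replicas(cpu_samples, current_replicas,
                    target_cpu=60.0, min_replicas=1, max_replicas=10):
    """Pick a replica count so forecast CPU lands near the target utilization."""
    forecast = forecast_demand(cpu_samples)
    # Proportional rule (same shape as the Kubernetes HPA formula):
    # desired = current * (observed / target), then clamp to the allowed range.
    desired = round(current_replicas * forecast / target_cpu)
    return max(min_replicas, min(max_replicas, desired))
```

For example, with recent CPU samples of 70%, 80%, and 90% and 4 current replicas, the forecast is 80% and the function scales out to 5 replicas; a consistently idle workload is clamped to the configured minimum instead of scaling to zero.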
To get started with ElasticAI KubeWatch, follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/AnthonyByansi/ElasticAI_KubeWatch.git
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Configure the settings in `config/config.yaml` and `config/scaling_rules.yaml`.
- Deploy the application to AKS using the provided script:

  ```bash
  ./scripts/deploy_to_aks.sh
  ```
- Monitor the auto-scaling behavior through Azure Functions and the AKS dashboard.
For detailed information on the architecture, deployment, and usage of ElasticAI KubeWatch, check out the Documentation folder:
- Architecture: Overview of the solution's design and components.
- Deployment: Step-by-step guide on deploying the application to AKS.
- User Guide: Instructions on configuring and using the auto-scaling solution.
The ElasticAI KubeWatch solution can be deployed to Azure Kubernetes Service (AKS) using the provided deployment script:
```bash
./scripts/deploy_to_aks.sh
```
Make sure you have the necessary permissions and the AKS cluster is properly set up before running the script.
ElasticAI KubeWatch provides configuration options through YAML files in the `config` directory:

- `config.yaml`: general settings for the application.
- `scaling_rules.yaml`: rules for auto-scaling based on performance metrics.
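The authoritative schema lives in the repository's own `scaling_rules.yaml`; as a loose illustration of what threshold-based scaling rules of this kind often look like, a file might resemble the following (every key name, metric, and value below is an assumption, not the project's actual schema):

```yaml
# Illustrative only — check the repository's scaling_rules.yaml
# for the actual keys and supported metrics.
rules:
  - metric: cpu_utilization
    threshold: 75        # percent, averaged across pods
    action: scale_out
    step: 2              # replicas to add per trigger
  - metric: cpu_utilization
    threshold: 30
    action: scale_in
    step: 1
limits:
  min_replicas: 1
  max_replicas: 10
```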
Contributions to ElasticAI KubeWatch are welcome! To contribute, please follow our Contribution Guidelines.
This project is licensed under the MIT License.