Automatically remove cloud and Kubernetes resources based on a time to live tag, ttl.
Protect resources from deletion with a protection tag, do_not_delete.
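For example, here is a minimal sketch of how these tags could be applied (the namespace name and volume ID are hypothetical, and it assumes the ttl value is a duration in seconds):
$ kubectl label namespace my-test-namespace ttl=3600                                          # expires about 1 hour after being labeled
$ aws ec2 create-tags --resources vol-0123456789abcdef0 --tags Key=ttl,Value=3600             # same idea for an AWS resource
$ aws ec2 create-tags --resources vol-0123456789abcdef0 --tags Key=do_not_delete,Value=true   # protect the volume regardless of its ttl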
NOTE: this project is used in Qovery's production environment.
Check out our blog announcement of Pleco: https://www.qovery.com/blog/announcement-of-pleco-the-open-source-kubernetes-and-cloud-services-garbage-collector
- Kubernetes
- Namespaces
- AWS
- Document DB databases
- Document DB subnet groups
- Elasticache databases
- Elasticache subnet groups
- RDS databases
- RDS subnet groups
- RDS parameter groups
- RDS snapshots
- EBS volumes
- ELB load balancers
- EC2 Key pairs
- ECR repositories
- EKS clusters
- IAM groups
- IAM users
- IAM policies
- IAM roles
- IAM OpenID Connect providers
- CloudWatch logs
- KMS keys
- VPC vpcs
- VPC internet gateways
- VPC NAT gateways
- VPC Elastic IPs
- VPC route tables
- VPC subnets
- VPC security groups
- S3 buckets
- Lambda Functions
- SQS Queues
- Step Functions
- EC2 instances
- SCALEWAY
- Kubernetes clusters
- Database instances
- Load balancers
- Detached volumes
- S3 Buckets
- Unused Security Groups
- Orphan IPs
- VPCs
- Private networks
- DIGITAL OCEAN
- Kubernetes clusters
- Database instances
- Load balancers
- Detached volumes
- S3 Buckets
- Droplet firewalls
- Unused VPCs
- GCP
- Cloud Storage Buckets
- Artifact Registry Repositories
- Kubernetes clusters
- Cloud run jobs
- Networks (via JSON tags in the resource description, because the resource type has no tag support)
- Routers (via JSON tags in the resource description, because the resource type has no tag support)
- Service accounts (via JSON tags in the resource description, because the resource type has no tag support)
- AZURE
You can find a Helm chart here, a Docker image here, and all binaries on GitHub.
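As an illustration, a quick way to try Pleco is to run the Docker image directly. The image reference below is a placeholder (use the one linked above), and the command assumes the image's entrypoint is the pleco binary:
$ docker run --rm \
    -e AWS_ACCESS_KEY_ID=<access_key> \
    -e AWS_SECRET_ACCESS_KEY=<secret_key> \
    <pleco_image> \
    start aws -a eu-west-3 -e   # dry run by default; see the options below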
In order to make Pleco check and clean expired resources, you need to set the following environment variables:
$ export AWS_ACCESS_KEY_ID=<access_key>
$ export AWS_SECRET_ACCESS_KEY=<secret_key>
$ export SCW_ACCESS_KEY=<access_key>
$ export SCW_SECRET_KEY=<secret_key>
$ export SCW_VOLUME_TIMEOUT=<delay_before_detached_volume_deletion_in_hours_since_last_update> # default is 2 hours
$ export DO_API_TOKEN=<your_do_api_token>
$ export DO_SPACES_KEY=<your_do_api_key_for_spaces>
$ export DO_SPACES_SECRET=<your_do_api_secret_for_spaces>
$ export DO_VOLUME_TIMEOUT=<delay_before_detached_volume_deletion_in_hours_since_creation> # default is 2 hours
$ export GOOGLE_APPLICATION_CREDENTIALS=<path_to_your_credentials_json_file>
A pleco command has the following structure:
pleco start <cloud_provider> [options]
You can set the Kubernetes connection mode with:
--kube-conn, -k <connection mode>
Default is "in"
You can set the log level with:
--level <log level>
Default is "info"
You can set the interval between two Pleco checks with:
--check-interval, -i <time in seconds>
Default is "120"
If you disable dry run, Pleco will delete expired resources. Otherwise, it will only tell you how many resources are expired.
You can disable dry-run with:
--disable-dry-run, -y
Default is "false"
When Pleco looks for expired resources, it does so per AWS region.
You can set region(s) with:
--aws-regions, -a <region(s)>
For example:
-a eu-west-3,us-east-2
When running Pleco, you have to specify which resources will be checked for expiration.
Here are some of the resources you can check:
--enable-eks, -e # Enable EKS watch
--enable-iam, -u # Enable IAM watch (groups, policies, roles, users)
pleco start aws --level debug -i 240 -a eu-west-3 -e -r -m -c -l -b -p -s -w -n -u -z -o -f -x -q -y
When Pleco looks for expired resources, it does so per Scaleway zone.
You can set zone(s) with:
--scw-zones, -a <zone(s)>
For example:
-a fr-par-1
When running Pleco, you have to specify which resources will be checked for expiration.
Here are some of the resources you can check:
--enable-cluster, -e # Enable cluster watch
pleco start scaleway --level debug -i 240 -a fr-par-1 -e -r -o -l -b -s -p -y
When Pleco looks for expired resources, it does so per DigitalOcean region.
You can set region(s) with:
--do-regions, -a <region(s)>
For example:
-a nyc3
When running Pleco, you have to specify which resources will be checked for expiration.
Here are some of the resources you can check:
--enable-cluster, -e # Enable cluster watch
pleco start do --level debug -i 240 -a nyc3 -e -r -s -l -b -f -v -y
When Pleco looks for expired resources, it does so per GCP region.
You can set region(s) with:
--gcp-regions, -a <region(s)>
For example:
-a europe-west9
When running Pleco, you have to specify which resources will be checked for expiration.
Here are some of the resources you can check:
--enable-cluster # Enable cluster watch
--enable-object-storage # Enable object storage watch
--enable-artifact-registry # Enable artifact registry watch
--enable-network # Enable network watch
--enable-router # Enable router watch
--enable-iam # Enable IAM watch (service accounts)
pleco start gcp --level debug -i 240 --enable-object-storage --enable-artifact-registry --enable-cluster --enable-network --enable-router --enable-iam --enable-job --gcp-regions europe-west9 --disable-dry-run