[DISCUSSION] Introduce new CRD as ShardingSphereAutoScaler #433

Open

mlycore opened this issue Jul 11, 2023 · 2 comments

mlycore commented Jul 11, 2023

Background

At present, the ShardingSphere Operator CRDs ComputeNode and StorageNode make it easy to deploy a complete ShardingSphere Proxy cluster, but they provide no way to scale it automatically. A corresponding solution therefore needs to be proposed.

New CRD Autoscaler

Overview
(image: AutoScaler overview diagram)

Definition

The goal of AutoScaler is to scale the ShardingSphere cluster automatically according to a configured strategy, which involves the following aspects:

Scaling Target Object
The scaling target is the ShardingSphere cluster, including ComputeNode, StorageNode, and the planned Cluster. Similar to HPA, AutoScaler is not part of the original deployment structure but acts as an enhancement. Therefore it is not configured as an inherent Spec attribute of ComputeNode or StorageNode, but is created separately on demand.

On the one hand, a single AutoScaler should serve only one ShardingSphere cluster and declare a single provider of the automatic scaling mechanism. To keep things flexible and stay compatible with possible linked scaling in the future, different policies can be set for ComputeNode and StorageNode respectively, and each policy can use horizontal scaling or vertical scaling. Each policy declares the target resource object (or a corresponding selector) and its own configuration items.

On the other hand, since an early AutoScaler may be oriented to a group of ComputeNodes and StorageNodes, the target is not bound implicitly by name and namespace but is specified explicitly in each policy. To prevent multiple AutoScalers from binding to the same target resource, every create or update is compared against the AutoScalers already in the cluster; if one is already configured for the same resource, the request is rejected (a sketch of this check follows).
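A minimal sketch of this uniqueness check, assuming a validating webhook backed by a controller-runtime client; the function name and the AutoScalerList type are illustrative, not part of the proposal:

import (
        "context"
        "fmt"

        "sigs.k8s.io/controller-runtime/pkg/client"
)

// validateUniqueTarget rejects an AutoScaler whose target is already claimed
// by another AutoScaler in the same namespace (hypothetical helper name).
func validateUniqueTarget(ctx context.Context, c client.Client, in *AutoScaler) error {
        var existing AutoScalerList
        if err := c.List(ctx, &existing, client.InNamespace(in.Namespace)); err != nil {
                return err
        }
        for _, as := range existing.Items {
                if as.Name == in.Name {
                        continue // updating the same AutoScaler is allowed
                }
                for _, ep := range as.Spec.PolicyGroup {
                        for _, np := range in.Spec.PolicyGroup {
                                if ep.TargetSelector != nil && np.TargetSelector != nil &&
                                        ep.TargetSelector.ObjectRef == np.TargetSelector.ObjectRef {
                                        return fmt.Errorf("target %s/%s is already managed by AutoScaler %q",
                                                np.TargetSelector.ObjectRef.Kind, np.TargetSelector.ObjectRef.Name, as.Name)
                                }
                        }
                }
        }
        return nil
}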

Scaling Policy
Cluster-oriented automatic scaling is obtained by combining the policies designed for ComputeNode and StorageNode:

  • ComputeNode-oriented automatic scaling includes HPA and VPA. Among them, custom-metrics autoscaling and VPA require additional controllers to be installed.

Scaling Providers
There may be more than one mechanism provider for AutoScaler. Kubernetes has a built-in HPA, which can be extended with custom metrics via the Prometheus Adapter; the community also provides the VPA mechanism, and ShardingSphere has its own SLA-based autoscaling controller. When creating an AutoScaler, no declaration is required if the Operator's default self-implemented mechanism is used; otherwise the provider must be declared explicitly so that the related availability checks can run.
(image: scaling mechanism providers diagram)
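As a sketch, the provider identifiers could be modeled as named constants mirroring the optional values documented on the Provider field below; the constant names are illustrative:

const (
        ProviderDefault       = ""              // default: handled by the ShardingSphere Operator itself
        ProviderKubernetesHPA = "KubernetesHPA" // Kubernetes native HorizontalPodAutoscaler
        ProviderKubernetesVPA = "KubernetesVPA" // Kubernetes community VerticalPodAutoscaler
        ProviderOther         = "Other"         // third-party controller
)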

In summary, a basic AutoScaler is defined as follows:

type AutoScaler struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`
        Spec              AutoScalerSpec   `json:"spec,omitempty"`
        Status            AutoScalerStatus `json:"status,omitempty"`
}

AutoScalerSpec contains the definition of the automatic scaling behavior, including providers, policy groups, etc. The main logic of the AutoScaler reconciler is to render the corresponding HPA and VPA configurations from each ScalingPolicy in the AutoScaler (a sketch of this loop follows the type definitions below).

type AutoScalerSpec struct {
        // PolicyGroup declares a set of scaling policies, including horizontal and vertical scaling.
        // A ComputeNode may enable HPA and VPA at the same time, but the combined result is not guaranteed.
        // The HPA/VPA configuration for StorageNode has not been verified yet.
        PolicyGroup []ScalingPolicy `json:"policyGroup,omitempty"`
}

type ScalingPolicy struct {
        // TargetSelector is used to select the auto-scaling target.
        // Both the native CrossVersionObjectReference and a label Selector are supported;
        // the first version plans to support CrossVersionObjectReference only.
        TargetSelector *ObjectRefSelector `json:"targetSelector"`

        // Provider is the provider of the scaling mechanism. Optional values are:
        // - Empty: default value, meaning the mechanism is provided by the ShardingSphere Operator
        // - KubernetesHPA: use the Kubernetes native HPA
        // - KubernetesVPA: use the Kubernetes community VPA
        // - Other: use a third-party controller
        Provider string `json:"provider,omitempty" yaml:"provider,omitempty"`

        // Horizontal contains the necessary parameters for HPA scaling.
        // It does not contain StorageNode related configuration.
        Horizontal *HorizontalScaling `json:"horizontal,omitempty"`

        // Vertical contains the necessary parameters for VPA scaling.
        // It does not contain StorageNode related configuration.
        Vertical *VerticalScaling `json:"vertical,omitempty"`
}

type ObjectRefSelector struct {
        ObjectRef autoscalingv2.CrossVersionObjectReference `json:"objectRef,omitempty" yaml:"targetRefs,omitempty"`
        Selector  *metav1.LabelSelector                     `json:"selector,omitempty"`
}

// The following configuration items are basically the same as the HPA configuration;
// please refer to the corresponding documentation for descriptions.
type HorizontalScaling struct {
        MaxReplicas    int32
        MinReplicas    int32
        ScaleUpRules   *autoscalingv2.HPAScalingRules
        ScaleDownRules *autoscalingv2.HPAScalingRules
        Metrics        []autoscalingv2.MetricSpec
}

// The following configuration items are basically the same as the VPA configuration, 
// please refer to the corresponding documentation for instructions
type VerticalScaling struct {
        UpdatePolicy    *vpav1.PodUpdatePolicy
        ResourcePolicy  *vpav1.PodResourcePolicy
        Recommenders    []vpav1.VerticalPodAutoscalerRecommenderSelector
}
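A minimal sketch of the reconcile loop mentioned above, assuming a standard controller-runtime reconciler that embeds client.Client; renderHPA, renderVPA, and createOrUpdate are hypothetical helpers:

import (
        "context"

        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
)

func (r *AutoScalerReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        var as AutoScaler
        if err := r.Get(ctx, req.NamespacedName, &as); err != nil {
                return ctrl.Result{}, client.IgnoreNotFound(err)
        }

        for _, policy := range as.Spec.PolicyGroup {
                switch policy.Provider {
                case "KubernetesHPA":
                        // render an autoscaling/v2 HorizontalPodAutoscaler from the policy
                        if err := r.createOrUpdate(ctx, renderHPA(&as, policy)); err != nil {
                                return ctrl.Result{}, err
                        }
                case "KubernetesVPA":
                        // render an autoscaling.k8s.io/v1 VerticalPodAutoscaler from the policy
                        if err := r.createOrUpdate(ctx, renderVPA(&as, policy)); err != nil {
                                return ctrl.Result{}, err
                        }
                default:
                        // empty provider: scaling is handled by the Operator's own mechanism
                }
        }
        return ctrl.Result{}, nil
}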

In AutoScalerStatus, Condition mainly includes:

  • ScalingReady: True if the check finds that scaling is currently available (e.g. the HPA has been created); False indicates a problem with the base environment that AutoScaler relies on.

type AutoScalerStatus struct {
        Conditions []AutoScalerCondition
}

type AutoScalerCondition struct {
        Type               AutoScalerConditionType `json:"type" protobuf:"bytes,1,name=type"`
        Status             v1.ConditionStatus      `json:"status" protobuf:"bytes,2,name=status"`
        LastTransitionTime metav1.Time             `json:"lastTransitionTime,omitempty" protobuf:"bytes,3,opt,name=lastTransitionTime"`
        Reason             string                  `json:"reason,omitempty" protobuf:"bytes,4,opt,name=reason"`
        Message            string                  `json:"message,omitempty" protobuf:"bytes,5,opt,name=message"`
}

type AutoScalerConditionType string

const (
    ScalingReady AutoScalerConditionType = "ScalingReady"
)
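For illustration, the reconciler could report ScalingReady with a small helper like the following sketch (the helper name is illustrative):

import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setScalingReady upserts the ScalingReady condition on the status.
func setScalingReady(status *AutoScalerStatus, ready bool, reason, message string) {
        cond := AutoScalerCondition{
                Type:               ScalingReady,
                Status:             v1.ConditionFalse,
                LastTransitionTime: metav1.Now(),
                Reason:             reason,
                Message:            message,
        }
        if ready {
                cond.Status = v1.ConditionTrue
        }
        for i := range status.Conditions {
                if status.Conditions[i].Type == cond.Type {
                        status.Conditions[i] = cond
                        return
                }
        }
        status.Conditions = append(status.Conditions, cond)
}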

HPA for ComputeNode

If the Kubernetes native HPA is used, a configuration example is as follows:

apiVersion: shardingsphere.apache.org/v1alpha1
kind: AutoScaler
metadata:
  name: foo
spec:
  policyGroup:
  - targetSelector:
      objectRef:
        apiVersion: shardingsphere.apache.org/v1alpha1
        kind: ComputeNode
        name: foo
    provider: KubernetesHPA
    horizontal:
      maxReplicas: 5
      minReplicas: 1
      scaleUpRules:
        stabilizationWindowSeconds: 10
      scaleDownRules:
        stabilizationWindowSeconds: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
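For reference, the operator could render the policy above into a native autoscaling/v2 HorizontalPodAutoscaler roughly like the sketch below; targeting the ComputeNode's underlying Deployment (and its name) is an assumption here:

// assumed imports: autoscalingv2 "k8s.io/api/autoscaling/v2", corev1 "k8s.io/api/core/v1",
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/utils/ptr"
hpa := &autoscalingv2.HorizontalPodAutoscaler{
        ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: "default"},
        Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
                ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
                        APIVersion: "apps/v1",
                        Kind:       "Deployment",
                        Name:       "foo", // assumed: the Deployment managed by ComputeNode "foo"
                },
                MinReplicas: ptr.To[int32](1),
                MaxReplicas: 5,
                Behavior: &autoscalingv2.HorizontalPodAutoscalerBehavior{
                        ScaleUp:   &autoscalingv2.HPAScalingRules{StabilizationWindowSeconds: ptr.To[int32](10)},
                        ScaleDown: &autoscalingv2.HPAScalingRules{StabilizationWindowSeconds: ptr.To[int32](10)},
                },
                Metrics: []autoscalingv2.MetricSpec{{
                        Type: autoscalingv2.ResourceMetricSourceType,
                        Resource: &autoscalingv2.ResourceMetricSource{
                                Name: corev1.ResourceCPU,
                                Target: autoscalingv2.MetricTarget{
                                        Type:               autoscalingv2.UtilizationMetricType,
                                        AverageUtilization: ptr.To[int32](50),
                                },
                        },
                }},
        },
}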

If custom metrics are used, a configuration example is as follows:

apiVersion: shardingsphere.apache.org/v1alpha1
kind: AutoScaler
metadata:
  name: foo
spec:
  policyGroup:
  - targetSelector:
      objectRef:
        apiVersion: shardingsphere.apache.org/v1alpha1
        kind: ComputeNode
        name: foo
    provider: KubernetesHPA
    horizontal:
      maxReplicas: 5
      minReplicas: 1
      scaleUpRules:
        stabilizationWindowSeconds: 10
      scaleDownRules:
        stabilizationWindowSeconds: 10
      metrics:
      - type: Pods
        pods:
          metric:
            name: proxy_execute_latency_millis_bucket
          target:
            type: AverageValue
            averageValue: 500m

VPA for ComputeNode

If VPA is used, the VPA controller needs to be deployed first. The following is an example:

apiVersion: shardingsphere.apache.org/v1alpha1
kind: AutoScaler
metadata:
  name: foo
spec:
  policyGroup:
  - targetSelector:
      objectRef:
        apiVersion: shardingsphere.apache.org/v1alpha1
        kind: ComputeNode
        name: foo
    provider: KubernetesVPA
    vertical:
      updatePolicy:
        updateMode: Recreate
        minReplicas: 1
      resourcePolicy:
        containerPolicies:
        - containerName: shardingsphere-proxy
          minAllowed:
            cpu: 100m
            memory: 50Mi
          maxAllowed:
            cpu: 1
            memory: 2Gi
          controlledResources:
          - cpu
          - memory
          controlledValues: RequestsOnly

For detailed configuration items, refer to the Kubernetes VPA documentation.
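Similarly, the policy above could be rendered into an autoscaling.k8s.io/v1 VerticalPodAutoscaler; the sketch below shows only the target reference and update policy, and the Deployment target is again an assumption:

// assumed imports: autoscalingv1 "k8s.io/api/autoscaling/v1",
// vpav1 "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1",
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
updateMode := vpav1.UpdateModeRecreate
vpa := &vpav1.VerticalPodAutoscaler{
        ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: "default"},
        Spec: vpav1.VerticalPodAutoscalerSpec{
                TargetRef: &autoscalingv1.CrossVersionObjectReference{
                        APIVersion: "apps/v1",
                        Kind:       "Deployment",
                        Name:       "foo", // assumed: the Deployment managed by ComputeNode "foo"
                },
                UpdatePolicy: &vpav1.PodUpdatePolicy{UpdateMode: &updateMode},
                // ResourcePolicy would be copied from spec.policyGroup[].vertical.resourcePolicy
        },
}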


mlycore commented Mar 23, 2024

Consider renaming this CRD to SPA (aka ShardingSphere Proxy AutoScaler).

@mlycore mlycore self-assigned this Mar 23, 2024
@mlycore mlycore changed the title [DISCUSSION] Introduce new CRD as AutoScaler [DISCUSSION] Introduce new CRD as ShardingSphereAutoScaler Jul 6, 2024

mlycore commented Jul 6, 2024

More discussion and thoughts will be submitted in #489.
