Create management cluster using controller behind feature flag #7110
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.

Skipping CI for Draft Pull Request.
I'm gonna stop reviewing here, it seems like you still have a lot of cleaning to do, so I'm not sure if you meant for me to review this :)
```diff
@@ -971,7 +971,7 @@ func (p *vsphereProvider) createSecret(ctx context.Context, cluster *types.Cluster
 }
 
 func (p *vsphereProvider) PreCAPIInstallOnBootstrap(ctx context.Context, cluster *types.Cluster, clusterSpec *cluster.Spec) error {
-	return nil
+	return p.UpdateSecrets(ctx, cluster, nil)
 }
```
Curious: why wasn't this needed before but is now? Nothing wrong with it, it makes sense, just want to understand the change.
So with the controller workflow, we now have the vSphere datacenter controller and the EKS-A controller running in parallel, and we need the secret because we try to get and set the secrets at the datacenter controller level. In the previous CLI flow we set them after the workload cluster was created, so we didn't need this as a pre-CAPI-install step on the bootstrap cluster. Nutanix does this in a similar way, and I think we will need it in other providers too.
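For context, here is a minimal sketch (not the actual controller code; the reconciler name, namespace, and secret name are all assumptions) of why a datacenter-level reconcile needs the credentials Secret to exist on the bootstrap cluster before CAPI install:

```go
// Illustrative sketch only, not the actual eks-anywhere controller code.
// It shows a datacenter-style reconciler that expects the provider credentials
// Secret to already exist on the cluster it runs against (the bootstrap cluster
// during create). The namespace, secret name, and type names are assumptions.
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type DatacenterReconciler struct {
	client client.Client
}

func (r *DatacenterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// If PreCAPIInstallOnBootstrap has not created the secret yet, this Get fails
	// and the datacenter object can never become ready.
	secret := &corev1.Secret{}
	key := types.NamespacedName{Namespace: "eksa-system", Name: "vsphere-credentials"}
	if err := r.client.Get(ctx, key, secret); err != nil {
		return ctrl.Result{}, fmt.Errorf("fetching vSphere credentials: %w", err)
	}

	// ...use the credentials to validate the datacenter against vCenter...
	return ctrl.Result{}, nil
}
```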
```go
func (s *createWorkloadClusterTask) Run(ctx context.Context, commandContext *task.CommandContext) task.Task {
	logger.Info("Creating new workload cluster")
	workloadCluster, err := commandContext.ClusterManager.GetWorkloadCluster(ctx, commandContext.BootstrapCluster, commandContext.ClusterSpec, commandContext.Provider)
```
I have my doubts about this method.

First, I would prefer that we don't implement it in cluster manager. That's a super overloaded entity and, tbh, not a good abstraction (I am the one to be blamed there). It can still live in the `clustermanager` package if that makes it a bit easier for you, but ideally it should not depend on the `ClusterManager` object.

Second, I wonder if we can just group it with the applier to create a higher-level abstraction, like a `ClusterCreator` or `CreateCluster` (horrible name, feel free to choose a better one). This entity would take the cluster spec and the kubeconfig for the "management cluster" (the bootstrap cluster on this occasion) and return the `types.Cluster` object for the newly created workload cluster. It can use the applier internally. It's a similar concept to what you can find in https://github.com/aws/eks-anywhere/blob/main/pkg/workflow/task/workload/create.go (minus the part that ties it to the `Task` interface). Alternatively, you could have a `CreateCluster` with two methods, `Create` (or `CreateSync`) and `WriteKubeconfig`.

I believe this would also be reusable for the workload cluster flow.
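For illustration, a rough Go sketch of that shape, assuming the PR already has an applier and some way to write kubeconfigs; every name below (`Applier`, `KubeconfigWriter`, `ClusterCreator`, `CreateSync`) is a placeholder, not the actual eks-anywhere API:

```go
// Sketch of the suggested shape only; all names here are placeholders.
package clustercreator

import (
	"context"

	"github.com/aws/eks-anywhere/pkg/cluster"
	"github.com/aws/eks-anywhere/pkg/types"
)

// Applier is assumed to apply the cluster spec through the management cluster
// (the bootstrap cluster during create) and wait for the CAPI cluster to be ready.
type Applier interface {
	Run(ctx context.Context, spec *cluster.Spec, managementCluster types.Cluster) error
}

// KubeconfigWriter is an assumed helper that fetches and persists the new
// cluster's kubeconfig, returning the path it was written to.
type KubeconfigWriter interface {
	WriteKubeconfig(ctx context.Context, clusterName string, managementCluster types.Cluster) (string, error)
}

// ClusterCreator groups the applier with kubeconfig handling so callers get back
// a usable types.Cluster without going through ClusterManager.
type ClusterCreator struct {
	applier Applier
	writer  KubeconfigWriter
}

// CreateSync applies the spec and returns the handle for the new workload cluster.
func (c ClusterCreator) CreateSync(ctx context.Context, spec *cluster.Spec, managementCluster types.Cluster) (*types.Cluster, error) {
	if err := c.applier.Run(ctx, spec, managementCluster); err != nil {
		return nil, err
	}

	kubeconfigPath, err := c.writer.WriteKubeconfig(ctx, spec.Cluster.Name, managementCluster)
	if err != nil {
		return nil, err
	}

	return &types.Cluster{
		Name:           spec.Cluster.Name,
		KubeconfigFile: kubeconfigPath,
	}, nil
}
```

The point is only that the entity owns "apply the spec, wait, hand back a usable `types.Cluster`", independent of `ClusterManager`, so nothing in it is bootstrap-specific.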
```diff
@@ -1167,6 +1208,37 @@ func (c *ClusterManager) CreateEKSAResources(ctx context.Context, cluster *types.Cluster
 	return c.ApplyReleases(ctx, clusterSpec, cluster)
 }
 
+// CreateEKSAReleaseBundle applies the eks-a release bundle to the cluster.
+func (c *ClusterManager) CreateEKSAReleaseBundle(ctx context.Context, cluster *types.Cluster, clusterSpec *cluster.Spec) error {
```
do we really need the wrapper?
I ended up using this wrapper because while creating the cluster's namespace we use the cluster client, which is a private retrier client field on the cluster manager. Moving the code out would stop me from using that client.
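A self-contained sketch of that constraint, with stand-in types rather than the real eks-anywhere ones (the field and method names are assumptions), just to show why the logic stays behind a `ClusterManager` method:

```go
// Self-contained sketch with stand-in types (not the real eks-anywhere code):
// the retrier-wrapped client is an unexported field of ClusterManager, so the
// namespace-creation step can only run behind a ClusterManager method, which is
// why CreateEKSAReleaseBundle exists as a thin wrapper instead of living in the task.
package sketch

import "context"

// clusterClient stands in for the private retrier client held by ClusterManager.
type clusterClient interface {
	CreateNamespaceIfNotPresent(ctx context.Context, kubeconfig, namespace string) error
}

type ClusterManager struct {
	clusterClient clusterClient // unexported: unreachable from outside the package
	applyBundle   func(ctx context.Context, kubeconfig string) error
}

// CreateEKSAReleaseBundle needs the private client for the namespace step,
// so the logic cannot simply be moved out of the cluster manager.
func (c *ClusterManager) CreateEKSAReleaseBundle(ctx context.Context, kubeconfig, namespace string) error {
	if err := c.clusterClient.CreateNamespaceIfNotPresent(ctx, kubeconfig, namespace); err != nil {
		return err
	}
	return c.applyBundle(ctx, kubeconfig)
}
```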
```go
		return &installEksaComponentsOnWorkloadTask{}
	}
	logger.Info("Moving cluster management from bootstrap to workload cluster")
	err := commandContext.ClusterManager.MoveCAPI(ctx, commandContext.BootstrapCluster, commandContext.WorkloadCluster, commandContext.WorkloadCluster.Name, commandContext.ClusterSpec, types.WithNodeRef())
```
Do we need to pause the eks-a reconciliation before we do this? If not, you run the risk of the eks-a controller trying to create a new CAPI cluster. Basically, do the same thing that clusterctl does to move resources.
We can add the pause reconcile annotation here. However, since the order of tasks is to move CAPI first and then install the EKS-A controller, the CAPI cluster will already exist by the time the controller starts reconciling, so would it make a difference?

Created an issue btw: https://github.com/aws/eks-anywhere-internal/issues/2096
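For reference, a minimal sketch of what pausing before the move could look like; the annotation key is an assumption about the eks-a pause annotation, and the real code would go through the existing pause helpers rather than touching the object directly:

```go
// Sketch only: pause eks-a reconciliation of the cluster before MoveCAPI,
// mirroring what clusterctl does for CAPI resources. The annotation key is an
// assumption; the real flow would reuse the existing pause helpers instead.
package example

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"

	anywherev1 "github.com/aws/eks-anywhere/pkg/api/v1alpha1"
)

func pauseEKSAReconcile(ctx context.Context, c client.Client, clusterName, namespace string) error {
	eksaCluster := &anywherev1.Cluster{}
	if err := c.Get(ctx, types.NamespacedName{Name: clusterName, Namespace: namespace}, eksaCluster); err != nil {
		return err
	}

	if eksaCluster.Annotations == nil {
		eksaCluster.Annotations = map[string]string{}
	}
	// Assumed pause annotation key; the controller would skip reconciliation while it is set.
	eksaCluster.Annotations["anywhere.eks.amazonaws.com/paused"] = "true"

	return c.Update(ctx, eksaCluster)
}
```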
Issue #, if available:
Description of changes:
Create management cluster using controller behind feature flag.
Testing (if applicable):
Documentation added/planned (if applicable):
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.