
Refactor docs
Signed-off-by: Nicolas Bigler <nicolas.bigler@vshn.ch>
TheBigLee committed Nov 30, 2023
1 parent f893a69 commit d3a107f
Showing 7 changed files with 59 additions and 131 deletions.
2 changes: 1 addition & 1 deletion component/class/defaults.yml
@@ -71,7 +71,7 @@ parameters:
- apiGroups:
    - vshn.appcat.vshn.io
  resources:
-    - vshnredis
+    - "*"
  verbs:
    - get
- apiGroups:
@@ -9,7 +9,7 @@ rules:
- apiGroups:
    - vshn.appcat.vshn.io
  resources:
-    - vshnredis
+    - '*'
  verbs:
    - get
- apiGroups:
45 changes: 45 additions & 0 deletions docs/modules/ROOT/pages/runbooks/vshn-helm-debugging-partial.adoc
@@ -0,0 +1,45 @@
=== icon:bug[] Steps for debugging

Failed probes can have a multitude of reasons, but they generally fall into two classes:
either the instance itself is failing, or provisioning or updating the instance failed.

In any case, you should first figure out where the affected instance runs.
The alert will provide you with three labels: `cluster_id`, `namespace`, and `name`.
Connect to the Kubernetes cluster with the provided `cluster_id` and get the affected claim.

[source,shell,subs="attributes"]
----
export NAMESPACE={{ namespace }} # the `namespace` label from the alert
export NAME={{ name }}           # the `name` label from the alert
# Resolve the composite backing the claim
export COMPOSITE=$(kubectl -n $NAMESPACE get {service} $NAME -o jsonpath="{.spec.resourceRef.name}")
kubectl -n $NAMESPACE get {service} $NAME
----

If the claim is not `SYNCED`, this might indicate an issue with provisioning.
If it is synced, there is most likely an issue with the instance itself; you can skip to the next subsection.
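
A healthy claim will look roughly like the following; the exact columns depend on the Crossplane version, and the values here are purely illustrative:

[source,shell]
----
NAME          SYNCED   READY   CONNECTION-SECRET   AGE
my-instance   True     True    my-instance-creds   42d
----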

==== Debugging Provisioning

To figure out what went wrong with provisioning, it usually helps to take a closer look at the composite.

[source,shell,subs="attributes"]
----
kubectl --as cluster-admin describe x{service} $COMPOSITE
----

If there are sync issues, there are usually events that point to the root cause.
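
You can also query those events directly; a sketch, assuming the composite is cluster-scoped so its events are not tied to the claim namespace:

[source,shell]
----
# Events reference the composite by name in their involvedObject field
kubectl --as cluster-admin get events -A --field-selector involvedObject.name=$COMPOSITE
----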

Furthermore, it can help to look at the `Object` resources created for this instance, or at the `releases.helm.crossplane.io` object associated with it.

[source,shell]
----
# List all Object resources belonging to this composite
kubectl --as cluster-admin get object -l crossplane.io/composite=$COMPOSITE
# Inspect a single Object from the listing above ($OBJECT_NAME is one of the names it printed)
kubectl --as cluster-admin get object $OBJECT_NAME
# List and describe the associated Helm release
kubectl --as cluster-admin get releases.helm.crossplane.io -l crossplane.io/composite=$COMPOSITE
kubectl --as cluster-admin describe releases.helm.crossplane.io -l crossplane.io/composite=$COMPOSITE
----

If any of them are not synced, describing them should point you in the right direction.
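
For example, to inspect one of the unsynced resources (`$OBJECT_NAME` stands in for a name from the listing above):

[source,shell]
----
# Shows the status conditions and recent events of the Object
kubectl --as cluster-admin describe object $OBJECT_NAME
----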

Finally, it might also be helpful to look at the logs of the various Crossplane components in the `syn-crossplane` namespace.
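
A minimal sketch for pulling those logs; the `app: crossplane` label selector is an assumption and may differ in your setup:

[source,shell]
----
# See which Crossplane components are running
kubectl --as cluster-admin -n syn-crossplane get pods
# Tail the logs of the core Crossplane pod(s)
kubectl --as cluster-admin -n syn-crossplane logs -l app=crossplane --tail=100
----
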
45 changes: 2 additions & 43 deletions docs/modules/ROOT/pages/runbooks/vshn-mariadb.adoc
@@ -24,49 +24,8 @@ A ticket alert should have a label `severity: warning`.
* A page alert means that the error rate is significantly higher than the objective.
Immediate action is necessary to not miss the objective.

-=== icon:bug[] Steps for debugging
-
-Failed probes can have a multitude of reasons, but in general there are two different kinds of issue clases.
-Either the instance itself is failing or provisioning or updating the instance failed.
-
-In any case, you should first figure out where the effected instance runs.
-The alert will provide you with three labels: `cluster_id`, `namespace`, and `name`.
-
-Connect to the Kubernetes cluster with the provided `cluster_id` and get the effected claim.
-
-[source,shell]
-----
-export NAMESPACE={{ namespace }}
-export NAME={{ name }}
-export COMPOSITE=$(kubectl -n $NAMESPACE get vshnmariadb $NAME -o jsonpath="{.spec.resourceRef.name}")
-kubectl -n $NAMESPACE get vshnmariadb $NAME
-----
-
-If the claim is not `SYNCED` this might indicate that there is an issue with provisioning.
-If it is synced there is most likely an issue with the instance itself, you can skip to the next subsection.
-
-==== Debugging Provisioning
-
-To figure out what went wrong with provisioning it usually helps to take a closer look at the composite.
-
-[source,shell]
-----
-kubectl --as cluster-admin describe xvshnmariadb $COMPOSITE
-----
-
-If there are sync issues there usually are events that point to the root cause of the issue.
-
-Further it can help to look at the `Object` resources that are created for this instance.
-
-[source,shell]
-----
-kubectl --as cluster-admin get object -l crossplane.io/composite=$COMPOSITE
-----
-
-If any of them are not synced, describing them should point you in the right direction.
-
-Finally, it might also be helpful to look at the logs of various crossplane components in namespace `syn-crossplane`.
+:service: vshnmariadb
+include::vshn-helm-debugging-partial.adoc[]

==== Debugging MariaDB Instance

45 changes: 2 additions & 43 deletions docs/modules/ROOT/pages/runbooks/vshn-minio.adoc
@@ -24,49 +24,8 @@ A ticket alert should have a label `severity: warning`.
* A page alert means that the error rate is significantly higher than the objective.
Immediate action is necessary to not miss the objective.

-=== icon:bug[] Steps for debugging
-
-Failed probes can have a multitude of reasons, but in general there are two different kinds of issue clases.
-Either the instance itself is failing or provisioning or updating the instance failed.
-
-In any case, you should first figure out where the effected instance runs.
-The alert will provide you with three labels: `cluster_id`, `namespace`, and `name`.
-
-Connect to the Kubernetes cluster with the provided `cluster_id` and get the effected claim.
-
-[source,shell]
-----
-export NAMESPACE={{ namespace }}
-export NAME={{ name }}
-export COMPOSITE=$(kubectl -n $NAMESPACE get vshnminio $NAME -o jsonpath="{.spec.resourceRef.name}")
-kubectl -n $NAMESPACE get vshnminio $NAME
-----
-
-If the claim is not `SYNCED` this might indicate that there is an issue with provisioning.
-If it is synced there is most likely an issue with the instance itself, you can skip to the next subsection.
-
-==== Debugging Provisioning
-
-To figure out what went wrong with provisioning it usually helps to take a closer look at the composite.
-
-[source,shell]
-----
-kubectl --as cluster-admin describe xvshnminio $COMPOSITE
-----
-
-If there are sync issues there usually are events that point to the root cause of the issue.
-
-Further it can help to look at the `Object` resources that are created for this instance.
-
-[source,shell]
-----
-kubectl --as cluster-admin get object -l crossplane.io/composite=$COMPOSITE
-----
-
-If any of them are not synced, describing them should point you in the right direction.
-
-Finally, it might also be helpful to look at the logs of various crossplane components in namespace `syn-crossplane`.
+:service: vshnminio
+include::vshn-helm-debugging-partial.adoc[]

==== Debugging MinIO Instance

45 changes: 2 additions & 43 deletions docs/modules/ROOT/pages/runbooks/vshn-redis.adoc
@@ -24,49 +24,8 @@ A ticket alert should have a label `severity: warning`.
* A page alert means that the error rate is significantly higher than the objective.
Immediate action is necessary to not miss the objective.

-=== icon:bug[] Steps for debugging
-
-Failed probes can have a multitude of reasons, but in general there are two different kinds of issue clases.
-Either the instance itself is failing or provisioning or updating the instance failed.
-
-In any case, you should first figure out where the effected instance runs.
-The alert will provide you with three labels: `cluster_id`, `namespace`, and `name`.
-
-Connect to the Kubernetes cluster with the provided `cluster_id` and get the effected claim.
-
-[source,shell]
-----
-export NAMESPACE={{ namespace }}
-export NAME={{ name }}
-export COMPOSITE=$(kubectl -n $NAMESPACE get vshnredis $NAME -o jsonpath="{.spec.resourceRef.name}")
-kubectl -n $NAMESPACE get vshnredis $NAME
-----
-
-If the claim is not `SYNCED` this might indicate that there is an issue with provisioning.
-If it is synced there is most likely an issue with the instance itself, you can skip to the next subsection.
-
-==== Debugging Provisioning
-
-To figure out what went wrong with provisioning it usually helps to take a closer look at the composite.
-
-[source,shell]
-----
-kubectl --as cluster-admin describe xvshnredis $COMPOSITE
-----
-
-If there are sync issues there usually are events that point to the root cause of the issue.
-
-Further it can help to look at the `Object` resources that are created for this instance.
-
-[source,shell]
-----
-kubectl --as cluster-admin get object -l crossplane.io/composite=$COMPOSITE
-----
-
-If any of them are not synced, describing them should point you in the right direction.
-
-Finally, it might also be helpful to look at the logs of various crossplane components in namespace `syn-crossplane`.
+:service: vshnredis
+include::vshn-helm-debugging-partial.adoc[]

==== Debugging Redis Instance

6 changes: 6 additions & 0 deletions docs/modules/ROOT/partials/nav.adoc
@@ -34,3 +34,9 @@
.Runbooks
* xref:runbooks/vshn-postgresql.adoc[]
* xref:runbooks/vshn-redis.adoc[]
+* xref:runbooks/vshn-minio.adoc[]
+* xref:runbooks/vshn-mariadb.adoc[]
+* xref:runbooks/vshn-generic.adoc[]
