Bump go 1.22.2->1.22.5 and add E2E tests #639

Merged Aug 5, 2024 (33 commits)

Changes from 7 commits

Commits
13ad396
E2E tests
carreter Jul 8, 2024
9fea054
Fix e2e tests
carreter Jul 8, 2024
dbe8e76
Bump go version and add e2e test runner
carreter Jul 3, 2024
29c2503
Vendor
carreter Jul 8, 2024
1cafd93
go mod tidy to vendor more dependencies
carreter Jul 3, 2024
2c7858a
Improve e2e tests
carreter Jul 8, 2024
af61fa0
Add e2e test for multiple servers and multiple agents
carreter Jul 8, 2024
28ebff1
Review comments
carreter Jul 17, 2024
6615f6a
Add kind-e2e workflow
carreter Jul 17, 2024
810059a
Exclude e2e tests from fast-test and test runners
carreter Jul 18, 2024
de25d5b
More stringent matching of e2e tests
carreter Jul 19, 2024
4e00652
Properly escape $ in makefile
carreter Jul 19, 2024
a7ca28e
Fix typo
carreter Jul 19, 2024
801565d
Fix empty REGISTRY variable
carreter Jul 19, 2024
bfea27c
Make kind-e2e run on different k8s versions
carreter Jul 19, 2024
2796ac4
Make kind-e2e workflow only build images once
carreter Jul 19, 2024
51313fd
Load prebuilt konnectivity images
carreter Jul 19, 2024
9b4df3c
Will this work? who knows
carreter Jul 19, 2024
36781ff
Here we go again!
carreter Jul 19, 2024
1152f81
Add e2e testing for http-connect mode
carreter Jul 22, 2024
a2c28db
Don't rebuild docker images on every kind-e2e run
carreter Jul 23, 2024
4204eff
Typo
carreter Jul 23, 2024
bca2467
Bump go from 1.22.3 to 1.22.5
carreter Jul 24, 2024
3d97cbe
Review nit
carreter Jul 24, 2024
1923987
Bump go toolchain to 1.22.5
carreter Jul 24, 2024
8aae4aa
Revert go toolchain to 1.22.4
carreter Jul 24, 2024
84c06a0
e2e refactor and add lease testing
carreter Jul 26, 2024
0e365ee
Switch out StatefulSet for Deployment
carreter Jul 29, 2024
bc020f4
Add README with info on tests
carreter Jul 29, 2024
55f34f6
Fix kind capitalization
carreter Jul 29, 2024
2ffb6d5
Fix konnectivity-client version
carreter Jul 29, 2024
ba1fba3
Restore toolchain directive
carreter Jul 29, 2024
8f39a84
Review comments
carreter Aug 5, 2024
6 changes: 5 additions & 1 deletion Makefile
@@ -24,7 +24,7 @@ ALL_ARCH ?= amd64 arm arm64 ppc64le s390x
# The output type could either be docker (local), or registry.
OUTPUT_TYPE ?= docker
GO_TOOLCHAIN ?= golang
- GO_VERSION ?= 1.22.2
+ GO_VERSION ?= 1.22.3
BASEIMAGE ?= gcr.io/distroless/static-debian11:nonroot

ifeq ($(GOPATH),)
@@ -80,6 +80,10 @@ test:
test-integration: build
go test -mod=vendor -race ./tests -agent-path $(PWD)/bin/proxy-agent

.PHONY: test-e2e
test-e2e: docker-build
go test -mod=vendor ./e2e -agent-image ${AGENT_FULL_IMAGE}-$(TARGETARCH):${TAG} -server-image ${SERVER_FULL_IMAGE}-$(TARGETARCH):${TAG}
Contributor:

is there a reason not to run with -race enabled?

Contributor Author:

Nope, just didn't know about it. Added the -race flag.


## --------------------------------------
## Binaries
## --------------------------------------
128 changes: 128 additions & 0 deletions e2e/main_test.go
Contributor:

Could you add a short README.md to the e2e directory that goes over what main_test.go sets up (RBAC and service) and what each specific test is expected to set up?

Contributor Author:

On it!

Contributor Author:

Done, could you take a look?

Contributor:

LGTM!

@@ -0,0 +1,128 @@
package e2e

import (
"context"
"flag"
"fmt"
"log"
"net/http"
"os"
"testing"

io_prometheus_client "github.com/prometheus/client_model/go"
"github.com/prometheus/common/expfmt"
"k8s.io/client-go/kubernetes/scheme"
"sigs.k8s.io/e2e-framework/pkg/env"
"sigs.k8s.io/e2e-framework/pkg/envconf"
"sigs.k8s.io/e2e-framework/pkg/envfuncs"
"sigs.k8s.io/e2e-framework/support/kind"
)

var (
testenv env.Environment
agentImage = flag.String("agent-image", "", "The proxy agent's docker image.")
serverImage = flag.String("server-image", "", "The proxy server's docker image.")
)
Contributor:

not a blocking change but why not put it together in a config struct ?

Contributor Author:

How do you mean? I did it this way so it could be passed in easily from the command line (e.g. by the Makefile).
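
For illustration, the grouping the reviewer has in mind might look like this sketch (hypothetical names; the fields still bind to the same command-line flags, so nothing changes for the Makefile invocation):

// Hypothetical grouping of the e2e flags into one struct. flag.StringVar
// keeps the command-line interface identical to the two package-level vars.
type e2eFlags struct {
	AgentImage  string
	ServerImage string
}

var testFlags e2eFlags

func init() {
	flag.StringVar(&testFlags.AgentImage, "agent-image", "", "The proxy agent's docker image.")
	flag.StringVar(&testFlags.ServerImage, "server-image", "", "The proxy server's docker image.")
}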


func TestMain(m *testing.M) {
flag.Parse()
if *agentImage == "" {
log.Fatalf("must provide agent image with -agent-image")
}
if *serverImage == "" {
log.Fatalf("must provide server image with -server-image")
}

if err := scheme.AddToScheme(scheme.Scheme); err != nil {
log.Fatalf("could not add client-go types to scheme: %v", err)
}

testenv = env.New()
kindClusterName := "kind-test"
kindCluster := kind.NewCluster(kindClusterName)

testenv.Setup(
envfuncs.CreateCluster(kindCluster, kindClusterName),
envfuncs.LoadImageToCluster(kindClusterName, *agentImage),
envfuncs.LoadImageToCluster(kindClusterName, *serverImage),
func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {
Contributor:

thoughts on making this a named renderAndApplyManifests() func?

Contributor Author:

Done!

client := cfg.Client()

// Render agent RBAC and Service templates.
agentServiceAccount, _, err := renderAgentTemplate("serviceaccount.yaml", struct{}{})
Contributor:

we should reuse the kind templates in the examples directory. That way we won't have to update templates in two different places, and it would also mean the examples get verified whenever e2e passes.

Contributor Author:

Good point. In that case, where should the logic for rendering the templates go?

Contributor:

I see your point. We add a script in the examples/kind directory that renders the templates and creates the cluster for folks to try out from the documentation,

but in the e2e tests we take the templates as-is and render and apply them in code.

Contributor Author:

Exactly. Do you still want me to merge the two of them, or is it ok to keep them separate?

Contributor:

we can do that as a separate PR if this is causing blockers. Preference is to use the examples/kind.

Contributor Author:

Separate PR would be best as this blocks #635. I'll open an issue.

if err != nil {
return ctx, err
}
agentClusterRole, _, err := renderAgentTemplate("clusterrole.yaml", struct{}{})
if err != nil {
return ctx, err
}
agentClusterRoleBinding, _, err := renderAgentTemplate("clusterrolebinding.yaml", struct{}{})
if err != nil {
return ctx, err
}
agentService, _, err := renderAgentTemplate("service.yaml", struct{}{})
if err != nil {
return ctx, err
}

// Submit agent RBAC templates to k8s.
err = client.Resources().Create(ctx, agentServiceAccount)
if err != nil {
return ctx, err
}
err = client.Resources().Create(ctx, agentClusterRole)
if err != nil {
return ctx, err
}
err = client.Resources().Create(ctx, agentClusterRoleBinding)
if err != nil {
return ctx, err
}
err = client.Resources().Create(ctx, agentService)
if err != nil {
return ctx, err
}

// Render server RBAC and Service templates.
serverClusterRoleBinding, _, err := renderServerTemplate("clusterrolebinding.yaml", struct{}{})
if err != nil {
return ctx, err
}
serverService, _, err := renderServerTemplate("service.yaml", struct{}{})
if err != nil {
return ctx, err
}

// Submit server templates to k8s.
err = client.Resources().Create(ctx, serverClusterRoleBinding)
if err != nil {
return ctx, err
}
err = client.Resources().Create(ctx, serverService)
if err != nil {
return ctx, err
}

return ctx, nil
},
)

testenv.Finish(envfuncs.DestroyCluster(kindClusterName))

os.Exit(testenv.Run(m))
}
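
For reference, a minimal sketch of the renderAndApplyManifests() helper the review asked for, assuming the render helpers return an object accepted by client.Resources().Create() as in the setup func above (the final commit may differ):

// Sketch only: the anonymous setup func above, factored into a named func
// that can be passed directly to testenv.Setup().
func renderAndApplyManifests(ctx context.Context, cfg *envconf.Config) (context.Context, error) {
	client := cfg.Client()
	for _, tmpl := range []string{"serviceaccount.yaml", "clusterrole.yaml", "clusterrolebinding.yaml", "service.yaml"} {
		obj, _, err := renderAgentTemplate(tmpl, struct{}{})
		if err != nil {
			return ctx, err
		}
		if err := client.Resources().Create(ctx, obj); err != nil {
			return ctx, err
		}
	}
	for _, tmpl := range []string{"clusterrolebinding.yaml", "service.yaml"} {
		obj, _, err := renderServerTemplate(tmpl, struct{}{})
		if err != nil {
			return ctx, err
		}
		if err := client.Resources().Create(ctx, obj); err != nil {
			return ctx, err
		}
	}
	return ctx, nil
}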

func getMetrics(url string) (map[string]*io_prometheus_client.MetricFamily, error) {
Contributor:

thoughts on having this return a friendlier type? Current usage always fetches a gauge so we could have this be getMetrics(url, name) (int, error). That would keep the int(connectionsMetric.GetMetric()[0].GetGauge().GetValue()) complexity all in this func

Contributor Author:

Created a getMetricsGaugeValue() func

resp, err := http.Get(url)
if err != nil {
return nil, fmt.Errorf("could not get metrics: %w", err)
}
defer resp.Body.Close()

metricsParser := &expfmt.TextParser{}
metricsFamilies, err := metricsParser.TextToMetricFamilies(resp.Body)
if err != nil {
return nil, fmt.Errorf("could not parse metrics: %w", err)
}

return metricsFamilies, nil
}
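
The helper the author mentions creating in the thread above might look roughly like this (the name getMetricsGaugeValue comes from the reply; the signature is an assumption):

// Sketch of the gauge helper: wraps getMetrics and hides the
// GetMetric()[0].GetGauge().GetValue() chain behind an (int, error) API.
func getMetricsGaugeValue(url string, name string) (int, error) {
	metricsFamilies, err := getMetrics(url)
	if err != nil {
		return 0, err
	}
	family, exists := metricsFamilies[name]
	if !exists {
		return 0, fmt.Errorf("metric %q not found", name)
	}
	metrics := family.GetMetric()
	if len(metrics) == 0 {
		return 0, fmt.Errorf("metric %q has no samples", name)
	}
	return int(metrics[0].GetGauge().GetValue()), nil
}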
163 changes: 163 additions & 0 deletions e2e/multiserver_multiagent_test.go
@@ -0,0 +1,163 @@
package e2e

import (
"context"
"fmt"
"strconv"
"testing"
"time"

corev1 "k8s.io/api/core/v1"
"sigs.k8s.io/e2e-framework/klient/k8s/resources"
"sigs.k8s.io/e2e-framework/klient/wait"
"sigs.k8s.io/e2e-framework/klient/wait/conditions"
"sigs.k8s.io/e2e-framework/pkg/envconf"
"sigs.k8s.io/e2e-framework/pkg/features"
)

func TestMultiServer_MultiAgent_StaticCount(t *testing.T) {
serverServiceHost := "konnectivity-server.kube-system.svc.cluster.local"
agentServiceHost := "konnectivity-agent.kube-system.svc.cluster.local"
adminPort := 8093
replicas := 3

serverStatefulSetCfg := StatefulSetConfig{
Replicas: 3,
Contributor:

nit: use replicas instead of 3 here and below in the agent config

Contributor Author:

Done!

Image: *serverImage,
Args: []KeyValue{
{Key: "log-file", Value: "/var/log/konnectivity-server.log"},
{Key: "logtostderr", Value: "true"},
{Key: "log-file-max-size", Value: "0"},
{Key: "uds-name", Value: "/etc/kubernetes/konnectivity-server/konnectivity-server.socket"},
{Key: "delete-existing-uds-file"},
{Key: "cluster-cert", Value: "/etc/kubernetes/pki/apiserver.crt"},
{Key: "cluster-key", Value: "/etc/kubernetes/pki/apiserver.key"},
{Key: "server-port", Value: "8090"},
{Key: "agent-port", Value: "8091"},
{Key: "health-port", Value: "8092"},
{Key: "admin-port", Value: strconv.Itoa(adminPort)},
{Key: "keepalive-time", Value: "1h"},
{Key: "mode", Value: "grpc"},
{Key: "agent-namespace", Value: "kube-system"},
{Key: "agent-service-account", Value: "konnectivity-agent"},
{Key: "kubeconfig", Value: "/etc/kubernetes/admin.conf"},
{Key: "authentication-audience", Value: "system:konnectivity-server"},
{Key: "server-count", Value: "1"},
},
}
serverStatefulSet, _, err := renderServerTemplate("statefulset.yaml", serverStatefulSetCfg)
if err != nil {
t.Fatalf("could not render server deployment: %v", err)
}

agentStatefulSetConfig := StatefulSetConfig{
Replicas: 3,
Image: *agentImage,
Args: []KeyValue{
{Key: "logtostderr", Value: "true"},
{Key: "ca-cert", Value: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"},
{Key: "proxy-server-host", Value: serverServiceHost},
{Key: "proxy-server-port", Value: "8091"},
{Key: "sync-interval", Value: "1s"},
{Key: "sync-interval-cap", Value: "10s"},
{Key: "sync-forever"},
{Key: "probe-interval", Value: "1s"},
{Key: "service-account-token-path", Value: "/var/run/secrets/tokens/konnectivity-agent-token"},
{Key: "server-count", Value: "3"},
},
}
agentStatefulSet, _, err := renderAgentTemplate("statefulset.yaml", agentStatefulSetConfig)
if err != nil {
t.Fatalf("could not render agent deployment: %v", err)
}

feature := features.New("konnectivity server and agent stateful sets with 3 replicas each")
feature.Setup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
client := cfg.Client()
err := client.Resources().Create(ctx, serverStatefulSet)
if err != nil {
t.Fatalf("could not create server deployment: %v", err)
}

err = client.Resources().Create(ctx, agentStatefulSet)
if err != nil {
t.Fatalf("could not create agent deployment: %v", err)
}

err = wait.For(
conditions.New(client.Resources()).DeploymentAvailable(agentStatefulSet.GetName(), agentStatefulSet.GetNamespace()),
wait.WithTimeout(1*time.Minute),
wait.WithInterval(10*time.Second),
)
if err != nil {
t.Fatalf("waiting for agent deployment failed: %v", err)
}

err = wait.For(
conditions.New(client.Resources()).DeploymentAvailable(serverStatefulSet.GetName(), serverStatefulSet.GetNamespace()),
wait.WithTimeout(1*time.Minute),
wait.WithInterval(10*time.Second),
)
if err != nil {
t.Fatalf("waiting for server deployment failed: %v", err)
}

return ctx
})
feature.Assess("all servers connected to all clients", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
client := cfg.Client()

serverPods := &corev1.PodList{}
err := client.Resources().List(ctx, serverPods, resources.WithLabelSelector("k8s-app=konnectivity-server"))
if err != nil {
t.Fatalf("couldn't get server pods: %v", err)
}

for _, serverPod := range serverPods.Items {

// Pod DNS name assumes a headless Service in front of the StatefulSet.
metricsFamilies, err := getMetrics(fmt.Sprintf("http://%v.%v:%v/metrics", serverPod.Name, serverServiceHost, adminPort))
if err != nil {
t.Fatalf("couldn't get server metrics for pod %v", serverPod.Name)
}
connectionsMetric, exists := metricsFamilies["konnectivity_network_proxy_server_ready_backend_connections"]
if !exists {
t.Fatalf("couldn't find number of ready backend connections in metrics")
}

numConnections := int(connectionsMetric.GetMetric()[0].GetGauge().GetValue())
if numConnections != replicas {
t.Errorf("incorrect number of connected agents (want: %v, got: %v)", replicas, numConnections)
}
}

return ctx
})
feature.Assess("all agents connected to all servers", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
client := cfg.Client()

agentPods := &corev1.PodList{}
err := client.Resources().List(ctx, agentPods, resources.WithLabelSelector("k8s-app=konnectivity-agent"))
if err != nil {
t.Fatalf("couldn't get agent pods: %v", err)
}

for _, agentPod := range agentPods.Items {

metricsFamilies, err := getMetrics(fmt.Sprintf("http://%v.%v:%v/metrics", agentPod.Name, agentServiceHost, adminPort))
Contributor:

thoughts on pulling this into an agentsAreConnected() func? That way we could reuse it in singleserver_singleagent_test as well

Contributor Author:

Done!

if err != nil {
t.Fatalf("couldn't get agent metrics for pod %v", agentPod.Name)
}
connectionsMetric, exists := metricsFamilies["konnectivity_network_proxy_agent_open_server_connections"]
if !exists {
t.Fatalf("couldn't find number of ready server connections in metrics")
}

numConnections := int(connectionsMetric.GetMetric()[0].GetGauge().GetValue())
if numConnections != replicas {
t.Errorf("incorrect number of connected servers (want: %v, got: %v)", replicas, numConnections)
}
}

return ctx
})
}
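
A rough shape for the agentsAreConnected() helper suggested in the last thread, reusing the hypothetical getMetricsGaugeValue() sketch above (the actual refactor in later commits may differ):

// Hypothetical shared assessment: every agent pod reports the expected
// number of open server connections on its metrics endpoint. Could be
// reused from singleserver_singleagent_test as well.
func agentsAreConnected(expectedServers int, agentServiceHost string, adminPort int) features.Func {
	return func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
		agentPods := &corev1.PodList{}
		err := cfg.Client().Resources().List(ctx, agentPods, resources.WithLabelSelector("k8s-app=konnectivity-agent"))
		if err != nil {
			t.Fatalf("couldn't get agent pods: %v", err)
		}
		for _, pod := range agentPods.Items {
			got, err := getMetricsGaugeValue(
				fmt.Sprintf("http://%v.%v:%v/metrics", pod.Name, agentServiceHost, adminPort),
				"konnectivity_network_proxy_agent_open_server_connections",
			)
			if err != nil {
				t.Fatalf("couldn't read agent metrics for pod %v: %v", pod.Name, err)
			}
			if got != expectedServers {
				t.Errorf("incorrect number of connected servers (want: %v, got: %v)", expectedServers, got)
			}
		}
		return ctx
	}
}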