Bump go 1.22.2->1.22.5 and add E2E tests #639
Review comment: Could you add a short README.md to the e2e directory that goes over what maint_test.go sets up (RBAC and service) and what each specific test is expected to set up?
Reply: On it!
Reply: Done, could you take a look?
Review comment: LGTM!
@@ -0,0 +1,128 @@
package e2e

import (
	"context"
	"flag"
	"fmt"
	"log"
	"net/http"
	"os"
	"testing"

	io_prometheus_client "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/expfmt"
	"k8s.io/client-go/kubernetes/scheme"
	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/envfuncs"
	"sigs.k8s.io/e2e-framework/support/kind"
)

var (
	testenv     env.Environment
	agentImage  = flag.String("agent-image", "", "The proxy agent's docker image.")
	serverImage = flag.String("server-image", "", "The proxy server's docker image.")
)
Review comment: Not a blocking change, but why not put these together in a config struct?
Reply: How do you mean? I did it this way so it could be passed in easily from the command line (e.g. by the ...).
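As an aside, a minimal sketch of the config-struct idea, still populated from the same command-line flags. The e2eConfig and testConfig names and the init wiring are assumptions for illustration, not code from this PR:

// Hypothetical sketch only: group the image flags in one struct while still
// registering them with the flag package, so they are passed on the command
// line exactly as before.
type e2eConfig struct {
	AgentImage  string
	ServerImage string
}

var testConfig e2eConfig

func init() {
	flag.StringVar(&testConfig.AgentImage, "agent-image", "", "The proxy agent's docker image.")
	flag.StringVar(&testConfig.ServerImage, "server-image", "", "The proxy server's docker image.")
}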
func TestMain(m *testing.M) {
	flag.Parse()
	if *agentImage == "" {
		log.Fatalf("must provide agent image with -agent-image")
	}
	if *serverImage == "" {
		log.Fatalf("must provide server image with -server-image")
	}

	scheme.AddToScheme(scheme.Scheme)

	testenv = env.New()
	kindClusterName := "kind-test"
	kindCluster := kind.NewCluster(kindClusterName)

	testenv.Setup(
		envfuncs.CreateCluster(kindCluster, kindClusterName),
		envfuncs.LoadImageToCluster(kindClusterName, *agentImage),
		envfuncs.LoadImageToCluster(kindClusterName, *serverImage),
		func(ctx context.Context, cfg *envconf.Config) (context.Context, error) {
Review comment: Thoughts on making this a named function?
Reply: Done!
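For illustration, a rough sketch of how the named version might look; the deployAgentAndServerResources name is made up here, and the only real constraint is the func(context.Context, *envconf.Config) (context.Context, error) signature that testenv.Setup expects:

// Hypothetical: the anonymous setup closure extracted into a named function
// and registered alongside the other setup steps.
func deployAgentAndServerResources(ctx context.Context, cfg *envconf.Config) (context.Context, error) {
	// ... render and create the agent/server RBAC and Service objects here ...
	return ctx, nil
}

// testenv.Setup(
// 	envfuncs.CreateCluster(kindCluster, kindClusterName),
// 	envfuncs.LoadImageToCluster(kindClusterName, *agentImage),
// 	envfuncs.LoadImageToCluster(kindClusterName, *serverImage),
// 	deployAgentAndServerResources,
// )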
			client := cfg.Client()

			// Render agent RBAC and Service templates.
			agentServiceAccount, _, err := renderAgentTemplate("serviceaccount.yaml", struct{}{})
Review comment: We should reuse the kind templates in the examples directory. That way we won't have to update the templates in two different places, and it would also mean that if the e2e tests pass, the example templates are known to work.
Reply: Good point. In that case, where should the logic for rendering the templates go?
Review comment: I see your point: we add a script in the examples/kind directory that renders the templates and creates the cluster for folks to try out from the documentation, but in the e2e tests we take the templates as-is and render and apply them in code.
Reply: Exactly. Do you still want me to merge the two of them, or is it OK to keep them separate?
Review comment: We can do that as a separate PR if this is causing blockers. Preference is to use examples/kind.
Reply: A separate PR would be best, as this blocks #635. I'll open an issue.
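Not part of this diff, but a minimal sketch of the "render and apply in code" approach under the assumption that the manifests live under examples/kind; the renderTemplate name, the path argument, and the decoding choice are hypothetical:

// Hypothetical helper: execute a Go template taken from the examples/kind
// directory and decode the rendered YAML/JSON into a typed object via the
// client-go scheme. (Would additionally need "bytes", "os", "text/template",
// and "k8s.io/apimachinery/pkg/runtime" imports.)
func renderTemplate(path string, data interface{}) (runtime.Object, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	tmpl, err := template.New(path).Parse(string(raw))
	if err != nil {
		return nil, err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return nil, err
	}
	obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(buf.Bytes(), nil, nil)
	return obj, err
}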
			if err != nil {
				return nil, err
			}
			agentClusterRole, _, err := renderAgentTemplate("clusterrole.yaml", struct{}{})
			if err != nil {
				return nil, err
			}
			agentClusterRoleBinding, _, err := renderAgentTemplate("clusterrolebinding.yaml", struct{}{})
			if err != nil {
				return ctx, err
			}
			agentService, _, err := renderAgentTemplate("service.yaml", struct{}{})
			if err != nil {
				return ctx, err
			}

			// Submit agent RBAC templates to k8s.
			err = client.Resources().Create(ctx, agentServiceAccount)
			if err != nil {
				return ctx, err
			}
			err = client.Resources().Create(ctx, agentClusterRole)
			if err != nil {
				return ctx, err
			}
			err = client.Resources().Create(ctx, agentClusterRoleBinding)
			if err != nil {
				return ctx, err
			}
			err = client.Resources().Create(ctx, agentService)
			if err != nil {
				return ctx, err
			}

			// Render server RBAC and Service templates.
			serverClusterRoleBinding, _, err := renderServerTemplate("clusterrolebinding.yaml", struct{}{})
			if err != nil {
				return ctx, err
			}
			serverService, _, err := renderServerTemplate("service.yaml", struct{}{})
			if err != nil {
				return ctx, err
			}

			// Submit server templates to k8s.
			err = client.Resources().Create(ctx, serverClusterRoleBinding)
			if err != nil {
				return ctx, err
			}
			err = client.Resources().Create(ctx, serverService)
			if err != nil {
				return ctx, err
			}

			return ctx, nil
		},
	)

	testenv.Finish(envfuncs.DestroyCluster(kindClusterName))

	os.Exit(testenv.Run(m))
}

func getMetrics(url string) (map[string]*io_prometheus_client.MetricFamily, error) {
Review comment: Thoughts on having this return a friendlier type? Current usage always fetches a gauge, so we could have this be ...
Reply: Created a ...
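A sketch of what such a friendlier wrapper could look like on top of getMetrics; the getMetricsGaugeValue name is an assumption, and the helper actually added in the PR is not shown in this excerpt:

// Hypothetical convenience wrapper: since current callers only read gauges,
// return the value of a single named gauge from the scraped metrics.
func getMetricsGaugeValue(url, name string) (int, error) {
	metricsFamilies, err := getMetrics(url)
	if err != nil {
		return 0, err
	}
	family, exists := metricsFamilies[name]
	if !exists {
		return 0, fmt.Errorf("metric %q not found", name)
	}
	return int(family.GetMetric()[0].GetGauge().GetValue()), nil
}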
	resp, err := http.Get(url)
	if err != nil {
		return nil, fmt.Errorf("could not get metrics: %w", err)
	}

	metricsParser := &expfmt.TextParser{}
	metricsFamilies, err := metricsParser.TextToMetricFamilies(resp.Body)
	defer resp.Body.Close()
	if err != nil {
		return nil, fmt.Errorf("could not parse metrics: %w", err)
	}

	return metricsFamilies, nil
}
@@ -0,0 +1,163 @@
package e2e

import (
	"context"
	"fmt"
	"strconv"
	"testing"
	"time"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/e2e-framework/klient/k8s/resources"
	"sigs.k8s.io/e2e-framework/klient/wait"
	"sigs.k8s.io/e2e-framework/klient/wait/conditions"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/features"
)

func TestMultiServer_MultiAgent_StaticCount(t *testing.T) {
	serverServiceHost := "konnectivity-server.kube-system.svc.cluster.local"
	agentServiceHost := "konnectivity-agent.kube-system.svc.cluster.local"
	adminPort := 8093
	replicas := 3

	serverStatefulSetCfg := StatefulSetConfig{
		Replicas: 3,
Review comment: nit: use the replicas variable here?
Reply: Done!
		Image: *serverImage,
		Args: []KeyValue{
			{Key: "log-file", Value: "/var/log/konnectivity-server.log"},
			{Key: "logtostderr", Value: "true"},
			{Key: "log-file-max-size", Value: "0"},
			{Key: "uds-name", Value: "/etc/kubernetes/konnectivity-server/konnectivity-server.socket"},
			{Key: "delete-existing-uds-file"},
			{Key: "cluster-cert", Value: "/etc/kubernetes/pki/apiserver.crt"},
			{Key: "cluster-key", Value: "/etc/kubernetes/pki/apiserver.key"},
			{Key: "server-port", Value: "8090"},
			{Key: "agent-port", Value: "8091"},
			{Key: "health-port", Value: "8092"},
			{Key: "admin-port", Value: strconv.Itoa(adminPort)},
			{Key: "keepalive-time", Value: "1h"},
			{Key: "mode", Value: "grpc"},
			{Key: "agent-namespace", Value: "kube-system"},
			{Key: "agent-service-account", Value: "konnectivity-agent"},
			{Key: "kubeconfig", Value: "/etc/kubernetes/admin.conf"},
			{Key: "authentication-audience", Value: "system:konnectivity-server"},
			{Key: "server-count", Value: "1"},
		},
	}
	serverStatefulSet, _, err := renderServerTemplate("statefulset.yaml", serverStatefulSetCfg)
	if err != nil {
		t.Fatalf("could not render server deployment: %v", err)
	}

	agentStatefulSetConfig := StatefulSetConfig{
		Replicas: 3,
		Image:    *agentImage,
		Args: []KeyValue{
			{Key: "logtostderr", Value: "true"},
			{Key: "ca-cert", Value: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"},
			{Key: "proxy-server-host", Value: serverServiceHost},
			{Key: "proxy-server-port", Value: "8091"},
			{Key: "sync-interval", Value: "1s"},
			{Key: "sync-interval-cap", Value: "10s"},
			{Key: "sync-forever"},
			{Key: "probe-interval", Value: "1s"},
			{Key: "service-account-token-path", Value: "/var/run/secrets/tokens/konnectivity-agent-token"},
			{Key: "server-count", Value: "3"},
		},
	}
	agentStatefulSet, _, err := renderAgentTemplate("statefulset.yaml", agentStatefulSetConfig)
	if err != nil {
		t.Fatalf("could not render agent deployment: %v", err)
	}

	feature := features.New("konnectivity server and agent stateful set with single replica for each")
	feature.Setup(func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
		client := cfg.Client()
		err := client.Resources().Create(ctx, serverStatefulSet)
		if err != nil {
			t.Fatalf("could not create server deployment: %v", err)
		}

		err = client.Resources().Create(ctx, agentStatefulSet)
		if err != nil {
			t.Fatalf("could not create agent deployment: %v", err)
		}

		err = wait.For(
			conditions.New(client.Resources()).DeploymentAvailable(agentStatefulSet.GetName(), agentStatefulSet.GetNamespace()),
			wait.WithTimeout(1*time.Minute),
			wait.WithInterval(10*time.Second),
		)
		if err != nil {
			t.Fatalf("waiting for agent deployment failed: %v", err)
		}

		err = wait.For(
			conditions.New(client.Resources()).DeploymentAvailable(serverStatefulSet.GetName(), serverStatefulSet.GetNamespace()),
			wait.WithTimeout(1*time.Minute),
			wait.WithInterval(10*time.Second),
		)
		if err != nil {
			t.Fatalf("waiting for server deployment failed: %v", err)
		}

		return ctx
	})
	feature.Assess("all servers connected to all clients", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
		client := cfg.Client()
		// List needs a non-nil list object to populate.
		serverPods := &corev1.PodList{}
		err := client.Resources().List(ctx, serverPods, resources.WithLabelSelector("k8s-app=konnectivity-server"))
		if err != nil {
			t.Fatalf("couldn't get server pods: %v", err)
		}
		for _, serverPod := range serverPods.Items {
			metricsFamilies, err := getMetrics(fmt.Sprintf("%v-%v:%v/metrics", serverPod.Name, serverServiceHost, adminPort))
			if err != nil {
				t.Fatalf("couldn't get server metrics for pod %v", serverPod.Name)
			}
			connectionsMetric, exists := metricsFamilies["konnectivity_network_proxy_server_ready_backend_connections"]
			if !exists {
				t.Fatalf("couldn't find number of ready backend connections in metrics")
			}

			numConnections := int(connectionsMetric.GetMetric()[0].GetGauge().GetValue())
			if numConnections != replicas {
				t.Errorf("incorrect number of connected agents (want: %v, got: %v)", replicas, numConnections)
			}
		}

		return ctx
	})
	feature.Assess("all agents connected to all servers", func(ctx context.Context, t *testing.T, cfg *envconf.Config) context.Context {
		client := cfg.Client()
		// As above, List needs a non-nil list object to populate.
		agentPods := &corev1.PodList{}
		err := client.Resources().List(ctx, agentPods, resources.WithLabelSelector("k8s-app=konnectivity-agent"))
		if err != nil {
			t.Fatalf("couldn't get agent pods: %v", err)
		}
		for _, agentPod := range agentPods.Items {
			metricsFamilies, err := getMetrics(fmt.Sprintf("%v-%v:%v/metrics", agentPod.Name, agentServiceHost, adminPort))
Review comment: Thought on pulling this into a helper?
Reply: Done!
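For illustration, such a helper might look like the sketch below. The getPodMetrics name, the pod.service host form, and the explicit http:// scheme are assumptions on my part rather than code from this PR:

// Hypothetical helper: build the metrics URL for a single pod addressed
// through its service and fetch that pod's metric families.
func getPodMetrics(podName, serviceHost string, port int) (map[string]*io_prometheus_client.MetricFamily, error) {
	url := fmt.Sprintf("http://%s.%s:%d/metrics", podName, serviceHost, port)
	return getMetrics(url)
}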
			if err != nil {
				t.Fatalf("couldn't get agent metrics for pod %v", agentPod.Name)
			}
			connectionsMetric, exists := metricsFamilies["konnectivity_network_proxy_agent_open_server_connections"]
			if !exists {
				t.Fatalf("couldn't find number of ready server connections in metrics")
			}

			numConnections := int(connectionsMetric.GetMetric()[0].GetGauge().GetValue())
			if numConnections != replicas {
				t.Errorf("incorrect number of connected servers (want: %v, got: %v)", replicas, numConnections)
			}
		}

		return ctx
	})
}
Review comment: Is there a reason not to run with -race enabled?
Reply: Nope, just didn't know about it. Added the -race flag.
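For context, -agent-image and -server-image are the flags defined in the test binary above, so with the race detector the suite would presumably be invoked along the lines of go test -race -agent-image=<agent image> -server-image=<server image> against the e2e package; the exact Makefile or CI wiring is not shown in this excerpt.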