A Ruby client for the Kubernetes REST API. The client supports GET, POST, PUT, and DELETE on all the entities available in Kubernetes, in both the core and group APIs. The client currently supports Kubernetes REST API version v1. To learn more about groups and versions in Kubernetes, refer to the k8s docs.
If you use Kubeclient::Config, all gem versions <= v4.9.3 can return an incorrect ssl_options[:verify_ssl], allowing MITM attacks on your connection and thereby stealing your cluster credentials. See ManageIQ#554 for details and which versions got a fix.
Add this line to your application's Gemfile:
gem 'kubeclient'
And then execute:
bundle
Or install it yourself as:
gem install kubeclient
Initialize the client:
client = Kubeclient::Client.new('http://localhost:8080/api', 'v1')
For a group API:
client = Kubeclient::Client.new('http://localhost:8080/apis/batch', 'v1')
Another option is to initialize the client with a URI object:
uri = URI::HTTP.build(host: "somehostname", port: 8080)
client = Kubeclient::Client.new(uri, 'v1')
It is also possible to use HTTPS and configure SSL with:
ssl_options = {
client_cert: OpenSSL::X509::Certificate.new(File.read('/path/to/client.crt')),
client_key: OpenSSL::PKey::RSA.new(File.read('/path/to/client.key')),
ca_file: '/path/to/ca.crt',
verify_ssl: OpenSSL::SSL::VERIFY_PEER
}
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', ssl_options: ssl_options
)
As an alternative to the ca_file, it's possible to use the cert_store:
cert_store = OpenSSL::X509::Store.new
cert_store.add_cert(OpenSSL::X509::Certificate.new(ca_cert_data))
ssl_options = {
cert_store: cert_store,
verify_ssl: OpenSSL::SSL::VERIFY_PEER
}
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', ssl_options: ssl_options
)
For testing and development purposes you can disable the SSL check with:
ssl_options = { verify_ssl: OpenSSL::SSL::VERIFY_NONE }
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', ssl_options: ssl_options
)
If you are using basic authentication or bearer tokens as described here then you can specify one of the following:
auth_options = {
username: 'username',
password: 'password'
}
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', auth_options: auth_options
)
or (fixed token; if it expires it's up to you to create a new Client object):
auth_options = {
bearer_token: 'MDExMWJkMjItOWY1Ny00OGM5LWJlNDEtMjBiMzgxODkxYzYz'
}
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', auth_options: auth_options
)
or (will automatically re-read the token if the file is updated):
auth_options = {
bearer_token_file: '/path/to/token_file'
}
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', auth_options: auth_options
)
The Faraday HTTP client used by kubeclient allows configuration of custom middlewares. For example, you may want to add instrumentation, or add custom retry options. Kubeclient provides a hook for injecting these custom middlewares into the request/response chain, via Client#configure_faraday(&block). This provides an access point to the Faraday::Connection object during initialization. As an example, the following code adds retry options to client requests:
client = Kubeclient::Client.new('http://localhost:8080/api', 'v1')
retry_options = { max: 2, interval: 0.05, interval_randomness: 0.5, backoff_factor: 2 }
client.configure_faraday { |conn| conn.request(:retry, retry_options) }
For more examples and documentation, consult the Faraday docs.
Note that certain middlewares (e.g. :raise_error) are always included since Kubeclient is dependent on their behaviour to function correctly.
The recommended way to locate the API server within the pod is with the kubernetes.default.svc DNS name, which resolves to a Service IP which in turn will be routed to an API server.
The recommended way to authenticate to the API server is with a service account. kube-system associates a pod with a service account, and a bearer token for that service account is placed into the filesystem tree of each container in that pod at /var/run/secrets/kubernetes.io/serviceaccount/token.
If available, a certificate bundle is placed into the filesystem tree of each container at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, and should be used to verify the serving certificate of the API server.
For example:
auth_options = {
bearer_token_file: '/var/run/secrets/kubernetes.io/serviceaccount/token'
}
ssl_options = {}
if File.exist?("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
  ssl_options[:ca_file] = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
end
client = Kubeclient::Client.new(
'https://kubernetes.default.svc',
'v1',
auth_options: auth_options,
ssl_options: ssl_options
)
Finally, the default namespace to be used for namespaced API operations is placed in a file at /var/run/secrets/kubernetes.io/serviceaccount/namespace in each container. It is recommended that you use this namespace when issuing API commands below.
namespace = File.read('/var/run/secrets/kubernetes.io/serviceaccount/namespace')
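For example, a minimal sketch (reusing the client and namespace variables from the snippets above) of scoping namespaced calls to that namespace:
pods = client.get_pods(namespace: namespace)
services = client.get_services(namespace: namespace)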
You can find information about tokens in this guide and in this reference.
You can also use kubeclient with non-blocking sockets such as Celluloid::IO, see here for details. For example:
require 'celluloid/io'
socket_options = {
socket_class: Celluloid::IO::TCPSocket,
ssl_socket_class: Celluloid::IO::SSLSocket
}
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', socket_options: socket_options
)
This affects only .watch_* sockets, not one-off actions like .get_*, .delete_*, etc.
You can also use kubeclient with an HTTP proxy server such as tinyproxy. It can be entered as a string or a URI object. For example:
proxy_uri = URI::HTTP.build(host: "myproxyhost", port: 8443)
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', http_proxy_uri: proxy_uri
)
You can optionally disallow redirection with kubeclient. For example:
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', http_max_redirects: 0
)
Watching configures the socket to never time out by default (however, sooner or later all watches terminate).
One-off actions like .get_*, .delete_* have a configurable timeout:
timeouts = {
open: 10, # unit is seconds
read: nil # nil means never time out
}
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', timeouts: timeouts
)
Default timeouts match Net::HTTP and RestClient:
- open is 60 seconds
- read is 60 seconds.
If you want ruby-independent behavior, always specify :open.
If an error occurs while performing a request to the API server, Kubeclient::HttpError will be raised.
There are also certain case-specific errors that can be raised (which also inherit from Kubeclient::HttpError):
- Kubeclient::ResourceNotFoundError when attempting to fetch a resource that does not exist, or in any other situation that results in the API server returning a 404.
- Kubeclient::ResourceAlreadyExistsError when attempting to create a resource that already exists.
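For example, a minimal sketch of rescuing these errors (the resource names and the handling shown are purely illustrative):
begin
  service = client.get_service('redis-master', 'staging')
rescue Kubeclient::ResourceNotFoundError
  service = nil # treat a 404 as "absent" rather than failing
rescue Kubeclient::HttpError
  # any other API failure; e.g. log and re-raise
  raise
end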
Discovery from the kube-apiserver is done lazily on method calls, so it does not change behavior. It can also be done explicitly:
client = Kubeclient::Client.new('http://localhost:8080/api', 'v1')
client.discover
It is possible to check the status of discovery:
unless client.discovered
  client.discover
end
If you've been using kubectl and have a .kube/config file (possibly referencing other files in fields such as client-certificate), you can auto-populate a config object using Kubeclient::Config:
# assuming $KUBECONFIG is one file, won't merge multiple like kubectl
config = Kubeclient::Config.read(ENV['KUBECONFIG'] || '/path/to/.kube/config')
This will look up external files; relative paths will be resolved relative to the file's directory, if the config refers to them with relative paths. This includes external exec: credential plugins to be executed.
You can also construct Config directly from nested data. For example, if you have JSON or YAML config data in a variable:
config = Kubeclient::Config.new(YAML.safe_load(yaml_text), nil)
# or
config = Kubeclient::Config.new(JSON.parse(json_text), nil)
The 2nd argument is a base directory for finding external files, if the config refers to them with relative paths. Setting it to nil disables file lookups and exec: execution; such configs will raise an exception. (A config can be self-contained by using inline fields such as client-certificate-data.)
To create a client based on a Config object:
# default context according to `current-context` field:
context = config.context
# or to use a specific context, by name:
context = config.context('default/192-168-99-100:8443/system:admin')
Kubeclient::Client.new(
context.api_endpoint,
'v1',
ssl_options: context.ssl_options,
auth_options: context.auth_options
)
On Amazon EKS, the default authentication method is IAM. When running kubectl, a temporary token is generated by shelling out to the aws-iam-authenticator binary, which is sent to authenticate the user. See aws-iam-authenticator.
To replicate that functionality, the Kubeclient::AmazonEksCredentials class can accept a set of IAM credentials and contains a helper method to generate the authentication token for you.
This requires a set of gems which are not included in the kubeclient dependencies (aws-sigv4), so you should add them to your bundle. You will also require either the aws-sdk v2 or aws-sdk-core v3 gem to generate the required Aws::Credentials object to pass to this method.
To obtain a token:
require 'aws-sdk-core'
# Use keys
credentials = Aws::Credentials.new(access_key, secret_key)
# Or a profile
credentials = Aws::SharedCredentials.new(profile_name: 'default').credentials
# Or for STS assumed-role credentials, or any other credential provider other than static credentials
credentials = Aws::AssumeRoleCredentials.new({ client: sts_client, role_arn: role_arn, role_session_name: session_name })
# Kubeclient Auth Options
auth_options = {
bearer_token: Kubeclient::AmazonEksCredentials.token(credentials, eks_cluster_name)
}
client = Kubeclient::Client.new(
eks_cluster_https_endpoint, 'v1', auth_options: auth_options
)
Note that this returns a token good for one minute. If your code requires authorization for longer than that, you should plan to acquire a new one; see the How to manually renew section.
If the kubeconfig file has user: {auth-provider: {name: gcp, cmd-path: ..., cmd-args: ..., token-key: ...}}, the command will be executed to obtain a token. (Normally this would be a gcloud config config-helper command.)
Note that this returns an expiring token. If your code requires authorization for a long time, you should plan to acquire a new one; see the How to manually renew section.
On Google Compute Engine, Google App Engine, or Google Cloud Functions, as well as gcloud-configured systems with application default credentials, kubeclient can use the googleauth gem to authorize.
This requires the googleauth gem, which is not included in the kubeclient dependencies, so you should add it to your bundle.
If you use Config.context(...).auth_options and the kubeconfig file has user: {auth-provider: {name: gcp}} but does not contain a cmd-path key, kubeclient will automatically try this (raising LoadError if you don't have googleauth in your bundle).
Or you can obtain a token manually:
require 'googleauth'
auth_options = {
bearer_token: Kubeclient::GoogleApplicationDefaultCredentials.token
}
client = Kubeclient::Client.new(
'https://localhost:8443/api', 'v1', auth_options: auth_options
)
Note that this returns a token good for one hour. If your code requires authorization for longer than that, you should plan to acquire a new one; see the How to manually renew section.
If the cluster you are using has OIDC authentication enabled, you can use the openid_connect gem to obtain id-tokens if the one in your kubeconfig has expired.
This requires the openid_connect gem, which is not included in the kubeclient dependencies, so it should be added to your own application's bundle.
The OIDC Auth Provider will not perform the initial setup of your $KUBECONFIG file. You will need to use something like dexter in order to configure the auth-provider in your $KUBECONFIG file.
If you use Config.context(...).auth_options and the $KUBECONFIG file has user: {auth-provider: {name: oidc}}, kubeclient will automatically obtain a token (or use the id-token if it is still valid).
Tokens are typically short-lived (e.g. 1 hour) and the expiration time is determined by the OIDC Provider (e.g. Google). If your code requires authentication for longer than that, you should obtain a new token periodically; see the How to manually renew section.
Note: id-tokens retrieved via this provider are not written back to the $KUBECONFIG file as they would be when using kubectl.
Kubeclient does not yet help with this.
The division of labor between Config and Context objects may change; for now, please make no assumptions about at which stage exec: and auth-provider: are handled and whether they're cached.
The currently guaranteed way to renew is to create a new Config object.
The more painful part is that you'll then need to create new Client object(s) with the credentials from the new config.
So repeat all of this:
config = Kubeclient::Config.read(ENV['KUBECONFIG'] || '/path/to/.kube/config')
context = config.context
ssl_options = context.ssl_options
auth_options = context.auth_options
client = Kubeclient::Client.new(
context.api_endpoint, 'v1',
ssl_options: ssl_options, auth_options: auth_options
)
# and additional Clients if needed...
Config.read is catastrophically unsafe: it will execute arbitrary command lines specified by the config!
Config.new(data, nil) is better, but Kubeclient was never reviewed for behaving safely with malicious / malformed config.
It might crash / misbehave in unexpected ways...
Additionally, the config.context object will contain a namespace attribute, if it was defined in the file.
It is recommended that you use this namespace when issuing API commands below.
This is the same behavior that is implemented by the kubectl command.
You can read it as follows:
puts config.context.namespace
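For example, a small sketch of using it with a client created from the same context (falling back to 'default' is just an illustration):
ns = config.context.namespace || 'default'
pods = client.get_pods(namespace: ns)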
Impersonation is supported when loading a kubectl config and via the Ruby API, for example:
client = Kubeclient::Client.new(
  context.api_endpoint, 'v1',
  auth_options: {
    as: "admin",
    as_groups: ["system:masters"],
    as_uid: "123", # optional
    as_user_extra: {
      "reason" => ["admin access"]
    }
  }
)
Note that only one group, and one value per extra field, are currently supported. Using a list of multiple values will result in ArgumentError.
We try to support the last 3 minor versions, matching the official support policy for Kubernetes. Kubernetes 1.2 and below have known issues and are unsupported. Kubernetes 1.3 is presumed to still work, although nobody is really testing on such old versions...
Summary of main CRUD actions:
get_foos(namespace: 'namespace', **opts) # namespaced collection
get_foos(**opts) # all namespaces or global collection
get_foo('name', 'namespace', opts) # namespaced
get_foo('name', nil, opts) # global
watch_foos(namespace: ns, **opts) # namespaced collection
watch_foos(**opts) # all namespaces or global collection
watch_foos(namespace: ns, name: 'name', **opts) # namespaced single object
watch_foos(name: 'name', **opts) # global single object
delete_foo('name', 'namespace', opts) # namespaced
delete_foo('name', nil, opts) # global
delete_foos(namespace: 'ns', **opts) # namespaced
create_foo(Kubeclient::Resource.new({metadata: {name: 'name', namespace: 'namespace', ...}, ...}))
create_foo(Kubeclient::Resource.new({metadata: {name: 'name', ...}, ...})) # global
update_foo(Kubeclient::Resource.new({metadata: {name: 'name', namespace: 'namespace', ...}, ...}))
update_foo(Kubeclient::Resource.new({metadata: {name: 'name', ...}, ...})) # global
patch_foo('name', patch, 'namespace') # namespaced
patch_foo('name', patch) # global
apply_foo(Kubeclient::Resource.new({metadata: {name: 'name', namespace: 'namespace', ...}, ...}), field_manager: 'myapp', **opts)
apply_foo(Kubeclient::Resource.new({metadata: {name: 'name', ...}, ...}), field_manager: 'myapp', **opts) # global
These grew to be quite inconsistent 😖, see ManageIQ#312 and ManageIQ#332 for improvement plans.
Such as: get_pods, get_secrets, get_services, get_nodes, get_replication_controllers, get_resource_quotas, get_limit_ranges, get_persistent_volumes, get_persistent_volume_claims, get_component_statuses, get_service_accounts
pods = client.get_pods
Get all entities of a specific type in a namespace:
services = client.get_services(namespace: 'development')
You can get entities which have specific labels by specifying a parameter named label_selector (named labelSelector in Kubernetes server):
pods = client.get_pods(label_selector: 'name=redis-master')
You can specify multiple labels (that option will return entities which have both labels):
pods = client.get_pods(label_selector: 'name=redis-master,app=redis')
There is also the ability to filter by some fields. Which fields are supported is not documented; you can try and see if you get an error...
client.get_pods(field_selector: 'spec.nodeName=master-0')
You can ask for entities at a specific version by specifying a parameter named resource_version:
pods = client.get_pods(resource_version: '0')
but it's not guaranteed you'll get it. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions to understand the semantics.
With the default (as: :ros) return format, the returned object acts like an array of the individual pods, but also supports a .resourceVersion method.
With :parsed and :parsed_symbolized formats, the returned data structure matches the Kubernetes list structure: it's a hash containing metadata and items keys, the latter containing the individual pods.
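As a rough sketch of the difference (the fields accessed here are only illustrative):
pods = client.get_pods
pods.resourceVersion                         # collection version
pods.each { |pod| puts pod.metadata.name }

parsed = client.get_pods(as: :parsed)
parsed['metadata']['resourceVersion']
parsed['items'].each { |pod| puts pod['metadata']['name'] }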
You can retrieve large collections in chunks by passing the limit: and continue: parameters:
continue = nil
loop do
  entities = client.get_pods(limit: 1_000, continue: continue)
  # process the entities in this chunk, then fetch the next one
  continue = entities.continue
  break if entities.last?
end
See https://kubernetes.io/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks for more information.
The continue tokens expire after a short amount of time, so, similar to a watch, if you don't request a subsequent page within approx. 5 minutes of the previous page being returned, the server will return a 410 Gone error and the client must request the list from the start (i.e. omit the continue token for the next call).
Support for chunking was added in v1.9 so previous versions will ignore the option and return the full collection.
Such as: get_service "service name", get_pod "pod name", get_replication_controller "rc name", get_secret "secret name", get_resource_quota "resource quota name", get_limit_range "limit range name", get_persistent_volume "persistent volume name", get_persistent_volume_claim "persistent volume claim name", get_component_status "component name", get_service_account "service account name"
The GET request should include the namespace name, except for nodes and namespaces entities.
node = client.get_node "127.0.0.1"
service = client.get_service "guestbook", 'development'
Note - Kubernetes doesn't work with the uid, but rather with the 'name' property. Querying with a uid causes a 404.
To avoid overhead from parsing and building RecursiveOpenStruct objects for each reply, pass the as: :raw option when initializing Kubeclient::Client or when calling get_* / watch_* methods. The result can then be printed, searched with a regex, or parsed via JSON.parse(r).
client = Kubeclient::Client.new(url, version, as: :raw)
or
pods = client.get_pods as: :raw
node = client.get_node "127.0.0.1", as: :raw
Other formats are:
- :ros (default) for RecursiveOpenStruct
- :parsed for JSON.parse
- :parsed_symbolized for JSON.parse(..., symbolize_names: true)
- a class of your choice (this will instantiate a new instance of that class with the raw value of the response body), as sketched below
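For example, a minimal sketch of the custom-class format (PodNames is a hypothetical class, not part of kubeclient; it receives the raw response body):
require 'json'

class PodNames
  attr_reader :names
  def initialize(raw_body)
    # raw_body is the unparsed JSON string returned by the API server
    @names = JSON.parse(raw_body)['items'].map { |item| item['metadata']['name'] }
  end
end

pods = client.get_pods(as: PodNames)
pods.names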
See https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes for an overview.
It is possible to receive live update notices watching the relevant entities:
client.watch_pods do |notice|
  # process notice data
end
The notices have a .type field which may be 'ADDED', 'MODIFIED', 'DELETED', or currently 'ERROR', and an .object field containing the object. UPCOMING CHANGE: In the next major version, we plan to raise exceptions instead of passing ERROR on into the block.
For namespaced entities, the default is to watch across all namespaces, and you can specify client.watch_secrets(namespace: 'foo') to only watch in a single namespace.
You can narrow down using the label_selector: and field_selector: params, like with the get_pods methods.
You can also watch a single object by specifying name:, e.g. client.watch_nodes(name: 'gandalf') (not namespaced, so a name is enough) or client.watch_pods(namespace: 'foo', name: 'bar') (namespaced, so both params are needed).
Note the method name is still plural! There is no watch_pod, only watch_pods. The yielded "type" remains the same: watch notices, it's just that they'll always refer to the same object.
You can use the as: param to control the format of the yielded notices.
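For instance, a sketch combining a namespace, a label selector, and a notice format (the label value is illustrative, and the hash keys assume the notice mirrors the .type / .object fields described above):
client.watch_pods(namespace: 'foo', label_selector: 'app=redis', as: :parsed_symbolized) do |notice|
  puts notice[:type], notice[:object][:metadata][:name]
end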
You can use allow_watch_bookmarks: true to get BOOKMARK events returned regularly; see watch bookmarks.
client.watch_pods allow_watch_bookmarks: true do |notice|
  # process notice data
end
To limit the maximum duration of a watch on the server, pass the timeout_seconds: param.
client.watch_pods(timeout_seconds: 120, namespace: ns) do |notice|
  ...
end
While nominally the watch block looks like an infinite loop, that's unrealistic. Network connections eventually get severed, and the Kubernetes apiserver is known to terminate watches.
Unfortunately, this sometimes raises an exception and sometimes the loop just exits. UPCOMING CHANGE: In the next major version, non-deliberate termination will always raise an exception; the block will only exit silently if stopped deliberately.
You can use break or return inside the watch block.
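For example, a small sketch that stops the watch once a particular condition is met (the condition itself is illustrative):
client.watch_pods do |notice|
  break if notice.type == 'DELETED' && notice.object.metadata.name == 'my-pod'
end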
It is possible to interrupt the watcher from another thread with:
watcher = client.watch_pods
watcher.each do |notice|
  # process notice data
end
# <- control will pass here after .finish is called

### In another thread ###
watcher.finish
You can specify a version to start from, commonly used in the "List+Watch" pattern:
list = client.get_pods
collection_version = list.resourceVersion
# or with other return formats:
list = client.get_pods(as: :parsed)
collection_version = list['metadata']['resourceVersion']
# note spelling resource_version vs resourceVersion.
client.watch_pods(resource_version: collection_version) do |notice|
  # process notice data
end
It's important to understand the effects of unset/0/specific resource_version as it modifies the behavior of the watch — in some modes you'll first see a burst of synthetic 'ADDED' notices for all existing objects.
If you re-try a terminated watch again without a specific resourceVersion, you might see previously seen notices again, and you might miss some events.
To attempt resuming a watch from the same point, you can try using the last resourceVersion observed during the watch, or do list+watch again.
Whenever you ask for a specific version, you must be prepared for a 410 "Gone" error if the server no longer recognizes it.
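A rough sketch of a list+watch loop that re-lists on a 410 (this retry structure is only an illustration; depending on the kubeclient version the 410 may surface as an 'ERROR' notice rather than an exception, per the note above, and the error_code accessor on HttpError is assumed to hold the HTTP status):
begin
  list = client.get_pods
  client.watch_pods(resource_version: list.resourceVersion) do |notice|
    # process notice data
  end
rescue Kubeclient::HttpError => e
  retry if e.error_code == 410 # our version is too old; re-list and start a fresh watch
  raise
end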
Events are entities in their own right.
You can use the field_selector option as part of the watch methods.
client.watch_events(namespace: 'development', field_selector: 'involvedObject.name=redis-master') do |notice|
  # process notice data
end
For example: delete_pod "pod name", delete_replication_controller "rc name", delete_node "node name", delete_secret "secret name"
Input parameter - name (string) specifying the service name, pod name, or replication controller name.
deleted = client.delete_service("redis-service")
If you want to cascade delete, for example a deployment, you can use the delete_options parameter.
deployment_name = 'redis-deployment'
namespace = 'default'
delete_options = Kubeclient::Resource.new(
  apiVersion: 'meta/v1',
  gracePeriodSeconds: 0,
  kind: 'DeleteOptions',
  propagationPolicy: 'Foreground' # Orphan, Foreground, or Background
)
client.delete_deployment(deployment_name, namespace, delete_options: delete_options)
Such as: delete_pods, delete_secrets, delete_services, delete_nodes, delete_replication_controllers, delete_resource_quotas, delete_limit_ranges, delete_persistent_volumes, delete_persistent_volume_claims, delete_component_statuses, delete_service_accounts
Delete all entities of a specific type in a namespace; entities are returned in the same manner as for get_services:
services = client.delete_services(namespace: 'development')
You can delete entities which have specific labels by specifying a parameter named label_selector (named labelSelector in Kubernetes server):
pods = client.delete_pods(namespace: 'development', label_selector: 'name=redis-master')
You can specify multiple labels (that option will delete only entities which have both labels):
pods = client.delete_pods(namespace: 'default', label_selector: 'name=redis-master,app=redis')
There is also the ability to filter by some fields. Which fields are supported is not documented; you can try and see if you get an error...
client.delete_pods(namespace: 'development', field_selector: 'spec.nodeName=master-0')
With the default (as: :ros) return format, the returned object acts like an array of the individual pods, but also supports a .resourceVersion method.
With :parsed and :parsed_symbolized formats, the returned data structure matches the Kubernetes list structure: it's a hash containing metadata and items keys, the latter containing the individual pods.
For example: create_pod pod_object, create_replication_controller rc_obj, create_secret secret_object, create_resource_quota resource_quota_object, create_limit_range limit_range_object, create_persistent_volume persistent_volume_object, create_persistent_volume_claim persistent_volume_claim_object, create_service_account service_account_object
Input parameter - object of type Service, Pod, ReplicationController.
The example below is for v1:
service = Kubeclient::Resource.new
service.metadata = {}
service.metadata.name = "redis-master"
service.metadata.namespace = 'staging'
service.spec = {}
service.spec.ports = [{
'port' => 6379,
'targetPort' => 'redis-server'
}]
service.spec.selector = {}
service.spec.selector.name = "redis"
service.spec.selector.role = "master"
service.metadata.labels = {}
service.metadata.labels.app = 'redis'
service.metadata.labels.role = 'slave'
client.create_service(service)
For example: update_pod, update_service, update_replication_controller, update_secret, update_resource_quota, update_limit_range, update_persistent_volume, update_persistent_volume_claim, update_service_account
Input parameter - object of type Pod, Service, ReplicationController, etc.
The example below is for v1:
updated = client.update_service(service1)
For example: patch_pod, patch_service, patch_secret, patch_resource_quota, patch_persistent_volume
Input parameters - name (string) specifying the entity name, patch (hash) to be applied to the resource, optional: namespace name (string)
The PATCH request should include the namespace name, except for nodes and namespaces entities.
The example below is for v1:
patched = client.patch_pod("docker-registry", {metadata: {annotations: {key: 'value'}}}, "default")
patch_#{entity} is called using a strategic merge patch. json_patch_#{entity} and merge_patch_#{entity} are also available, which use JSON patch and JSON merge patch, respectively. These strategies are useful for resources that do not support strategic merge patch, such as Custom Resources. Consult the Kubernetes docs for more information about the different patch strategies.
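For example, a minimal sketch of the other two strategies (assuming they take the same name/patch/namespace arguments as patch_#{entity}; the label values are illustrative):
# JSON patch takes an array of operations
client.json_patch_pod(
  'docker-registry',
  [{ 'op' => 'add', 'path' => '/metadata/labels/environment', 'value' => 'staging' }],
  'default'
)

# JSON merge patch takes a hash that is merged into the resource
client.merge_patch_pod('docker-registry', { metadata: { labels: { environment: 'staging' } } }, 'default')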
This is similar to kubectl apply --server-side (kubeclient doesn't implement logic for client-side apply). See https://kubernetes.io/docs/reference/using-api/api-concepts/#server-side-apply
For example: apply_pod
Input parameters - resource (Kubeclient::Resource) representing the desired state of the resource, field_manager (String) to identify the system managing the state of the resource, force (Boolean) whether or not to override a field managed by someone else.
Example:
service = Kubeclient::Resource.new(
  metadata: {
    name: 'redis-master',
    namespace: 'staging',
  },
  spec: {
    ...
  }
)
client.apply_service(service, field_manager: 'myapp')
Makes requests for all entities of each discovered kind (in this client's API group). This is a convenience method that saves calling each entity's get method separately.
Returns a hash with keys being the singular entity kind, in lowercase underscore style. For example, for the core API group it may return keys "node", "secret", "service", "pod", "replication_controller", "namespace", "resource_quota", "limit_range", "endpoint", "event", "persistent_volume", "persistent_volume_claim", "component_status", "service_account". Each key points to an EntityList of the same type.
client.all_entities
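As a small sketch of consuming the result (key names as listed above; each EntityList acts like an array, as described for collections earlier):
entities = client.all_entities
entities['pod'].each { |pod| puts pod.metadata.name }
entities['service'].size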
You can get a complete URL for connecting to a Kubernetes entity via the proxy.
client.proxy_url('service', 'srvname', 'srvportname', 'ns')
# => "https://localhost.localdomain:8443/api/v1/proxy/namespaces/ns/services/srvname:srvportname"
Note the third parameter, port, is a port name for services and an integer for pods:
client.proxy_url('pod', 'podname', 5001, 'ns')
# => "https://localhost.localdomain:8443/api/v1/namespaces/ns/pods/podname:5001/proxy"
You can get the logs of a running pod, specifying the name of the pod and the namespace where the pod is running:
client.get_pod_log('pod-name', 'default')
# => "Running...\nRunning...\nRunning...\n"
If that pod has more than one container, you must specify the container:
client.get_pod_log('pod-name', 'default', container: 'ruby')
# => "..."
If a container in a pod terminates and a new container is started, and you want to retrieve the logs of the dead container, you can pass in the :previous option:
client.get_pod_log('pod-name', 'default', previous: true)
# => "..."
Kubernetes can add timestamps to every log line or filter lines by time:
client.get_pod_log('pod-name', 'default', timestamps: true, since_time: '2018-04-27T18:30:17.480321984Z')
# => "..."
since_time can be a Time, DateTime or String formatted according to RFC3339.
Kubernetes can fetch a specific number of lines from the end of the logs:
client.get_pod_log('pod-name', 'default', tail_lines: 10)
# => "..."
Kubernetes can fetch a specific number of bytes from the log, but the exact size is not guaranteed and the last line may not be terminated:
client.get_pod_log('pod-name', 'default', limit_bytes: 10)
# => "..."
You can also watch the logs of a pod to get a stream of data:
client.watch_pod_log('pod-name', 'default', container: 'ruby') do |line|
  puts line
end
Returns a processed template containing a list of objects to create. Input parameter - template (hash). Besides its metadata, the template should include a list of objects to be processed and a list of parameters to be substituted. Note that for a required parameter that does not provide a generated value, you must supply a value.
Note: This functionality is not supported by K8s at this moment. See the following issue.
client.process_template template
A list that is always up to date because it is kept in sync by a watch in the background. It can also share a single list+watch between multiple threads.
client = Kubeclient::Client.new('http://localhost:8080/api/', 'v1')
informer = Kubeclient::Informer.new(client, "pods", reconcile_timeout: 15 * 60, logger: Logger.new(STDOUT))
informer.start_worker
informer.list # all current pods
informer.watch { |notice| } # watch for changes (hides restarts and errors)
To pass custom options to the list and watch requests, like setting the namespace, the initializer accepts an options keyword:
informer = Kubeclient::Informer.new(
  client,
  "pods",
  options: { namespace: "some-namespace" },
  reconcile_timeout: 15 * 60,
  logger: Logger.new(STDOUT)
)
- Fork it ( https://github.com/[my-github-username]/kubeclient/fork )
- Create your feature branch (git checkout -b my-new-feature)
- Test your changes with rake test rubocop, add new tests if needed.
- If you added new functionality, add it to the README
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create a new Pull Request
This client is tested with Minitest and also uses VCR recordings in some tests. Please run all tests before submitting a Pull Request, and add new tests for new functionality.
Running tests:
rake test