
Deploy a Hosting Tunneler in Kubernetes

Version: 1.2.1 Type: application AppVersion: 1.7.19

Reverse proxy cluster services with an OpenZiti tunneler pod

Requirements

Kubernetes: >= 1.20.0-0

Overview

You may use this chart to publish cluster services to your Ziti network. For example, if you create a Ziti service with a server address of tcp:kubernetes.default.svc:443 and write a Bind Service Policy assigning the service to the Ziti identity used with this chart, then your Ziti network's authorized clients will be able to access this cluster's apiserver. You could do the same for any cluster service's domain name. The CLI commands for such a setup are sketched below.
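As a sketch, the Ziti CLI commands to model that example might look like the following; the config, service, policy, and role names are illustrative, and k8s-tunneler is assumed to be the identity used with this chart:

ziti edge create config k8s-apiserver-host host.v1 '{"protocol":"tcp","address":"kubernetes.default.svc","port":443}'
ziti edge create service k8s-apiserver --configs k8s-apiserver-host
ziti edge create service-policy k8s-apiserver-bind Bind --service-roles '@k8s-apiserver' --identity-roles '@k8s-tunneler'
ziti edge create service-policy k8s-apiserver-dial Dial --service-roles '@k8s-apiserver' --identity-roles '#k8s-clients'

Identities tagged with the #k8s-clients role attribute may then dial the service.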

How this Chart Works

This chart deploys a pod running ziti-edge-tunnel, the OpenZiti Linux tunneler, in service hosting mode. The chart uses the container image docker.io/openziti/ziti-host, which runs ziti-edge-tunnel run-host. This puts the Linux tunneler in "hosting" mode, which binds Ziti services without any need for elevated permissions and without a Ziti nameserver or intercepting proxy. You can publish any server that is reachable from the pod deployed by this chart by IP address or domain name.
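Outside Kubernetes, the image's entrypoint is roughly equivalent to the following invocation; the identity file path here is illustrative:

ziti-edge-tunnel run-host -i /path/to/k8s-tunneler.json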

The enrolled Ziti identity JSON is persisted in a volume, and the chart will migrate the identity from a secret to the volume if the legacy secret exists.

Installation

helm repo add openziti https://docs.openziti.io/helm-charts/

After adding the charts repo to Helm, you may enroll the identity and install the chart. Supplying a Ziti identity JSON file when you install the chart lets you use any option available to the ziti edge enroll command.

ziti edge enroll --jwt /tmp/k8s-tunneler.jwt --out /tmp/k8s-tunneler.json
helm install ziti-host openziti/ziti-host --set-file zitiIdentity=/tmp/k8s-tunneler.json

Alternatively, you may supply the JWT directly to the chart. In this case, a private key will be generated on first run and the identity will be enrolled.

helm install ziti-host openziti/ziti-host --set-file zitiEnrollToken=/tmp/k8s-tunneler.jwt
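With either approach, you can follow the pod logs to confirm the identity loads and services are bound; this assumes the release name ziti-host from the examples above and the chart's Deployment workload:

kubectl logs --follow deployment/ziti-host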

Installation using an existing secret

Warning: this approach does not allow the tunneler to autonomously renew its identity certificate, so you must renew the identity certificate out of band and supply it as an existing secret.

Create the secret:

kubectl create secret generic k8s-tunneler-identity --from-file=persisted-identity=k8s-tunneler.json

Deploy the Helm chart, referring to the existing secret:

helm install ziti-host openziti/ziti-host --set secret.existingSecretName=k8s-tunneler-identity
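When you renew the identity out of band, one way to replace the secret's contents is the dry-run/apply idiom, reusing the file and secret names from the example above:

kubectl create secret generic k8s-tunneler-identity --from-file=persisted-identity=k8s-tunneler.json --dry-run=client -o yaml | kubectl apply -f -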

If desired, change the secret key name from the default persisted-identity with --set secret.keyName=myKeyName.
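For example, combining an existing secret with a custom key name (both names are illustrative):

helm install ziti-host openziti/ziti-host --set secret.existingSecretName=k8s-tunneler-identity --set secret.keyName=myKeyName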

Identity Directory and Volume

The Ziti identity is stored in a directory inside the container, which is backed by a PersistentVolumeClaim (PVC) by default. This ensures that identity renewals and updates are preserved across pod restarts. If you use an existing secret instead, the identity directory will be read-only, and renewals will not be persisted.

Warning: If the identity directory is not writable or not backed by a persistent volume, identity renewals and updates will NOT be preserved across container restarts.
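To confirm the release is backed by a writable volume, you can list PVCs in the namespace; this assumes the PVC name contains the release name ziti-host:

kubectl get pvc | grep ziti-host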

Values Reference

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| additionalVolumes | list | [] | additional volumes to mount to the ziti-host container |
| affinity | object | {} | pod affinity |
| dnsPolicy | string | "ClusterFirstWithHostNet" | pod DNS policy |
| fullnameOverride | string | "" | override the full name computed for the Helm release |
| hostNetwork | bool | false | host network mode |
| image.args | list | [] | additional container command arguments, e.g., ["--verbose"] |
| image.pullPolicy | string | "IfNotPresent" | container image pull policy |
| image.repository | string | "openziti/ziti-host" | container image repository |
| image.tag | string | "" | overrides the image tag; the default is the chart's appVersion |
| imagePullSecrets | list | [] | secrets containing OCI registry credentials for image pull |
| nameOverride | string | "" | override the name for the Helm release |
| nodeSelector | object | {} | node selector for pod scheduling |
| podAnnotations | object | {} | annotations to add to the pod |
| podSecurityContext.fsGroup | int | 65534 | fsGroup for the pod security context (default: nogroup) |
| podSecurityContext.runAsGroup | int | 65534 | GID to run the container as (default: nogroup) |
| podSecurityContext.runAsUser | int | 65534 | UID to run the container as (default: nobody) |
| ports | list | [] | additional ports to expose |
| replicas | int | 1 | number of replicas |
| resources | object | {} | resource requests and limits; intentionally unset by default so sizing is a conscious choice, which also helps the chart run in low-resource environments such as Minikube (the commented example in values.yaml uses requests and limits of cpu: 100m and memory: 128Mi) |
| secret.existingSecretName | string | "" | name of an existing secret containing the Ziti identity |
| securityContext.capabilities.add | list | [] | capabilities to add to the container, e.g., ["NET_BIND_SERVICE"] |
| serviceAccount.annotations | object | {} | annotations to add to the service account |
| serviceAccount.create | bool | true | specifies whether a service account should be created |
| serviceAccount.name | string | "" | the name of the service account to use; if not set and create is true, a name is generated using the fullname template |
| spireAgent | object | {"enabled":false,"spireSocketMnt":"/run/spire/sockets"} | SPIRE agent configuration |
| spireAgent.enabled | bool | false | if you are running a container with the spire-agent binary installed, this adds the hostPath needed to connect to the SPIRE socket |
| spireAgent.spireSocketMnt | string | "/run/spire/sockets" | file path of the SPIRE socket mount |
| tolerations | list | [] | tolerations for pod scheduling |
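As a sketch, here is a values file exercising a few of the keys above; the replica count and capability are illustrative choices, and the resource figures mirror the commented example described under resources:

cat > /tmp/ziti-host-values.yaml <<'EOF'
# run two replicas of the hosting tunneler
replicas: 2
# allow the container to bind privileged ports
securityContext:
  capabilities:
    add:
      - NET_BIND_SERVICE
# set explicit requests and limits instead of the empty default
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 100m
    memory: 128Mi
EOF
helm upgrade --install ziti-host openziti/ziti-host -f /tmp/ziti-host-values.yaml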
To pick up changes from a local copy of the chart source, upgrade the release in place:

helm upgrade {release} {source dir}