Kubernetes Node Daemonset
Dial OpenZiti services with a tunneler daemonset
Requirements
Kubernetes: >= 1.20.0-0
Overview
You may use this chart to reach services node-wide, by DNS name, through your Ziti network. For example, if your cluster has no internet access and you publish a code repository or container registry as a Ziti service, the nodes can reach that repository or registry through the Ziti service.
NOTE: On single-node Kubernetes distributions like k3s, this works out-of-the-box: you can extend your CoreDNS configuration to forward to the Ziti DNS IP, as you can see here. On multi-node installations, where cluster DNS may run on a different node, you need to install the node-local-dns feature, which ensures that Ziti DNS names are resolved locally by that node's tunneler, because Ziti intercept IPs can differ from node to node. See this helm chart for a possible implementation.
How this Chart Works
This chart deploys a DaemonSet that runs the OpenZiti Linux tunneler as a transparent, node-level proxy with a DNS nameserver. The chart uses the container image docker.io/openziti/ziti-edge-tunnel, which runs ziti-edge-tunnel run.
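For orientation, the rendered pod looks roughly like the sketch below. The image, run command, hostNetwork, dnsPolicy, and privileged settings mirror the chart defaults from the values reference; the metadata, labels, and selector are illustrative, not the chart's exact template.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ziti-edge-tunnel            # illustrative name
spec:
  selector:
    matchLabels:
      app: ziti-edge-tunnel
  template:
    metadata:
      labels:
        app: ziti-edge-tunnel
    spec:
      hostNetwork: true                       # chart default
      dnsPolicy: ClusterFirstWithHostNet      # chart default
      containers:
        - name: ziti-edge-tunnel
          image: docker.io/openziti/ziti-edge-tunnel
          args: ["run"]                       # transparent proxy + DNS nameserver mode
          securityContext:
            privileged: true                  # chart default; needed to manage the TUN device and routes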
Identity Storage Options
The chart supports two approaches for persisting the tunneler's Ziti identity:
- PersistentVolumeClaim (default): the identity is stored in a volume, allowing the tunneler to autonomously renew its certificate.
- Existing Secret: the identity is mounted read-only from a pre-created Kubernetes Secret. This approach requires manual certificate management because the tunneler cannot write updates.
Installation
helm repo add openziti https://docs.openziti.io/helm-charts/
Identity Storage Option 1 (Preferred) - Identity in PVC from Enrollment Token
Provide the enrollment token as a JWT. The identity will be enrolled on first run and stored in a PVC, allowing autonomous certificate renewal. This is preferred for security because only the one-time enrollment token must be orchestrated, and the private key is generated in-place during first-run enrollment.
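If you do not yet have an enrollment token, one way to create it is with the ziti CLI against your controller; the controller address and identity name below are placeholders, and flags can vary between ziti CLI versions:
# log in to the controller's management API
ziti edge login https://your-controller:1280 -u admin
# create an identity and save its one-time enrollment token as a JWT
ziti edge create identity k8s-tunneler -o /tmp/k8s-tunneler.jwt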
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel --set-file zitiEnrollToken=/tmp/k8s-tunneler.jwt
Identity Storage Option 2 - Pre-Enrolled Identity in PVC
Alternatively, you may supply a pre-enrolled identity as JSON. The identity will be stored in a PVC, allowing autonomous certificate renewal.
ziti-edge-tunnel enroll --jwt /tmp/k8s-tunneler.jwt --identity /tmp/k8s-tunneler.json
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel --set-file zitiIdentity=/tmp/k8s-tunneler.json
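After either PVC-based install, you can confirm that the DaemonSet and its identity PVC exist and that the tunneler started; the selector and resource names below assume the release name ziti-edge-tunnel used above:
kubectl get daemonset,pvc -l app.kubernetes.io/instance=ziti-edge-tunnel
kubectl logs daemonset/ziti-edge-tunnel --tail=20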
Identity Storage Option 3 - Existing Secret
⚠️ Warning: This approach disables autonomous certificate renewal. You must manually manage certificate updates, private key rolling, etc.
Create the secret:
kubectl create secret generic k8s-tunneler-identity --from-file=persisted-identity=k8s-tunneler.json
Deploy with the existing secret:
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel --set secret.existingSecretName=k8s-tunneler-identity
You may specify a different Kubernetes Secret data key name if your existing secret does not store the identity under the default key, persisted-identity:
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
--set secret.existingSecretName=k8s-tunneler-identity \
--set secret.keyName=myKeyName
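If you are unsure which data key your existing secret uses, list its keys first and set secret.keyName to match:
kubectl describe secret k8s-tunneler-identity
# or print only the key names
kubectl get secret k8s-tunneler-identity -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'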
PVC Configuration (Options 1 & 2 only)
When using PVC storage (Options 1 & 2), you can configure the PersistentVolumeClaim settings. The chart uses ReadWriteMany access mode by default for DaemonSet compatibility:
# Configure access mode (ReadWriteMany is default for multi-node DaemonSets)
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
--set-file zitiIdentity=/tmp/k8s-tunneler.json \
--set pvc.accessMode=ReadWriteOnce
# Configure storage class (uses cluster default if not set)
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
--set-file zitiIdentity=/tmp/k8s-tunneler.json \
--set pvc.storageClass=nfs
# Configure storage size (default is 2Gi)
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
--set-file zitiIdentity=/tmp/k8s-tunneler.json \
--set pvc.storageSize=5Gi
# Combine multiple PVC settings
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
--set-file zitiIdentity=/tmp/k8s-tunneler.json \
--set pvc.accessMode=ReadWriteMany \
--set pvc.storageClass=nfs \
--set pvc.storageSize=1Gi
Important: For DaemonSets running on multiple nodes, your storage class must support the ReadWriteMany access mode. If your cluster doesn't support RWX volumes, use Option 3 (existing secret) instead.
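The same PVC settings can also live in a values file instead of repeated --set flags; the file name below is arbitrary:
# pvc-values.yaml
pvc:
  accessMode: ReadWriteMany
  storageClass: nfs
  storageSize: 5Gi
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
  --set-file zitiIdentity=/tmp/k8s-tunneler.json \
  --values pvc-values.yaml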
System Integration
D-Bus Socket Configuration
The tunneler can integrate with systemd-resolved via D-Bus to configure the node's system resolver automatically. This is enabled by default.
⚠️ Important: If systemd-resolved integration is disabled, you must manually configure the node's resolver to query the tunneler's nameserver first for proper DNS interception.
# Disable D-Bus integration (if causing issues)
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
--set-file zitiIdentity=/tmp/k8s-tunneler.json \
--set systemDBus.enabled=false
# Use different D-Bus socket path (if default doesn't exist)
helm install ziti-edge-tunnel openziti/ziti-edge-tunnel \
--set-file zitiIdentity=/tmp/k8s-tunneler.json \
--set systemDBus.systemDBusSocketMnt=/run/dbus/system_bus_socket
Common D-Bus socket paths:
- /var/run/dbus/system_bus_socket (default, most distributions)
- /run/dbus/system_bus_socket (systemd-based systems)
- /var/lib/dbus/system_bus_socket (some older systems)
Troubleshooting: If you see hostPath type check failed: ... is not a socket file, either:
- Verify the correct D-Bus socket path on your nodes: ls -la /var/run/dbus/
- Disable D-Bus integration with --set systemDBus.enabled=false
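If you cannot SSH to a node, one way to check the socket path is a node debug pod, which mounts the node's root filesystem at /host; the node name and image are placeholders:
kubectl debug node/<node-name> -it --image=busybox:1.36 -- ls -la /host/var/run/dbus/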
Configure CoreDNS
If you want to resolve your Ziti domain inside the pods, you need to customize CoreDNS. See Official docs.
Multi-node example
Customize the ConfigMap that you apply for node-local-dns by appending the Ziti-specific domain and the upstream DNS server of ziti-edge-tunnel:
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  Corefile: |
    your.ziti.domain:53 {
        log
        errors
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . 100.64.0.2
        prometheus :9253
    }
    __PILLAR__DNS__DOMAIN__:53 {
        errors
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . 100.64.0.2
        prometheus :9253
        health __PILLAR__LOCAL__DNS__:8080
    }
    in-addr.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
    }
    ip6.arpa:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
        prometheus :9253
    }
    .:53 {
        errors
        cache 30
        reload
        loop
        bind __PILLAR__LOCAL__DNS__ __PILLAR__DNS__SERVER__
        forward . __PILLAR__UPSTREAM__SERVERS__
        prometheus :9253
    }
Refer to the NodeLocal DNSCache documentation for how to replace the values starting with two underscores, then apply it:
kubectl apply -f nodelocaldns.yaml
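You can then verify from any pod that names in your Ziti domain resolve through the local tunneler; the hostname and image below are placeholders:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup some-service.your.ziti.domain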
Single-node example
Customize the CoreDNS configuration:
kubectl -n kube-system apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  ziti.server: |
    your.ziti.domain {
        forward . 100.64.0.2
    }
EOF
Reload the CoreDNS configuration:
kubectl rollout restart -n kube-system deployment/coredns
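The coredns-custom ConfigMap only takes effect if your CoreDNS Corefile imports it, which k3s does by default via import /etc/coredns/custom/*.server; you can check with:
kubectl -n kube-system get configmap coredns -o yaml | grep import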
Air-gapped installations
For air-gapped clusters that mirror their registries through this OpenZiti tunneler, an upgrade presents a chicken-and-egg problem: the DaemonSet can remain in the ImagePullBackOff state forever. To work around this, you can install the prepull-daemonset Helm chart, which pulls the required ziti-edge-tunnel image version beforehand. Once the image is present on every node, you can upgrade the tunneler without problems.
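If you prefer a plain manifest over the prepull chart, the same idea can be sketched as a throwaway DaemonSet like the one below; the names and the <new-version> tag are placeholders, and it assumes the image entrypoint is the ziti-edge-tunnel binary (even if the init command exits non-zero, the image has still been pulled onto the node):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ziti-edge-tunnel-prepull
spec:
  selector:
    matchLabels:
      app: ziti-edge-tunnel-prepull
  template:
    metadata:
      labels:
        app: ziti-edge-tunnel-prepull
    spec:
      initContainers:
        # Pulling this image onto every node is the whole point; "version" exits quickly.
        - name: prepull
          image: docker.io/openziti/ziti-edge-tunnel:<new-version>
          args: ["version"]
      containers:
        # Tiny placeholder so the DaemonSet has a long-running main container.
        - name: pause
          image: registry.k8s.io/pause:3.9
      terminationGracePeriodSeconds: 0
Delete this DaemonSet once the tunneler upgrade has completed.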
Values Reference
Key | Type | Default | Description |
---|---|---|---|
additionalVolumes | list | [] | additional volumes to mount to ziti-edge-tunnel container |
affinity | object | {} | |
dnsPolicy | string | "ClusterFirstWithHostNet" | |
fullnameOverride | string | "" | |
hostNetwork | bool | true | |
image.args | list | [] | |
image.command | list | [] | |
image.pullPolicy | string | "IfNotPresent" | |
image.registry | string | "docker.io" | |
image.repository | string | "openziti/ziti-edge-tunnel" | |
image.tag | string | "" | |
imagePullSecrets | list | [] | |
livenessProbe.exec.command[0] | string | "/bin/bash" | |
livenessProbe.exec.command[1] | string | "-c" | |
livenessProbe.exec.command[2] | string | `"if (ziti-edge-tunnel tunnel_status \| jq '.Success'); then true; else false; fi"` | |
livenessProbe.failureThreshold | int | 3 | |
livenessProbe.initialDelaySeconds | int | 180 | |
livenessProbe.periodSeconds | int | 60 | |
livenessProbe.successThreshold | int | 1 | |
livenessProbe.timeoutSeconds | int | 10 | |
log.timeFormat | string | "utc" | Log timestamp format: "utc" for UTC timestamps, otherwise milliseconds since the program started. |
log.tlsUVLevel | int | 3 | TLSUV log level, from 0 to 6 (see README.md Reference) |
log.zitiLevel | int | 3 | Ziti log level, from 0 to 6 (see README.md Reference) |
nameOverride | string | "" | |
nodeSelector | object | {} | constrain worker nodes where the ziti-edge-tunnel pod can be scheduled |
podAnnotations | object | {} | |
podSecurityContext | object | {} | |
ports | list | [] | |
pvc.accessMode | string | "ReadWriteMany" | Access mode for the identity PVC (ReadWriteMany recommended for DaemonSets) |
pvc.storageClass | string | "" | Storage class for the identity PVC (uses cluster default if empty) |
pvc.storageSize | string | "2Gi" | Storage size for the identity PVC |
resources | object | {} | |
secret.existingSecretName | string | "" | Use an existing secret name (if set, disables PVC and certificate auto-renewal) |
secret.keyName | string | "persisted-identity" | Key name in the secret containing the identity JSON encoding |
securityContext.privileged | bool | true | |
serviceAccount.annotations | object | {} | Annotations to add to the service account |
serviceAccount.create | bool | true | Specifies whether a service account should be created |
serviceAccount.name | string | "" | The name of the service account to use. If not set and create is true, a name is generated using the fullname template |
spireAgent.enabled | bool | false | if you are running a container with the spire-agent binary installed then this will allow you to add the hostpath necessary for connecting to the spire socket |
spireAgent.spireSocketMnt | string | "/run/spire/sockets" | file path of the spire socket mount |
systemDBus.enabled | bool | true | Enable D-Bus socket connection for systemd-resolved integration |
systemDBus.systemDBusSocketMnt | string | "/var/run/dbus/system_bus_socket" | Host path to the system D-Bus socket (varies by distribution) |
tolerations | list | [] | |
zitiEnrollToken | string | "" | JWT to enroll a new identity and write in the PVC |
zitiIdentity | string | "" | JSON of an enrolled identity to write in the PVC |
Log Level Reference
The OpenZiti tunneler and TLSUV log levels are represented by integers as follows:
Log Level | Value |
---|---|
NONE | 0 |
ERR | 1 |
WARN | 2 |
INFO (default) | 3 |
DEBUG | 4 |
VERBOSE | 5 |
TRACE | 6 |
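For example, to raise both log levels to DEBUG on an existing release without changing any other values (the release name matches the install examples above):
helm upgrade ziti-edge-tunnel openziti/ziti-edge-tunnel \
  --reuse-values \
  --set log.zitiLevel=4 \
  --set log.tlsUVLevel=4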