# Support stack overview
The NetFoundry support stack gives you observability and troubleshooting capabilities by collecting telemetry and events from across your network. It requires a private access token from NetFoundry and deploys via a proprietary Helm chart. Components are pre-configured to work together, providing telemetry dashboards and searchable logs out of the box.
## Stack components
- Elasticsearch: Stores and indexes telemetry and log data, and exposes an API for querying it.
- Logstash: Processes data from RabbitMQ and writes it to Elasticsearch. Can be extended to stream metrics to additional systems.
- Kibana: Web UI for searching and browsing raw log and telemetry data in Elasticsearch. Ships pre-configured with organized data sources.
- Grafana: Industry-standard dashboard tool for viewing and analyzing metrics. Ships pre-configured with all data sources and the dashboards used internally at NetFoundry.
- RabbitMQ: Buffers network metrics and events before Logstash processes them. You can add additional queues to stream metrics to your own systems.
- Beats agents: Micro-containers deployed as a DaemonSet on all Kubernetes nodes. They collect logs and metrics from all pods in the `ziti` and `support` namespaces and ship them to Elasticsearch.
- OpenZiti tunnelers (`ziti-edge-tunnel`, optional): By default, none of the support tools are exposed externally. You control how they're exposed, but we recommend accessing them over the OpenZiti network using an OpenZiti tunneler. This eliminates extra open ports and satisfies most compliance requirements.
## Resource requirements
The support stack requires a minimum of 4 CPU cores available in the cluster. The installer checks available cores before installation and skips the support stack if this threshold is not met. All other OpenZiti components still install normally.
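The gate itself is simple; a minimal sketch of the decision (illustrative only — the function name is hypothetical and the real installer is not Python):

```python
# Sketch of the installer's CPU gate. The support stack is skipped when
# the cluster has fewer than 4 cores; other OpenZiti components still
# install either way. Names here are hypothetical, for illustration.
MIN_SUPPORT_STACK_CORES = 4

def should_install_support_stack(available_cores: int) -> bool:
    """Return True only when the cluster meets the 4-core minimum."""
    return available_cores >= MIN_SUPPORT_STACK_CORES

print(should_install_support_stack(2))  # False: support stack is skipped
print(should_install_support_stack(8))  # True
```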
Storage and index lifecycle defaults vary based on deployment type:
| Setting | K3s (single-node) | Multi-node cluster |
|---|---|---|
| Elasticsearch storage | 30Gi | 100Gi |
| Index rollover size | 1GB | 3GB |
| Index rollover age | 1d | 3d |
| Max index age | 7d | 7d |
| RabbitMQ storage | 3Gi | 3Gi |
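To sanity-check whether these defaults fit your telemetry volume, a rough sizing sketch (the helper is hypothetical; it ignores replicas and on-disk overhead):

```python
import math

# Back-of-envelope estimate: data retained on disk is roughly the daily
# ingest multiplied by the retention window, rounded up to whole
# rollover-sized indices. Illustrative only -- ignores replica copies
# and Elasticsearch storage overhead.
def retained_gb(daily_ingest_gb: float,
                rollover_size_gb: float,
                max_age_days: int) -> float:
    total = daily_ingest_gb * max_age_days
    return math.ceil(total / rollover_size_gb) * rollover_size_gb

# e.g. ~2 GB/day on a K3s install (1 GB rollover, 7d max index age):
print(retained_gb(2.0, 1.0, 7))  # 14.0 -> comfortably within the 30Gi volume
```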
## Data flow
Telemetry and event data flows through the support stack as follows:
- The controller publishes events and metrics via AMQP to RabbitMQ.
- Logstash consumes messages from RabbitMQ, processes them, and writes to Elasticsearch.
- Beats agents collect container logs and metrics from all pods in the `ziti` and `support` namespaces and ship them to Elasticsearch.
- Grafana and Kibana read from Elasticsearch to provide dashboards and searchable log views.
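The middle hop of this flow can be pictured as a Logstash pipeline along these lines (an illustrative sketch, not the chart's actual pipeline — the hostnames, queue name, and index pattern are assumptions):

```conf
# Illustrative Logstash pipeline: consume events from RabbitMQ and
# write them to Elasticsearch. Hostnames, queue, and index names are
# assumptions, not the values shipped with the chart.
input {
  rabbitmq {
    host     => "rabbitmq.support.svc"
    queue    => "ziti-events"
    user     => "logstash"
    password => "${RABBITMQ_PASSWORD}"
  }
}
output {
  elasticsearch {
    hosts    => ["https://elasticsearch-es-http.support.svc:9200"]
    index    => "ziti-events-%{+YYYY.MM.dd}"
    user     => "elastic"
    password => "${ELASTIC_PASSWORD}"
  }
}
```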
## Configuration
The installer generates a `support-values.yml` file with default settings based on your deployment type (K3s or production). If the file already exists, it is left unchanged. Key settings:
```yaml
elasticsearch:
  node_count: 1
  node_storage: 100Gi # 30Gi for K3s
  max_index_age: 7d
  rollover_index_age: 3d # 1d for K3s
  rollover_index_size: 3GB # 1GB for K3s
rabbitmq:
  storage: 3Gi
```
- `node_count`: Number of Elasticsearch nodes. Increase for production high-availability.
- `node_storage`: Persistent volume size per Elasticsearch node.
- `max_index_age`: Maximum age before an index is deleted by the ILM policy.
- `rollover_index_age` / `rollover_index_size`: Triggers for creating a new index when the current one reaches the specified age or size.
- `rabbitmq.storage`: Persistent volume size for the RabbitMQ message buffer.
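The rollover and retention settings map onto a standard Elasticsearch ILM policy. A sketch of what such a policy could look like with the multi-node defaults (the exact policy generated by the installer may differ):

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "3gb", "max_age": "3d" }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

The hot phase rolls the write index over at whichever trigger fires first; the delete phase removes indices once they pass the maximum age.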
To apply configuration changes after installation:
```shell
helm upgrade --install support ./helm-charts/support/ --values support-values.yml -n support
```
## External Elasticsearch
If you already have an Elasticsearch cluster, you can connect the support stack to it instead of deploying one via ECK. This reduces resource usage and avoids running a second Elasticsearch instance.
Kibana is automatically disabled in external mode because the ECK Kibana CR requires an ECK-managed Elasticsearch instance. Use your existing Kibana or access Elasticsearch directly.
### Installation
Set the following environment variables before running the quickstart:
```shell
export EXTERNAL_ELASTICSEARCH=true
export EXTERNAL_ES_URL="https://my-es.example.com:9200"
export EXTERNAL_ES_USERNAME="elastic"
export EXTERNAL_ES_PASSWORD="YOUR_PASSWORD"
```
Then run the quickstart as normal:
```shell
nf-quickstart
```
The installer automatically:
- Skips the ECK operator and CRD installation
- Creates a Kubernetes secret (`external-es-creds`) with your credentials
- Generates a `support-values.yml` configured for external Elasticsearch
- Sets up ILM policies and index templates on your external cluster
If your external Elasticsearch uses a publicly-trusted certificate (e.g., Elastic Cloud), no additional configuration is needed. If it uses a private CA certificate, also set:
```shell
export EXTERNAL_ES_CA_FILE="/path/to/ca.crt"
```
The installer creates the CA secret automatically from this file.
### What changes in external mode
- The ECK operator and CRDs are not installed.
- No Elasticsearch or Kibana custom resources are created.
- Logstash, Grafana, and ILM jobs connect directly to your external Elasticsearch.
- Uninstall skips ECK cleanup automatically.
### Switching to external Elasticsearch after installation
If you already have a default installation and want to switch to an external Elasticsearch cluster, update your `support-values.yml`:
```yaml
elasticsearch:
  enabled: false
  external:
    url: "https://my-es.example.com:9200"
    credentialSecret: "my-es-creds"
    usernameKey: "username" # default
    passwordKey: "password" # default
  # Set to "" for publicly-trusted certificates
  # Set to a secret name if your ES uses a private CA (see below)
  tlsCaSecret: ""
```
Create the credential secret:
```shell
kubectl create secret generic my-es-creds \
  --from-literal=username=elastic \
  --from-literal=password='YOUR_PASSWORD' \
  -n support
```
Then upgrade:
```shell
helm upgrade --install support ./helm-charts/support/ --values support-values.yml -n support
```
If your external Elasticsearch uses a private CA, create a CA secret and set `tlsCaSecret` to its name:

```shell
kubectl create secret generic my-es-ca \
  --from-file=ca.crt=/path/to/ca.crt \
  -n support
```

```yaml
elasticsearch:
  tlsCaSecret: "my-es-ca"
```
After upgrading, you can remove the ECK-managed Elasticsearch and Kibana resources and uninstall the ECK operator if it is no longer needed.
## Accessing support tools
By default, none of the support tools are exposed externally. The installer creates Ziti services for each tool, making them accessible over the OpenZiti network at the following intercept addresses:
| Tool | Intercept address | Port |
|---|---|---|
| Grafana | grafana.ziti | 80 (HTTP) |
| Kibana | kibana.ziti | 443 (HTTPS) |
| Elasticsearch | elasticsearch.ziti | 443 (HTTPS) |
The installer generates a `support-user.jwt` enrollment token for client access. To connect:
- Enroll the `support-user.jwt` token with an OpenZiti client (Desktop Edge, mobile tunneler, or CLI tunneler).
- Once connected, access the tools at the intercept addresses above (e.g., `http://grafana.ziti` in your browser).
This approach eliminates the need for any externally exposed ports and satisfies most compliance requirements.
### Default credentials
| Tool | Username | Password |
|---|---|---|
| Grafana | admin | admin (you'll be prompted to change this on first login) |
| Elasticsearch / Kibana | elastic | Auto-generated during installation |
To retrieve the Elasticsearch / Kibana password:
```shell
kubectl get secrets "elasticsearch-es-elastic-user" -n support \
  -o go-template='{{index .data "elastic" | base64decode}}'
```
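The go-template pipes the secret's `elastic` key through `base64decode` because Kubernetes stores Secret data base64-encoded. The same decode in Python (the sample value is made up):

```python
import base64

# Kubernetes Secret "data" values are base64-encoded; the go-template's
# base64decode does the equivalent of this. The sample value is made up.
secret_data = {"elastic": base64.b64encode(b"s3cr3t").decode()}
password = base64.b64decode(secret_data["elastic"]).decode()
print(password)  # s3cr3t
```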
## Related tools
- Database snapshots: The quickstart automatically deploys a scheduled job that captures the OpenZiti controller database daily. See Back up your installation for configuration and restore instructions.
- Support bundle: Run `nf-support-bundle` to collect recent logs and stack dumps from the controller and router into a zip file for sending to NetFoundry support. See Collect diagnostics for NetFoundry support for usage and optional flags.