Apply NetworkPolicies

NetworkPolicies restrict pod-to-pod communication to only the traffic that is required. This is a recommended security hardening step for production deployments, particularly those requiring STIG compliance.

note

NetworkPolicies require a CNI plugin that supports them (e.g., Calico, Cilium, Weave, Antrea, or Canal). The default k3s CNI (Flannel) does not enforce NetworkPolicies. If your CNI does not support them, the policies will be created but have no effect.
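
A quick way to check which CNI is running is to look at the pod names in kube-system. The helper below is an illustrative sketch (the name patterns are assumptions; adjust them for your distribution) — pipe in the output of `kubectl get pods -n kube-system -o name`:

```shell
# Infer the CNI from pod names in kube-system.
# Name patterns are illustrative; adjust for your distribution.
detect_cni() {
  case "$1" in
    *calico*)  echo "calico (enforces NetworkPolicies)" ;;
    *cilium*)  echo "cilium (enforces NetworkPolicies)" ;;
    *antrea*)  echo "antrea (enforces NetworkPolicies)" ;;
    *flannel*) echo "flannel (does NOT enforce NetworkPolicies)" ;;
    *)         echo "unknown" ;;
  esac
}

# Usage: detect_cni "$(kubectl get pods -n kube-system -o name)"
```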

Prerequisites

  • A running NetFoundry Self-Hosted installation with the support and ziti namespaces
  • A Kubernetes CNI that supports NetworkPolicies
  • kubectl access to the cluster

Automated deployment

The quickstart installer can apply NetworkPolicies automatically when run with the -H (hardened) flag on a BYO cluster (non-k3s):

CTRL_ADDR=<your-hostname> bash quickstart.sh -y -H

The installer will:

  1. Detect your CNI plugin and warn if NetworkPolicy support is not confirmed
  2. Apply default-deny ingress policies with allow-list rules for both namespaces
  3. Validate connectivity between components after applying
  4. Automatically roll back if connectivity checks fail
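
The apply/validate/rollback flow in steps 2–4 can be sketched as a small shell helper. This is an illustration of the pattern, not the installer's actual implementation; in practice the three arguments would be `kubectl apply -f ...`, a connectivity probe, and `kubectl delete -f ...`:

```shell
# Apply, validate, and roll back on failure.
# $1: apply command  $2: connectivity check  $3: rollback command
apply_with_rollback() {
  apply="$1"; check="$2"; rollback="$3"
  eval "$apply" || return 1      # apply the policies
  if ! eval "$check"; then       # validate connectivity
    eval "$rollback"             # roll back on failure
    return 1
  fi
  return 0
}
```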

Manual deployment

To apply NetworkPolicies on an existing installation without re-running the installer:

./installers/network-policies.sh

Or apply the manifests directly:

kubectl apply -f installers/hardened/support-networkpolicies.yaml
kubectl apply -f installers/hardened/ziti-networkpolicies.yaml

What gets applied

Support namespace

| Policy | From | To | Port |
| --- | --- | --- | --- |
| Default deny | Any | All pods | All (denied) |
| Logstash → Elasticsearch | logstash | elasticsearch | 9200 |
| Grafana → Elasticsearch | grafana | elasticsearch | 9200 |
| Kibana → Elasticsearch | kibana | elasticsearch | 9200 |
| ES inter-node | elasticsearch | elasticsearch | 9200, 9300 |
| ECK operator → Elasticsearch | elastic-system namespace | elasticsearch | All |
| ECK operator → Kibana | elastic-system namespace | kibana | All |
| Logstash → RabbitMQ | logstash | rabbitmq | 5672 |
| Ziti → RabbitMQ | ziti namespace | rabbitmq | 5672 |
| RabbitMQ → Logstash | rabbitmq | logstash | 5010 |

Ziti namespace

| Policy | From | To | Port |
| --- | --- | --- | --- |
| Default deny | Any | All pods | All (denied) |
| Controller ingress | Any | ziti-controller | 1280 |
| Router → Controller | ziti-router | ziti-controller | 6262 |
| Router edge ingress | Any | ziti-router | 3022 |
| cert-manager | cert-manager namespace | All pods | All |
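
As an illustration, the Router → Controller rule above corresponds to a policy shaped roughly like this. The resource name and label selectors here are assumptions; the shipped manifests may use different names and labels:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-router-to-controller   # illustrative name
  namespace: ziti
spec:
  podSelector:
    matchLabels:
      app: ziti-controller           # assumed label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ziti-router       # assumed label
      ports:
        - protocol: TCP
          port: 6262
```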

Customizing policies

The policy manifests are located at:

  • installers/hardened/support-networkpolicies.yaml
  • installers/hardened/ziti-networkpolicies.yaml

To add custom rules (for example, allowing external access to Kibana or Grafana), create additional NetworkPolicy resources in the appropriate namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-grafana-ingress
  namespace: support
spec:
  podSelector:
    matchLabels:
      app: grafana
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 3000
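
Note that this example has no `from` clause, so it admits traffic to port 3000 from any source that can reach the pod. To narrow it, for example to a hypothetical `monitoring` namespace, add a `from` selector to the ingress rule (the `kubernetes.io/metadata.name` label is set automatically on namespaces by Kubernetes 1.21+):

```yaml
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring   # hypothetical namespace
      ports:
        - protocol: TCP
          port: 3000
```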

Rolling back

To remove all NetworkPolicies and restore unrestricted communication:

kubectl delete -f installers/hardened/support-networkpolicies.yaml
kubectl delete -f installers/hardened/ziti-networkpolicies.yaml

Troubleshooting

If pods cannot communicate after applying policies, check which policies are active:

kubectl get networkpolicies -n support
kubectl get networkpolicies -n ziti

Verify that your CNI is actually enforcing policies. A common issue is that Flannel (default k3s CNI) accepts NetworkPolicy resources without enforcing them. To confirm enforcement, check whether the default-deny policy actually blocks traffic that is not explicitly allowed.
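
For example, with the default-deny policy applied, you can launch a throwaway pod and attempt a connection that no allow rule covers. The service name and port below assume the Elasticsearch service in the support namespace; adjust for your environment:

```shell
# If the CNI enforces the default-deny policy, this request should time out;
# if it returns a response, policies are not being enforced.
kubectl run netpol-probe -n support --rm -i --restart=Never \
  --image=busybox -- wget -qO- -T 5 http://elasticsearch:9200
```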

To re-run the automated deployment with connectivity validation:

./installers/network-policies.sh

The script will apply policies and test connectivity. If checks fail, it will roll back automatically and print instructions for manual investigation.