Why Jenkins?

Jenkins is a widely adopted automation tool for building software, with a broad array of plugins and integrations with key DevOps tools like Docker and Kubernetes. Here at NetFoundry, we started out using Jenkins to automate our builds, run tests, and promote releases. Eventually, we expanded to use CI as a service too: some builds needed to happen in GitHub or Bitbucket. Other jobs were not as readily migrated to a new CI system, and why bother? It made more sense to keep running those jobs in Jenkins, and so a new problem was introduced: how do we trigger Jenkins jobs from an external system like GitHub?

In our case, the Jenkins jobs needed certain parameters not available in GitHub’s built-in webhooks. We created a custom Jenkins-style webhook generator that we could use in GitHub, but the core problem remained: anyone could still “see” our Jenkins server on the internet. This exposure was intentional, but it could have become a serious problem if our server had been the target of a network-launched attack. One layer of security didn’t seem like enough protection for something so valuable. We knew we wanted to use our home-grown networking framework OpenZiti to solve this kind of problem, so we used the OpenZiti SDK for Node.js to create a GitHub Action that sends a webhook securely via OpenZiti instead of over the open internet.

This diagram shows a GitHub event triggering a job on an exposed, visible Jenkins server

Checkmate: Jenkins is a High-Value Target

I would be surprised if a resourceful attacker failed to deliver a damaging blow after finding a foothold on a Jenkins server. If this were chess, Jenkins would be the queen. One of the reasons the target value is so high is that Jenkins tends to store powerful credentials needed to deploy cloud infrastructure and applications automatically. A compromised Jenkins server is also an opportunity to introduce malicious code into a build. If successful, that infected build could be distributed through the trusted supply chain.

The tools for attacking Jenkins have become more sophisticated and accessible, making Jenkins a softer target. Popularity is a double-edged sword. The rule of thumb for defensive security is to balance the scales with target value on one side and defensive measures on the other. As our Head of Security Mike Gorman likes to say, “You don’t need a $100 lock for a $10 bike.” We knew that losing control of Jenkins would be a severe incident, so we got to work.


A Quick Inventory of the Problems and Alternatives

We knew we’d need to solve several problems before taking the Jenkins server off the public internet. 

How will users access the UI?

How will administrators access SSH?

How will we receive webhooks from external CI systems?

In times past, I’ve used tools like SSH tunnels, VPNs, virtual desktops, and webhook relays to solve these kinds of problems. It’s simpler to make the server intentionally public and trust that the login method will prevent unauthorized use, but that’s just one layer of security. I’m guessing this partially explains why Mikail Tunç found 25,000 Jenkins servers directly exposed to the internet during his 2018 Shodan survey.

Traditional solutions like VPNs and relays incur hidden, ongoing costs from the complexity and rigidity of a configuration that is not tightly coupled to the application’s needs. This slows down work that relies on those solutions because things often break when changes are made. The server is separate from the security, so they always have to be configured separately, which is inherently fragile. And despite all the care taken to hide a server from the internet, the core problem remains: as long as the application server is listening on any network port, it is vulnerable to attack.

I’ve personally spent my fair share of time troubleshooting and managing bolt-on security measures like those VPNs. I came to regard them as a necessary evil. Whenever a problem arose with an IPSec tunnel, it was always my top priority because of the high costs associated with a loss of availability. It would have been so much better if those solutions didn’t catch on fire in the first place, and ideal if they were not even necessary.

In the next section, I’ll explain how these problems are solved by combining a strong identity with a service policy. This allows us to keep servers invisible on the network unless a caller has permission to connect.

Why use an Overlay? Identity vs Network Perimeter

OpenZiti is an example of a networking overlay that uses identities instead of a network perimeter as the basis for access control. So what is identity-based control versus network perimeter control? Metaphorically, basing security on a private network perimeter is like saying, “You’re allowed to knock on my front door if you can look up my address in a public directory and walk into my yard,” whereas requiring an identity for an invisible service is like saying, “You’re allowed to knock on my front door if I sent you the secret map to my neighborhood and you have your own unique PIN for the front gate.” The identity that permits you to knock on the door is separate from the login credential, which is like the key that lets you open the front door.

A familiar example of network perimeter control would be limiting access to the Jenkins UI login prompt to private user network addresses on a VPN or the VPC’s private subnets attached to an SSH bastion. An example of an identity control is that your device was issued a certificate that allows you to access the Jenkins server’s login prompt from anywhere. In other words, it’s who you are (your verifiable identity), not where you are (your address with respect to the perimeter).

How Do Users Log in to Invisible Jenkins?

Everyone who needs to log in to Jenkins runs an OpenZiti tunneler on their computer. Each tunneler has an identity installed on the same computer. The OpenZiti network administrator issues these identities via email or chat in the form of one-time enrollment tokens and assigns role attributes that match up with a service policy. The service policies grant permission for those tunneler identities to use Jenkins. We control the service policies through a web console or CLI. As soon as the policy grants access to an enrolled identity, the tunneler automatically configures a private domain name and IP route to the Jenkins server on that user’s computer.
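
The identity-and-policy flow described above can be sketched with the ziti CLI. The identity name and role attributes below are illustrative, and exact subcommands and flags vary between OpenZiti versions, so treat this as a sketch rather than a copy-paste recipe:

```shell
# Create an identity for a user, tagging it with a role attribute and
# writing a one-time enrollment token (JWT) to share with them.
ziti edge create identity alice -a jenkins-users -o alice.jwt

# The user enrolls the token on their own machine, producing the
# identity file their tunneler will load.
ziti edge enroll alice.jwt

# Grant every identity tagged #jenkins-users permission to dial
# every service tagged #jenkins.
ziti edge create service-policy jenkins-dial Dial \
  --identity-roles '#jenkins-users' --service-roles '#jenkins'
```

Because the policy matches on role attributes rather than individual identities, onboarding the next user is just another `create identity` with the same tag.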

These new controls completely replaced the global DNS and public IP address of the Jenkins server. The Jenkins server stopped listening on an attached network interface address and is instead exposed only to other processes running on the same device through the loopback interface (e.g., The OpenZiti “tunneler” running on the Jenkins server acts as a gatekeeper, allowing connections only from OpenZiti identities that are authorized by an OpenZiti service policy.
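
Pinning Jenkins to the loopback interface is typically done through the service’s startup arguments. Here’s a hedged example for a Debian-style package install; the file path and variable name vary by distro and install method:

```shell
# /etc/default/jenkins (path and variable name vary by install method)
# Bind the built-in web server to loopback only, so nothing off-box
# can reach it directly; the OpenZiti tunneler on the same host
# proxies authorized overlay traffic to
JENKINS_ARGS="--httpListenAddress= --httpPort=8080"
```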

In an OpenZiti network, your servers and clients are secure network peers and you control the direction of access and which applications are allowed.

How Does Invisible Jenkins Receive Webhooks From GitHub?

Here are two examples of Jenkins jobs in our environment. Both are triggered by a GitHub webhook that is sent via OpenZiti instead of over the open internet. I used the OpenZiti Webhook “Action” from the GitHub Actions Marketplace in both cases. It is a Node.js application that uses the OpenZiti SDK to reach the Jenkins API in the same way the OpenZiti tunnelers on our workstations reach the Jenkins UI.

This diagram shows a GitHub event triggering a job on an invisible Jenkins server via OpenZiti

The Simple Jenkins Job: Just Poll GitHub For Changes

The simplest example of the OpenZiti webhook is a Jenkins job that needs to poll a GitHub repo for changes. A push to GitHub triggers a workflow that sends the webhook to Jenkins via OpenZiti, and the webhook in turn triggers the Jenkins job to poll the repo. This is a more secure alternative to sending a GitHub webhook to Jenkins over the open internet.

Jenkins can poll a Git remote on a schedule, but waiting for a scheduled job to run also means waiting to learn whether the build will succeed. To surface that build pain as early as possible, I needed the Jenkins job to run as soon as the GitHub event occurred. We were already using a custom Jenkins-style webhook generator in GitHub, so I swapped in the OpenZiti webhook Action in its place. The webhook payload and context sent by the OpenZiti webhook Action are the same as GitHub’s built-in webhook generator; the difference is that the Action runs as a step in a GitHub Actions workflow instead of being configured in the repository’s webhook settings. Here’s an example workflow step that sends a webhook via OpenZiti. I’ll explain the three inputs right after the example.

name: ziti-webhook-action
on: [ push ]
jobs:
  webhook:
    runs-on: ubuntu-latest
    name: OpenZiti Webhook Action
    steps:
      - uses: openziti/ziti-webhook-action@v2
        with:
          ziti-id: ${{ secrets.ZITI_JENKINS_IDENTITY }}
          webhook-url: 'https://someapp.ziti/plugins/github/webhook'
          webhook-secret: ${{ secrets.ZITI_JENKINS_WEBHOOK_SECRET }}

This is a GitHub Actions workflow that runs on Git push events. The workflow will send the full GitHub event context to the specified URL via OpenZiti instead of the internet. The setup to enable this is as follows:

  1. Create an identity each for:
    1. Jenkins server
    2. Webhook workflow
  2. Create a service that defines the URL to receive the webhook.
  3. Install an OpenZiti tunneler on the Jenkins host with the first identity you created for Jenkins.
  4. Create Actions secrets for the repo with the three required values:
    1. ziti-id: the JSON of the second enrolled identity you created for this workflow.
    2. webhook-url: The URL of Jenkins including the service’s OpenZiti domain name, e.g. “someapp.ziti”
    3. webhook-secret: A random string that’s only saved in the Actions secret and is used by the webhook sender to compute a data integrity hash for the payload data.

The Slightly More Complex Jenkins Job: Adding in Build Parameters

This job required a little extra setup for Jenkins and GitHub. The result is that I can send custom Jenkins job build parameters inside the same GitHub-style webhook via OpenZiti.

  1. A Git push event triggers the GitHub workflow.
  2. The workflow has a step that uses the OpenZiti webhook.
  3. The OpenZiti webhook accepts “inputs”. We send the Git branch name, the committer, and the computed version as extra data fields to the webhook Action as workflow inputs.
  4. The webhook uses OpenZiti to send the combined GitHub context and extra inputs to Jenkins.
  5. The webhook arrives at the Jenkins server on the loopback interface ( via the OpenZiti tunneler and is parsed by the “generic webhook trigger” plugin for Jenkins.
  6. The JSON fields are parsed as Jenkins job parameters with JSONPath by the Jenkins plugin.
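
Conceptually, step 3 means the Action merges extra key=value lines from its data input into the GitHub event payload before sending, so the Jenkins plugin can extract them like any other JSON field. Here’s a hypothetical sketch of that merge (the real Action’s internals may differ):

```javascript
// Hypothetical sketch: merge "key=value" lines from the Action's `data`
// input into the GitHub event payload before it is sent to Jenkins.
function mergeData(eventPayload, dataInput) {
  const extra = {};
  for (const line of dataInput.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed) continue;            // skip blank lines
    const idx = trimmed.indexOf('=');
    if (idx === -1) continue;          // ignore malformed lines
    extra[trimmed.slice(0, idx)] = trimmed.slice(idx + 1);
  }
  // Extra fields ride alongside the standard GitHub context fields.
  return { ...eventPayload, ...extra };
}
```

On the Jenkins side, the generic webhook trigger plugin would then resolve a JSONPath expression like $['ziti-version'] against this merged document to populate a build parameter.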

The “generic webhook trigger” plugin adds a new REST endpoint to the Jenkins server that functions similarly to the Jenkins server’s built-in webhook API. I didn’t modify this Jenkins plugin, but if I had imported the OpenZiti SDK into the plugin, it would have simplified my solution even further by eliminating the need for a running OpenZiti tunneler on the Jenkins server. Here’s a sample workflow that has two triggers: workflow_call and workflow_dispatch. The ziti-version input to these triggers is copied to the data input of the webhook Action along with the Git branch name from the GitHub context.

name: Jenkins Smoketest
on:
  workflow_call:
    inputs:
      ziti-version:
        type: string
        required: true
  workflow_dispatch:
    inputs:
      ziti-version:
        type: string
        required: false

jobs:
  smoketest:
    runs-on: ubuntu-latest
    name: Jenkins On-Demand Smoketest
    steps:
      - name: Send Webhook to Jenkins
        uses: openziti/ziti-webhook-action@v2
        with:
          ziti-id: ${{ secrets.ZITI_JENKINS_IDENTITY }}
          webhook-url: ${{ secrets.ZITI_JENKINS_WEBHOOK_URL }}
          webhook-secret: ${{ secrets.ZITI_JENKINS_WEBHOOK_SECRET }}
          data: |
            ziti-version=${{ inputs.ziti-version || github.event.inputs.ziti-version }}
            branch=${{ github.head_ref || github.ref }}
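
Because the smoketest declares a workflow_call trigger, another workflow in the same repository can invoke it as a reusable workflow and supply ziti-version. This is a sketch; the caller’s name and the smoketest workflow’s file path are assumptions:

```yaml
# Hypothetical caller workflow; assumes the smoketest workflow above is
# saved as .github/workflows/smoketest.yml in the same repository.
name: Release
on: [ push ]
jobs:
  smoketest:
    uses: ./.github/workflows/smoketest.yml
    with:
      ziti-version: '1.2.3'
    secrets: inherit   # pass the ZITI_* repository secrets through
```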

Taking it to the Next Level

I described a solution that lets existing Jenkins users continue using Jenkins after we switched off the server’s internet visibility. I also explained the configuration in GitHub and Jenkins for two jobs that are both triggered by a GitHub event.

What I’ve done here for Jenkins is highly similar to what we did for our SSH bastions. As you can see, this is a familiar pattern here at NetFoundry! No more mystery connections, no more presumed trust for any address or zone, including private addresses. This problem and the solution are equally applicable to GitHub or GitLab. There is already an OpenZiti webhook plugin available for both!

The OpenZiti maintainers are reachable at openziti.org. You can sign up for a free-forever NetFoundry Teams plan to get up and running instantly with a hosted, managed, opinionated deployment of OpenZiti: nfconsole.io/signup.

Please give our GitHub repo a GOLD STAR to let us know you’re glad OpenZiti is a thing, and let us know what you’re building in Discourse so we can brag on it and help out!

Developer Advocate at NetFoundry | Website

Ken is crafting developer experiences with the NetFoundry API and OpenZiti. He is enthusiastic about Linux, security, and building things with free and open source software and hardware. You can find him in his native habitat talking and clowning around at tiny tech events all over.
