Using NetFoundry for simple, secure AI
While NetFoundry securely delivers the most sophisticated AI use cases, such as AI and MCP in healthcare, it can also simply and securely deliver more common ones.
In minutes, use NetFoundry to get a private AI connection without the hassle of VPNs or dependencies on IP addresses and NAT. Here are some of the most common examples of using NetFoundry to securely deliver important AI use cases:
Example 1: Accessing a Local LLM from Anywhere
- Context: A developer is running a powerful LLM (e.g., Llama 3 via Ollama) on their desktop computer at home. They want to experiment with it and access its API from their laptop while traveling.
- Implementation: They install NetFoundry agents on both the home desktop and the laptop. Both devices are now part of a private NetFoundry overlay network with secure identities. From the laptop, they can access the Ollama API simply by using the desktop’s identity.
- Result: Secure, private access to a home-lab LLM without any complex firewall, NAT, port forwarding, or dynamic DNS configuration.
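From the laptop's point of view, the overlay makes the remote Ollama instance look like any reachable host. A minimal sketch of the client side, assuming the desktop's overlay identity resolves as the hypothetical hostname `home-desktop` (Ollama's real `/api/generate` endpoint and default port 11434 are used; the hostname is the assumption):

```python
import json
import urllib.request

def ollama_generate(host: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a request against Ollama's /api/generate endpoint.

    `host` is the overlay hostname of the home desktop (hypothetical
    here); Ollama listens on port 11434 by default.
    """
    url = f"http://{host}:11434/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

# From the traveling laptop, the call looks like any local API call:
req = ollama_generate("home-desktop", "llama3", "Summarize zero trust in one line.")
# resp = urllib.request.urlopen(req)  # succeeds once both agents are enrolled
```

Note the application code carries no VPN logic, public IP, or port-forwarding assumptions; the overlay handles reachability.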
Example 2: Securing a Self-Hosted LLM Web UI
- Context: A team sets up a web interface like ollama-webui or Chatbot-UI to interact with their internal LLM. They don’t want this UI to be accessible from the public internet, even with a password.
- Implementation: They install a NetFoundry agent on the server hosting the web UI and on each team member’s laptop and mobile device. UI access is now seamless for the team, and their other traffic is unaffected.
- Result: The UI is completely unreachable from the Internet or any underlay network, which prevents credential stuffing attacks and unauthorized access. Optionally, NetFoundry can further simplify the deployment by handling certificates and encryption.
Example 3: Collaborative AI/ML Development Environment
- Context: A small research team is working on a project. The training dataset resides on a NAS in their office, the Jupyter notebooks run on their individual laptops, and model training is done on a powerful GPU instance at a cloud provider like Vast.ai or Lambda Labs.
- Implementation: They install NetFoundry on the NAS, their laptops, and the cloud GPU instance. All resources can now communicate over a secure, private network using NetFoundry identities, which are independent of IP addresses and networks. The Jupyter notebooks can read the dataset directly from the NAS, and code can be seamlessly pushed to the GPU instance for training runs.
- Result: The team gets a secure, unified development environment across hybrid infrastructure without the overhead of setting up and managing VPNs, and without depending on IP addresses for identity or routing.
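The "push code to the GPU instance" step above can use ordinary tooling, because overlay identities resolve like hostnames. A sketch, assuming `gpu-box` is the GPU instance's hypothetical overlay identity (rsync itself is standard; only the hostname is an assumption):

```python
import shlex

def push_code_cmd(src_dir: str, gpu_host: str, dest_dir: str) -> list[str]:
    """Assemble an rsync invocation that pushes local code to the GPU
    instance over the overlay. Because the overlay resolves the identity
    name, no public IP, VPN route, or SSH bastion is needed.
    """
    return ["rsync", "-az", "--delete", src_dir, f"{gpu_host}:{dest_dir}"]

cmd = push_code_cmd("./training/", "gpu-box", "~/project/")
print(shlex.join(cmd))  # ready to hand to subprocess.run(cmd)
```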
Example 4: Protecting a Public-Facing AI Application with Zero Trust Access
- Context: A company has built a custom “Ask our Docs” AI chatbot deployed on a server. They want to make it available to all employees, but not the general public, and they don’t want the server to be reachable from the Internet or underlay networks.
- Implementation: They point a public subdomain to NetFoundry Frontdoor. NetFoundry policy requires any user visiting the URL to first authenticate with the corporate identity provider.
- Result: Only authenticated employees can access the AI tool. The NetFoundry cloud protects the app from unauthorized use and bots.
Example 5: Hiding a Self-Hosted Inference API with NetFoundry Frontdoor
- Context: A research team has a fine-tuned model running on a server in their office. They don’t have a static IP and their firewall blocks all inbound traffic. They need to provide API access to a partner.
- Implementation: They install the NetFoundry Frontdoor daemon on the server. The daemon creates a secure, persistent, outbound-only tunnel from their server to the NetFoundry overlay, specifically for this API.
- Result: The partner can send API requests to the public hostname, and NetFoundry securely tunnels the traffic to the office server. The office firewall remains completely locked down, with no inbound access.
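From the partner's side, nothing about the tunnel is visible: they simply call the public hostname. A sketch of such a client, where the hostname `inference.example.com`, the `/v1/predict` path, the payload shape, and the bearer-token auth are all assumptions about the team's API, not NetFoundry requirements:

```python
import json
import urllib.request

API_BASE = "https://inference.example.com"  # hypothetical Frontdoor hostname

def build_inference_request(api_key: str, inputs: str) -> urllib.request.Request:
    """Partner-side request builder. Only the public hostname is exposed;
    Frontdoor relays the call over the outbound-only tunnel to the
    office server behind the closed firewall."""
    body = json.dumps({"inputs": inputs}).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/predict",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_inference_request("partner-key-123", "classify this support ticket")
# urllib.request.urlopen(req) would send it once the daemon is running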
Example 6: Developing a Chatbot with a Local LLM and Cloud Webhooks
- Context: A developer is building a Microsoft Teams bot that gets its intelligence from a locally running instance of Ollama. To receive messages from Teams, their local application needs a publicly accessible HTTPS endpoint for webhook calls.
- Implementation: The developer runs the bot application on localhost. NetFoundry Frontdoor provides a public URL that securely tunnels traffic to the local application, while requiring NetFoundry-enabled IdP authentication (or another method of the team’s choosing). They paste this URL into the Teams developer portal.
- Result: The developer can easily test the full end-to-end flow of the AI bot in real time without deploying code to a cloud server.
Example 7: Live Demo of a Locally Run Gradio/Streamlit AI App
- Context: A data scientist has created an interactive AI application using Gradio to showcase a new image generation model. They want to show it to a colleague in a different office for immediate feedback.
- Implementation: They run the Gradio app locally, which starts a web server. NetFoundry restricts web server access to localhost and provides a public, authentication-protected URL.
- Result: Colleagues interact with the AI application live in their browsers while it runs entirely on the data scientist’s laptop. This is faster and easier than containerizing the app and deploying it to the cloud for a quick demo.
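The app-side change is small: bind Gradio to localhost only and let NetFoundry handle the shareable URL. A sketch with a placeholder model function (the `gr.Interface` and `launch` calls are standard Gradio; the model stub is hypothetical):

```python
def generate_image(prompt: str):
    """Placeholder for the real image-generation pipeline."""
    return f"[image for: {prompt}]"

def main():
    # Imported lazily so the inference logic above stays importable
    # even where Gradio is not installed.
    import gradio as gr

    demo = gr.Interface(fn=generate_image, inputs="text", outputs="text")
    # Bind to localhost only and skip Gradio's own share= tunnel;
    # NetFoundry supplies the public, authenticated URL instead.
    demo.launch(server_name="127.0.0.1", share=False)

# main() would start the demo server on the data scientist's laptop
```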
Example 8: Securing a Temporary Shared Endpoint with OAuth
- Context: A developer wants to share their local LLM-powered data analysis tool with a few trusted colleagues for a day or two.
- Implementation: They run the NetFoundry daemon locally on the machine hosting the tool and map a public URL to it.
- Result: When colleagues visit the URL, they must first authenticate with an OAuth provider. NetFoundry forwards the request only upon successful authentication.
To use NetFoundry to securely deliver your AI use cases, start here with a free trial.